
Friday, April 29, 2022

Prerelease Errors Yield Post-Release Errors

Graph of future uncertainty.

Software with a lot of prerelease errors will also have a lot of post-release errors. This is bad news for developers, but it is fully supported by empirical data*. The more errors you find in a product, the more you will still find. The best advice is to replace any component with a poor history. Don't throw good money after bad.

If your organization keeps impeccable records, then you could use Bayes' theorem to estimate the probable number of errors still left in a software component by comparing it to a larger population of similar components. More than likely you don't have that kind of data to draw from, so you can use this pessimistic heuristic:

However many errors were fixed in the component before release is how many more errors you can expect it to still have.

A more optimistic and, hopefully, more accurate approach would be to apply the above heuristic to a quarter-to-quarter or month-to-month timeline. I'm suggesting basic trend projection forecasting, so you could get much more creative if you wanted to. However, in my experience, businesses don't want to spend the time and effort needed to gather and process data for anything more complex. And there are concerns about employee satisfaction if you go down this rabbit hole, because eventually you'll start tying error rates to certain combinations of people on development teams. The slope gets slippery and ROI diminishes. K. I. S. S.

Back to trend projection forecasting: when your software reaches the point where your measurement period has zero errors, don't declare the software free of bugs. Instead, adjust your measurement period. For example, if no errors were found last month, it is time to switch to measuring quarters, and so on. This is useful when a stakeholder asks how many bugs are left in the software, and they will ask. You can say, "We don't expect to find any new errors next month, but it's probable that we'll find X new errors over the next quarter." If you are really on top of your game, you'll track the variance of errors from period to period; then you can provide a min-max range (see the graph above).
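
Here's a minimal sketch of that projection in Java. It's illustrative only: the class and method names are my own, and it assumes a simple linear trend over your per-period error counts, with one standard deviation of the historical counts as the min-max spread.

  import java.util.List;

  public class ErrorTrend {

      // Forecast next period by extending the average period-over-period change.
      static double forecastNext(List<Integer> counts) {
          int n = counts.size();
          double avgChange = (counts.get(n - 1) - counts.get(0)) / (double) (n - 1);
          return Math.max(0, counts.get(n - 1) + avgChange);
      }

      // Min-max range: the forecast plus or minus one standard deviation.
      static double[] forecastRange(List<Integer> counts) {
          double mean = counts.stream().mapToInt(Integer::intValue).average().orElse(0);
          double variance = counts.stream()
                  .mapToDouble(c -> (c - mean) * (c - mean))
                  .average().orElse(0);
          double forecast = forecastNext(counts);
          double stddev = Math.sqrt(variance);
          return new double[] { Math.max(0, forecast - stddev), forecast + stddev };
      }

      public static void main(String[] args) {
          List<Integer> monthlyErrors = List.of(12, 9, 7, 4, 3); // errors found per month
          double[] range = forecastRange(monthlyErrors);
          System.out.printf("Expect roughly %.0f to %.0f new errors next month%n", range[0], range[1]);
      }
  }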


* Software Defect Removal by Robert Dunn (1984).

Thursday, April 14, 2022

Fix Problems, Not Symptoms


When software fails, you have an obligation to fully understand the cause of the failure, not just to do a cursory analysis and apply a quick fix to what you think is the cause.

Suppose you are trying to trace the cause of a software failure. You have noticed that every time a specific component transmits a value, it is exactly twice the desired value. A quick and dirty solution is to divide the generated value by two just before it is transmitted. This solution is inappropriate because 1) it may not work for all cases, and 2) it leaves the program with what are essentially two errors that compensate for each other, rendering the program virtually unmaintainable in the future. An even worse quick and dirty solution is for the recipient to divide the value it receives by two before using it. This solution has all the problems of the first one, plus it causes every other component that invokes the faulty component to receive the wrong value. The correct solution is to examine the program, determine why the value is consistently doubled, and then fix it.
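
To make the anti-pattern concrete, here is an invented example in Java; the sensor component and its samples are hypothetical, not from any real system.

  public class Sensor {
      private final double sampleA;
      private final double sampleB;

      Sensor(double a, double b) { this.sampleA = a; this.sampleB = b; }

      // Buggy original: sums two samples without averaging, so every
      // transmitted value is exactly twice the desired one.
      double getValueBuggy() { return sampleA + sampleB; }

      // Wrong fix: compensating at the call site hides the defect and leaves
      // two errors that cancel each other out:
      //     double reading = sensor.getValueBuggy() / 2.0;  // why /2? nobody will remember
      // Right fix: repair the root cause inside the component itself.
      double getValue() { return (sampleA + sampleB) / 2.0; }

      public static void main(String[] args) {
          Sensor sensor = new Sensor(10.0, 12.0);
          System.out.println("buggy: " + sensor.getValueBuggy()); // 22.0
          System.out.println("fixed: " + sensor.getValue());      // 11.0
      }
  }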

Unfortunately, you are going to deal with software that contains both of the bad fixes described above. You'll know it when your coworkers warn you that making unrequested fixes might break other things. This is why it is so important to report issues and get sign-off from everyone involved before changing code.

Reference:

McConnell, S., Code Complete, Redmond, WA: Microsoft Press, 1993.

Tuesday, March 22, 2022

Principles of Distributed System Design

Three garage bays to represent a distributed system.

Every day software engineers face the task of designing new systems or maintaining existing ones. Whether the need to make those systems distributed stems from performance requirements or reliability requirements hardly matters. Distributed system design needs to be considered and broken into a limited number of principles so that the tradeoffs and costs can be adequately assessed.

Below are 10 principles of distributed system design that I think do a good job of summarizing and separating the problem. These are the principles that Amazon used when designing their S3 service (see the reference at the bottom).

▸ Decentralization: Use fully decentralized techniques to remove scaling bottlenecks and single points of failure.

▸ Asynchrony: The system makes progress under all circumstances.

▸ Autonomy: The system is designed such that individual components can make decisions based on local information.

▸ Local responsibility: Each individual component is responsible for achieving its consistency; this is never the burden of its peers.

▸ Controlled concurrency: Operations are designed such that no or limited concurrency control is required.

▸ Failure tolerant: The system considers the failure of components to be a normal mode of operation and continues operation with no or minimal interruption.

▸ Controlled parallelism: Abstractions used in the system are of such granularity that parallelism can be used to improve performance and robustness of recovery or the introduction of new nodes.

▸ Decompose into small, well-understood building blocks: Do not try to provide a single service that does everything for everyone, but instead build small components that can be used as building blocks for other services.

▸ Symmetry: Nodes in the system are identical in terms of functionality, and require no or minimal node-specific configuration to function.

▸ Simplicity: The system should be made as simple as possible, but no simpler.


Reference:

Amazon Web Services Launches

Saturday, March 05, 2022

If it ain't broke, don't fix it

A photo of the electronics inside of a microwave.

Of course the advice of "if it ain't broke, don't fix it" is applicable to many aspects of life, but it is particularly applicable to software. By its very name, software is considered malleable, easily modified. Don't be fooled into thinking that it is easy either to see or to repair a "break" in software.

Suppose you are maintaining a system. You are examining the source code of a component. You are either trying to enhance it or seeking the cause of an error. While examining it, you detect what you believe is another error. Do not try to "repair" it. The probability is very high that you will introduce an error, not fix one. Instead, file a change request. Hopefully, the configuration control and associated technical reviews will determine if it is an error and what priority its repair should be given.


Reference:
Bentley, J., More Programming Pearls, Reading, MA: Addison-Wesley, 1988.

Tuesday, March 01, 2022

Validation and Verification


Large software developments need as many checks and balances as possible to ensure a quality product. One proven technique is the use of an organization independent of the development team to conduct verification and validation (V&V). Verification is the process of examining each intermediate product of software development to ensure that it conforms to the previous product. For example, verification ensures that software requirements meet system requirements, that high-level software design satisfies all the software requirements (and none other), that algorithms satisfy the component's external specification, that code implements the algorithms, and so on. Validation is the process of examining each intermediate product of software development to ensure that it satisfies the original requirements.

You can think of V&V as a solution to the children's game of telephone. In telephone, a group of children form a chain and a specific oral message is whispered down the line. At the end, the last child tells what he or she heard, and it is rarely the same as the initial message. Verification would cause each child to ask the previous child, "Did you say x?" Validation would cause each child to ask the first child, "Did you say x?"

On a project, V&V should be planned early. It can be documented in the quality assurance plan or it can exist in a separate V&V plan. In either case, its procedures, players, actions, and results should be approved at roughly the same time the software requirements specification is approved.


Reference:

Wallace, D. and Fujii, R., "Software Verification and Validation: An Overview," IEEE Software, May 1989.

Sunday, February 13, 2022

Twitter Card Meta Tags

My girlfriend likes to say that I'm "such a fanboy" of Mary Roach. Maybe so. I was going to make a Twitter post referencing Mary's website and I noticed that there was no twitter card! đŸ™€

The Twitter Card Validator showing that maryroach.net doesn't have any twitter meta tags.

So, I'm going to use my tech skills and Mary Roach's website to [hopefully] do some good and provide a walkthrough on making a better website. Yeah, yeah, I know what you are thinking. Someone using blogspot can't talk about making a better website. My response to that is that it is an example of finding balance between cost, effort/time, and quality - the holy grail in software engineering.

I love the look of Mary's website: simple, clean design with easy-to-read source code. But I've noticed a couple of things that I'm confident she'll want to address:

  • No twitter card meta tags.
  • No favicon.

Twitter Card Meta Tags

Mary has been very active on Twitter lately. Or maybe she was always active and I've only recently noticed. Either way, it is clearly a social media platform that she cares about, and as such it would be really nice if, when a tweet includes her website URL, the appropriate twitter:card displayed.

To do this Mary will need to ask her website company, Coconut Moon, to add the following meta tags to each of her website pages:
  • <meta name="twitter:card" content="summary_large_image">
  • <meta name="twitter:creator" content="@mary_roach">
  • <meta name="twitter:description" content="Mary Roach, Author of Fuzz, Grunt, Packing for Mars, Stiff, Spook and Bonk.">
    • The content here should be different on every webpage. Ideally, it would be different from the meta description tag too. BTW, Mary, you're missing that tag too and you'll want it to improve your SEO score/ranking. This tag is an opportunity to make custom descriptions just for Twitter.
  • <meta name="twitter:title" content="Mary Roach, Author of Fuzz, Grunt, Packing for Mars, Stiff, Spook and Bonk">
    • Again, the title should be different on every webpage. Side note: Mary, I'm so sad to see that you didn't use an Oxford comma in the title of your index page... I've read Spook and Bonk but have yet to find a copy of Spook and Bonk. Before you say anything, yes, I have heard "Oxford Comma" by Vampire Weekend. I love their sound.
  • <meta name="twitter:image" content="https://maryroach.net/images/books/Fuzz_350.jpg">
I'm going to keep saying it - use a different image for every webpage, and if you are going to use an og:image tag for Facebook as well, then take the opportunity to make it different, make it special and specific to the social media platform. The users there will notice. Like a rockstar yelling out the name of a city before a concert.
  • <meta name="twitter:image:alt" content="Book cover for Fuzz: When Nature Breaks the Law. It has an arrow head style logo clearly intended to resemble the National Park Service logo with a mountain lion, a bear, and an elephant on it with pine trees behind them and birds above.">
I would love to feed you some story about how this content will help your website rank better. It will. But that is not why you should add it. You should do it because our world is tough enough for the blind, and if a blind person comes to your website, then help them out. Mary, the alt tags on your website are all one-word descriptions of the images. You can do better. Please do better.

There are other tags, but these are the ones that really matter. The rest can be found at https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/markup.

I almost forgot, but it is very useful: use the Twitter Card Validator to test your webpages.

Favicon

A favicon is a small icon that serves as branding for your website. Its main purpose is to help visitors locate your page when they have multiple tabs open. 

Creating a favicon is a small but important step in setting up a website. It adds legitimacy to your site and helps boost your online branding as well as trust from users. A favicon is an immediate visual marker for the website, which enables easy and quick identification by users.
  • <link rel="icon" href="/favicon.png" type="image/png">
    • This is one case where you should use the same image across your entire site.

Lighthouse

To go a bit further, I recommend that anyone reading this to improve their website also use Google Lighthouse to analyze it. Do it for both desktop and mobile. It'll give a high-level score for each fundamental area of your website and detailed recommendations to improve those scores.

Google Lighthouse report of maryroach.net. Performance, accessibility, and SEO are above 90 but best practices was 62.


Wednesday, January 26, 2022

Keep Track of Every Change

Sunset on Missouri river at Hermann, MO with court house on the left and bridge spanning the river on the right.

Every change has the potential to cause problems. Three common problems are:

  1. The change did not fix the problem for which it was intended.
  2. The change fixed the problem but caused others.
  3. At a future date, the change is noticed and nobody can figure out why it was made or by whom.
In all three cases the prevention is to track every change. Tracking entails recording:
  • The original request for change. This might be a customer's request for a new feature, a customer's complaint about a malfunction, a developer's detection of a problem, or a developer's desire to add a new feature.
  • The approval process used to approve the change (who, when, why, in what release).
  • The changes to all intermediate products (who, what, when).
  • Appropriate cross-references among the change request, change approval, and changes themselves.
Such an audit trail enables you to easily back out, redo, and/or understand changes.
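
If you track changes in your own tooling, the audit trail boils down to a handful of fields. Here is a hypothetical sketch in Java; the record and field names are mine, not from the reference:

  import java.time.LocalDate;
  import java.util.List;

  public class ChangeLog {
      // One entry in the audit trail described above.
      record ChangeRecord(
              String requestId,               // original request: feature, complaint, or defect
              String requestedBy,
              String approvedBy,              // approval: who
              LocalDate approvedOn,           // approval: when
              String targetRelease,           // approval: in what release
              List<String> artifactsChanged,  // intermediate products touched (who/what/when)
              List<String> crossReferences) { // links among request, approval, and changes
      }

      public static void main(String[] args) {
          ChangeRecord cr = new ChangeRecord(
                  "CR-1042", "customer support", "change control board",
                  LocalDate.of(2022, 1, 10), "v2.3",
                  List.of("requirements section 4.2", "Parser.java"),
                  List.of("CR-1042 -> commit abc123"));
          System.out.println(cr);
      }
  }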


Reference:
Bersoff, E., Henderson, V., and Siegel, S., Software Configuration Management, Englewood Cliffs, NJ: Prentice Hall, 1980.

Wednesday, December 22, 2021

Control Baselines

Software Configuration Management

It is the responsibility of software configuration management (SCM) to hold the agreed-upon specifications and regulate changes to them. You might not have a board in charge of SCM. I once worked as a technical product manager (TPM) and controlled the backlog for several teams. Regardless of the name, there is someone or a group that sets priorities for tasks yet to be completed.

While repairing or enhancing a software component, a software engineer will occasionally discover something else that can be changed, perhaps to fix a yet unreported bug or to add some quick new feature. This kind of uncontrolled (and untracked) change is intolerable. SCM should avoid incorporating such changes into the new baseline. The correct procedure is for the software engineer to make a change request (CR). This CR is then processed along with the others from development, marketing, testing, and the customers by a configuration control board, which prioritizes and schedules the change requests. Only then can the engineer be allowed to make the change and only then can the SCM accept the change.

Configuration control board is a bit of an old, formal name. I think most companies these days just use representatives of the stakeholders - product managers, engineering directors, and architects.

So, this practice is immensely frustrating to the go-getter developers who see problems and want to fix them. But it is immensely practical in the sense that one man's bug is another man's expected functionality, and changing things breaks expectations, which leads to upset customers.


Reference:

Bersoff, E., Henderson, V., and Siegel, S., Software Configuration Management, Englewood Cliffs, NJ: Prentice Hall, 1980.

Monday, August 23, 2021

Rotate People Through Product Assurance


Utah Skyline

In many organizations, people are moved into product assurance teams as a first assignment or after they have demonstrated poor performance at engineering software. Product assurance, however, requires the same level of engineering quality and discipline as designing and coding. As an alternative, rotate the best engineering talent through the product assurance team. A good guideline might be that every excellent engineer spends six months in the product assurance organization every two or three years. The expectation of all such engineers is that they will make significant improvements to product assurance during their "visit." Such a policy must clearly state that the job rotation is a reward for excellent performance.


Reference:

Mendis, K., "Personnel Requirements to Make Software Quality Assurance Work," in Handbook of Software Quality Assurance, New York: Van Nostrand Reinhold, 1987.

Monday, June 14, 2021

Establish Software Management Procedures Early

flower

Effective software configuration management is not just having a tool that records who made what change to the code or documentation. It is also the thoughtful creation of naming conventions, policies, and procedures to ensure that all relevant parties are involved in changes to the software. It means that:

  • Everyone knows how to report a problem.
  • Everyone knows how to request a new requirement.
  • All stakeholders are informed of suggested changes and their opinions are solicited.
  • Change requests are prioritized and scheduled.
  • Versioning is figured out.
These things used to be recorded in a document called the software configuration management plan (SCMP). It would be written early in a project and approved around the same time as the software requirements specification (SRS). These days it seems that no one writes these plans, instead expecting their tools to tell them what to do when they start a project, which results in some lopsided projects. For example, most teams I know report problems internally using JIRA. Some provide JIRA accounts to their users to report problems externally, but few combine the two, which results in a skewed view of the product: a developer view vs. a user view.

I could give more examples but that would be getting away from the point:
At the start of each software project, round up the stakeholders and have them agree on how each of the above bullet points will be handled. Make a special effort to ensure that there are answers for internal vs. external reporting and a way to blend the two views.


Reference:
Bersoff, E., Henderson, V., and Siegel, S., Software Configuration Management, Englewood Cliffs, NJ: Prentice Hall, 1980.

Sunday, May 16, 2021

Product Assurance Is Not a Luxury

Quality 100% Guarantee 

Product assurance includes:

  1. software configuration management
  2. software quality assurance
  3. verification and validation
  4. testing
Of the four, the one whose necessity is most often acknowledged, yet still under-budgeted, is testing. The other three are quite often dismissed as luxuries, as aspects of only large or expensive projects. The checks and balances these disciplines provide result in a significantly higher probability of producing a product that satisfies customer expectations and that is completed closer to schedule and cost goals. The key is to tailor the product assurance disciplines to the project in size, form, and content.


Reference:

Siegel, S., "Why We Need Checks and Balances to Assure Quality," Quality Time Column, IEEE Software, January 1992.

Saturday, April 24, 2021

Do a Project Postmortem

Pros vs Cons

At the end of every project, give all the key project members a three- or four-day assignment to analyze every problem that occurred during the project. For example, "We were ten days late starting integration testing. We should have told the customer." Or, "We started designing and coding before the requirements were complete." Or, "The team was already exhausted from working long hours when the CEO demotivated everyone by announcing there would be 'no raises.'"

In general, the idea is to document, analyze, and learn from all the things that went wrong. Also, record what you believe could be done differently in the future to prevent each problem. Future projects will be improved by these project postmortems. If possible, new or newly promoted employees should attend one or more project postmortems before becoming key members of a project.


Those who cannot remember the past are condemned to repeat it.

 - George Santayana, 1905

Sunday, March 14, 2021

The Thought That Disaster Is Impossible Often Leads To Disaster

Sinking of the Titanic

Gerald Weinberg's "Titanic Effect" principle is that believing that disaster is impossible often leads to disaster. You must never become so smug that you think everything is under control and will remain that way. Overconfidence is the primary cause of many disasters. It is the mountain climber who says, "It's just a small climb; I don't need a belay," or the hiker who says, "It's just a short hike; I don't need water," who gets into trouble. Reduce risk to your projects by analyzing all your potential disasters up front, developing contingency plans for them in advance, and continually reassessing new risks. This principle emphasizes the need to expect these risks to become real. Your biggest management disasters will occur when you think they won't.

Perhaps if Weinberg had been writing Quality Software Management in 2020, he would have called it the "Covid-19 Effect." No matter what it is called, you'll still face a very big problem: it takes costly time and effort to analyze the risks and make contingency plans. Most projects will be lucky and won't face disaster. Some will, but their contingency plans will work and make the problems seem smaller than they actually were. It won't take long before someone, probably your company's CFO, says, "Do we really have to do all this expensive analysis and planning?" This is the disaster that will sink many future projects if you don't plan for it in advance by documenting the problems that such planning avoided and estimating what the costs would have been had contingency plans not been in place. Furthermore, you must make disaster prevention and recovery a company culture value that everyone believes in and works to ensure succeeds.

Reference:

Weinberg, G., Quality Software Management, Vol. 1: Systems Thinking, New York: Dorset House, 1992.

Tuesday, February 02, 2021

Manage by Variance


Variance: the difference between our results and what was expected.

It is difficult, if not impossible, to manage a project without a detailed plan. Once you have a plan, update it as necessary. Now that you have an up-to-date plan, your responsibility is to manage the project according to that plan. As you report your progress, report only the discrepancies between the plan and the actuals. While the project is underway, a progress report should be, "Everything is as stated in the plan except ...." This way attention and resources can be applied to the problem areas.

Monday, January 11, 2021

Miraculous Productivity Increases

The technology industry is saturated with people who preach that this tool or that technique will reduce development cost. We all hear at business meetings and conferences about software managers claiming huge increases of 50 to 100 percent in productivity by applying tool x or language y or method z. Don't believe it! It is hype. Moderate increases in productivity of 3 to 5 percent are much more reasonable targets.

You should be happy with tools, languages, and methods that shave a few percentages off your cost or add a few percentages to your quality. However, cost reduction makes no sense without an understanding of its impact on customer and employee satisfaction. If you want to really move the needle then focus on your customers and employees.

Happy employees lead to happy customers, which reduces customer churn and improves business performance.
I've blatantly used AnalyticsInHR's image because it addresses my point so well. I don't know anything about their business, but it appears that they know some things about business performance.


References:

DeMarco, T. and Lister, T., Peopleware, New York: Dorset House, 1987.

Chappell, B., 4-Day Workweek Boosted Workers' Productivity By 40%, Microsoft Japan Says, NPR, Nov 4, 2019.

Monday, December 14, 2020

What Version of Java Compiled This

Duke, the Java mascot.
I try to use the latest stable version of Java/JDK when I can. There are a lot of performance improvements in newer versions of Java, and newer versions of OpenJDK also allow use of Java Flight Recorder (that's a big deal). So when I have to work with a new Java application, I often start with the newest version of Java and work my way down the list until everything works. This can be a painful experience. When I have less time, I'll look up what version of Java was used to compile the application and then use the same version to run it. Even if you do have free time, this is a good idea, because applications built for Java 8 and below will probably have problems running on Java 9 and beyond. A lot of core language classes changed.

You might be thinking, "Why is this?" That's a valid question. It comes down to poor testing: the people testing are using the same environment as the developers, so they don't see a problem. The class files provided by the JDK are available to the application at runtime. All other required classes should be bundled with the application, but if the issue isn't caught in testing, then the bundling doesn't happen.

Anyway, here's how to test this: if you have a JAR or WAR file, unzip it. On Linux, you can just run the unzip command against the file: unzip example.jar. On Windows, if you have a third-party archive application, you can point it at the file. If you are using Windows' built-in zip handling, then rename the file to end with .zip.

Once the files are extracted, navigate to a class file made by the provider of the application; something like com/yoakum/oneZero/security/. On Linux, run javap -verbose Example.class | grep "major"; on Windows, run javap -verbose Example.class | findstr "major". This will return a number, which you'll match to the following table:

 Java Major Version | Class Major Version
                 ...|...
                 15 | 59
                 14 | 58
                 13 | 57
                 12 | 56
                 11 | 55
                 10 | 54
                  9 | 53
                  8 | 52
                  7 | 51
                 ...|...

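If you'd rather not reach for javap, a few lines of Java can read the version bytes straight out of a class file. This is a minimal sketch based on the class-file header layout (magic number, then minor and major versions); the class name is my own:

  import java.io.DataInputStream;
  import java.io.FileInputStream;
  import java.io.IOException;

  public class ClassVersion {
      public static void main(String[] args) throws IOException {
          try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
              int magic = in.readInt();               // must be 0xCAFEBABE
              if (magic != 0xCAFEBABE) {
                  System.err.println("Not a class file: " + args[0]);
                  return;
              }
              int minor = in.readUnsignedShort();     // minor version
              int major = in.readUnsignedShort();     // major version (52 = Java 8, 55 = Java 11)
              System.out.println("major " + major + " -> Java " + (major - 44));
          }
      }
  }

Run it with java ClassVersion Example.class. The major version minus 44 gives the Java major version for anything from Java 1.2 onward.
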
This isn't a problem unique to Java; every programming language I've ever worked with has had this issue. But Java shouldn't have it as badly as it does, and it wouldn't if businesses put more effort into ensuring their applications were fully packaged, removing the risk of missing or conflicting versions of dependencies.

Friday, November 13, 2020

The Method Won't Save You


We have all heard the preaching of "method zealots" who say, "If you just adopt my method, most of your problems will disappear." Although many methods have been the subject of such ravings, the majority during the 1970s and early 1980s contained the word "structured" in their names. Those during the late 1980s through the early 2000s contained "object" in their names. And the methods from the mid-2000s to today contain the word "agile" in their names. Although each of these waves brings great insights, as well as quality-instilling software development constructs and steps, they are not panaceas. Organizations that are really good at developing quality software are good regardless of whether they use a structured, object-oriented, or agile methodology. Organizations with poor records will still have poor records after adopting the latest fad method.

As a manager, beware the false soothsayers who will promise great increases in either quality or productivity based on a new method. There is nothing wrong with adopting a new method, but if the organization has had productivity or quality issues in the past, try to uncover the source of that failure before you jump to a solution. It is highly unlikely that your method is to blame!


Reference:
Loy, P., "The Method Won't Save You (But It Can Help)," ACM Software Engineering Notes, January 1993.

Monday, October 26, 2020

Use An Appropriate Process Model

Two paths go through the woods. One suitable for a car and the other for a motorcycle.

Dozens of process models are available for software projects to utilize: agile, waterfall, SAFe, throwaway prototyping, incremental development, spiral model, etc. There is no such thing as a process model that works for all projects in an organization. Every project must select a process that makes the most sense for it. The selection should be based on corporate culture, risk willingness, application area, volatility of requirements, and the extent to which requirements are well understood.

Study your project's characteristics and select a process model that makes sense. For example, when building a prototype, you should follow a process that minimizes protocol, facilitates rapid development, and does not worry about checks and balances. When building a life-critical product, the opposite is true.

References

https://www.geeksforgeeks.org/software-processes-in-software-engineering/

Alexander, L., and Davis, A., "Criteria for the Selection of a Software Process Model," IEEE COMPSAC '91, Washington, DC: IEEE Computer Society Press.

Wednesday, September 30, 2020

Community Adoption of New Practices

Community discussion at Tech Conference

Community adoption of new practices can mean a lot of things, from getting a small team to try a new methodology or tool to convincing the world that a product or service is needed.

In 1996, Robert Metcalfe was awarded the IEEE Medal of Honor for "exemplary and sustained leadership in the development, standardization, and commercialization of Ethernet." The story of how Ethernet came to be follows the 8 essential practices below. I've added headings for which department typically handles each practice in a large company. You don't have to have different people or departments; I find it helpful to get into the mindset of each role when implementing a specific task.

  • Product Management
    • Sensing -- giving voice to a concern over a breakdown in the community
    • Envisioning -- design a compelling story about a future without the breakdown
  • Development
    • Offering -- committing to do the work to produce that feature
  • Sales
    • Adopting -- gaining commitments from early adopters to join the innovation for a trial period
    • Sustaining -- gaining commitments from majority adopters to join the innovation for an indefinite period
  • Implementation
    • Embodying -- working with the community until the new practice is fully embodied, ordinary, and transparent
    • Navigating -- moving ever closer to the goal despite surprises, contingencies, and obstacles
  • Marketing
    • Mobilizing -- building a network of supporters of the innovation from within dispersed communities
Many times over my career I've been assigned the job of getting multiple software development teams to adopt a practice: coding standards, secure coding practices, SOX requirements, architectural guidelines, etc. Eventually, the above 8 practices were performed, but not always in an order that was useful. I got better with experience, but my life would have been easier if I had had those practices written down to guide me, to make notes against, and to revisit when progress was slow or stopped. I hope you'll save the above list and use it for your future innovations.

References

Denning, P., "Avalanches Make Us All Innovators," Communications of the ACM, Vol. 63, No. 9 (Sept 2020), 32-34.

Sorry, I don't have a reference for Metcalfe's Ethernet story. I heard him talk about it at a conference in 2015 or 2016.

Monday, September 07, 2020

Understand Risks Up Front

Army Risk Assessment Matrix

On any software project it is impossible to predict exactly what will go wrong. However, something will go wrong. In the early stages of planning, delineate the largest risks associated with your project. For each, quantify the extent of the damage if the risk potential becomes a project reality and also quantify the likelihood that this will come to pass. The product of these two numbers is your risk exposure with respect to that particular risk.

At project inception, construct a decision tree that delineates all the things you could do to lower the exposure. Then either act on the results immediately, or develop plans to implement various actions at points when the exposure exceeds your acceptable limits. (Of course, specify in advance how you will recognize this situation so that you can implement the corrective action before it is too late.)
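
Risk exposure is just arithmetic, but writing it down keeps the conversation honest. Here is a minimal sketch in Java; the risks, probabilities, costs, and the acceptable limit are all invented for illustration:

  public class RiskExposure {
      record Risk(String name, double probability, double damageCost) {
          double exposure() { return probability * damageCost; } // likelihood x damage
      }

      public static void main(String[] args) {
          double acceptableLimit = 25_000.0; // assumed per-risk threshold, set per project
          Risk[] risks = {
              new Risk("key developer leaves", 0.20, 120_000),
              new Risk("requirements churn",   0.50,  60_000),
              new Risk("vendor API delayed",   0.10, 200_000),
          };
          for (Risk r : risks) {
              String flag = r.exposure() > acceptableLimit ? "  <-- exceeds limit, act now" : "";
              System.out.printf("%-22s exposure $%,.0f%s%n", r.name(), r.exposure(), flag);
          }
      }
  }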


Reference:

Charette, R., Software Engineering Risk Analysis and Management, New York: McGraw-Hill, 1989.