Thursday, April 26, 2018

High-Quality Software is Possible

It is possible to build high-quality software. (Posted by Jerry Yoakum)

"Large-scale software systems can be built with very high quality, but for a steep price tag."

Developers: to achieve high quality, follow proven techniques, such as the inspections, prototyping, and requirements prioritization discussed in the posts below.
Customers (Product Managers): demand excellence, but be aware of the high costs involved.

Great Designs Come From Great Designers

Invest in your best designers to get the best future designs. (Posted by Jerry Yoakum)

The difference between a poor design and a good design may be the result of a sound design method, superior training, better education, or other factors. However, a really great design is the brainchild of a really great designer. Great designs are clean, simple, elegant, fast, maintainable, and easy to implement. They are the result of inspiration and insight, not just hard work or following a step-by-step design method. Invest heavily in your best designers. They are your future.


The Design of Everyday Things

Understand the Customer's Priorities

Determine the real requirements by understanding the customer's priorities. (Posted by Jerry Yoakum)

It is quite possible that the customers would rather have 90% of the system's functionality late if they could just have 10% of it on time. This corollary of the principle Communicate with Customers / Users may be shocking, but it could very well be the case. Find out!

If you are communicating with your customers, you should be sure you know their priorities. These can easily be recorded in the requirements specification (see Prioritize Requirements), but the real challenge is to understand the customers' interpretation of "essential," "desirable," and "optional." Will they really be happy with a system that satisfies none of the desirable and optional requirements?

Prioritize Requirements

Weigh each requirement and prioritize appropriately. (Posted by Jerry Yoakum)

Not all requirements are equal.

One way to prioritize requirements is to suffix every requirement in the specification with an M, D, or O to connote mandatory, desirable, or optional. You may find it helpful to further rate the importance of the requirements on a scale from 0 to 9. For example, while an M1 task is mandatory, it is not as high a priority as an M9 task.
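The scheme above can be sketched in a few lines. The requirement texts and tags here are invented placeholders; only the encoding (category letter plus a 0-9 weight) follows the convention just described.

```python
# A minimal sketch of the M/D/O + 0-9 priority scheme described above.
# The requirement texts are invented placeholders.

CATEGORY_RANK = {"M": 2, "D": 1, "O": 0}  # mandatory > desirable > optional

def priority_key(tag: str) -> tuple[int, int]:
    """Turn a tag like 'M9' or 'D3' into a sortable (category, weight) pair."""
    category, weight = tag[0], int(tag[1:])
    return (CATEGORY_RANK[category], weight)

requirements = [
    ("O2", "Configurable color theme"),
    ("M1", "Export report as CSV"),
    ("M9", "Authenticate every API request"),
    ("D5", "Remember last-used filters"),
]

# Highest-priority requirements first: M9 before M1, M1 before D5, and so on.
requirements.sort(key=lambda r: priority_key(r[0]), reverse=True)
for tag, text in requirements:
    print(tag, text)
```

Sorting on the (category, weight) pair keeps every mandatory requirement ahead of every desirable one, which is the point of the two-level scheme.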

Inspect Code

Serious inspection of your software will yield serious results. (Posted by Jerry Yoakum)

Inspection of software detailed design and code was first proposed by Michael Fagan in his paper "Design and Code Inspections to Reduce Errors in Program Development." Inspection can account for as much as 82% of all errors found in software, and it is much better than testing at finding errors. Define criteria for completing an inspection, and keep track of the types of errors found through inspection. Fagan's inspections consume approximately 15% of development resources, with a net reduction in total development cost of 25 to 30%.

Your original project schedule should account for the time to inspect and correct every component. You might think your project cannot tolerate such "luxuries," but inspection is not a luxury: data has shown it can reduce testing time by 50 to 90%. If that's not incentive, I don't know what is. By the way, there is a wealth of supporting data and tips on how to do inspections well in Key Lessons in Achieving Widespread Inspection Use.
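As one way to act on "keep track of the types of errors found," here is a minimal sketch. The error categories and findings are invented for illustration; real teams would pull these records from their inspection logs.

```python
from collections import Counter

# A sketch of tallying error types across inspections, as suggested above.
# The categories and findings are invented placeholders.
inspection_findings = [
    ("logic", "off-by-one in pagination loop"),
    ("interface", "wrong argument order in billing call"),
    ("logic", "null check missing before dereference"),
    ("documentation", "stale comment on retry policy"),
    ("logic", "inverted condition in access check"),
]

by_type = Counter(kind for kind, _ in inspection_findings)

# Over time, the dominant categories tell you where to focus prevention effort.
for kind, count in by_type.most_common():
    print(f"{kind}: {count}")
```

Even a tally this crude makes trends visible: if "logic" dominates release after release, that is where design reviews and training should go.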

Keep It Simple

A simple design helps avoid deficiencies. (Posted by Jerry Yoakum)

A simple architecture or a simple algorithm goes a long way toward achieving high maintainability. Remember KISS. Also, as you decompose software into subcomponents, remember that people have difficulty comprehending more than seven (plus or minus two) things at once. C.A.R. Hoare has said:
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.

Build Throwaway Prototypes Quickly

Throwaway prototypes are all about getting feedback fast. (Posted by Jerry Yoakum)

When building a throwaway prototype, build it as quickly as possible. Don't worry about quality, design, documentation, programming language, or maintainability. Just make it functional and get it into the hands of your customer fast so you can gather feedback as soon as possible.

Build the Right Features into a Prototype

A prototype needs only the most important features. (Posted by Jerry Yoakum)
"When constructing a throwaway prototype, build only features that are poorly understood."
You can think of it like this: you have started development on a big project and thought you understood all the features. Unfortunately, when you start on feature X, you realize that you need more feedback from the customer, and you need to provide them a prototype to frame the problem.
  1. Make a branch from Master.
  2. Build feature X in the new branch; we'll call it branch X.
  3. Share it with your customer and gather feedback.
  4. If the customer is happy with what you built, merge to Master and stop following this list.
    More likely, you will continue along this list.
  5. Make another branch from Master; we'll call it branch XY.
  6. Build feature X in branch XY using the feedback you gathered, pulling in code from branch X where possible.
  7. Make sure the customer is happy, then merge to Master.
  8. Finally, throw away the branch X prototype.
By the way, until you are finished with the project, the code in your Master branch is really an evolutionary prototype. You want to build the features that are best understood and merge them to Master once you have customer approval. The thing to note is to never merge a feature to Master before it is well understood and approved. Anyone could make a branch from that bad code, and you risk it being merged back into Master after you correct the mistake.
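The numbered steps above can be sketched as git command sequences. The branch names and the helper functions themselves are illustrative, not a real tool; adapt the commands to your repository's conventions (for example, "main" instead of "Master").

```python
# A sketch of the branching steps above as git command sequences.
# Branch names are illustrative; comment lines mark work done by hand.

def prototype_commands(feature: str) -> list[str]:
    """Steps 1-3: branch from Master, build the prototype, share it."""
    return [
        "git checkout Master",
        f"git checkout -b {feature}",       # step 1: branch for the prototype
        f"# build feature {feature} here",  # step 2: quick and dirty is fine
        f"git push origin {feature}",       # step 3: share it for feedback
    ]

def rework_commands(feature: str) -> list[str]:
    """Steps 5-8: rebuild with feedback in a fresh branch, discard the prototype."""
    reworked = f"{feature}-reworked"
    return [
        "git checkout Master",
        f"git checkout -b {reworked}",      # step 5: fresh branch from Master
        f"# rebuild {feature} using the feedback; pull from {feature} where it helps",  # step 6
        "git checkout Master",
        f"git merge {reworked}",            # step 7: only after customer approval
        f"git branch -D {feature}",         # step 8: throw the prototype away
    ]

print(prototype_commands("feature-x"))
print(rework_commands("feature-x"))
```

The key property the sequence preserves is the one the post insists on: the throwaway branch is never merged, so its quick-and-dirty code can never leak back into Master.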

Build the Right Kind of Prototype

Will it be a throwaway or an evolutionary prototype? (Posted by Jerry Yoakum)

There are two types of prototype:
  1. Throwaway
    • Quick and dirty. Given to the customer for feedback, then thrown away once the information has been gathered.
    • Used when critical features are poorly understood.
  2. Evolutionary
    • Built with quality. Given to the customer for feedback, then modified once the information has been gathered. This process repeats until the product is complete.
    • Used when critical features are well understood but how to combine or present them needs customer feedback. Also used for feedback on minor features.
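The distinction can be restated as a toy decision helper. The single boolean input is a deliberate simplification; in practice, "well understood" is judged feature by feature.

```python
# A toy restatement of the two prototype types above. The boolean input
# is a simplification of a judgment made feature by feature.

def prototype_kind(critical_features_understood: bool) -> str:
    """Choose a prototype style per the rule of thumb above."""
    if not critical_features_understood:
        # Poorly understood critical features: build quick, gather feedback, discard.
        return "throwaway"
    # Features understood, but their combination or presentation needs feedback:
    # build with quality and evolve it toward the product.
    return "evolutionary"

print(prototype_kind(False))  # → throwaway
print(prototype_kind(True))   # → evolutionary
```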

Communicate with Customers / Users

It is important to communicate with your customers and users. (Posted by Jerry Yoakum)
"Never lose sight of why software is being developed: to satisfy real needs, to solve real problems. The only way to solve real needs is to communicate with those who have the needs. The customer or user is the most important person involved with your project."
It may feel easier to develop in the sweet silence of a vacuum, but will the finished software be something the customer likes or even finds useful? If your customer or product manager is not easily accessible, designate some people on your team to be advocates for your customer. Ask them to imagine being the customer and give their feedback. If possible, have them use the software as if they were the customer and document the good and bad points.

Productivity and Quality are Inseparable

Productivity & Quality are best friends forever. (Posted by Jerry Yoakum)

Productivity and quality have a clear relationship in software development.
  • Demand for increased productivity will decrease quality (i.e. increase the number of bugs).
  • Demand for increased quality (i.e. fewer bugs) will decrease productivity.
This is not a bad thing. Accept it and plan for it. Do not agree to deadlines that are unreasonable and will result in poor quality.

Quality is in the Eyes of the Beholder

Quality is not the same for everyone. (Posted by Jerry Yoakum)

Realize that quality is not the same for all parties. A developer might think it means high-performance code or an elegant design. A user might think it means a lot of features. A manager might think it means low development cost. These three examples could be described as speed, features, and cost; optimizing one may come at the expense of another. Because of this, a project must decide on its priorities and articulate them to all parties.

Tuesday, April 17, 2018

Build Flexibility Into Software

Building software with flexibility

A software component exhibits flexibility if it can be easily modified to perform its function (or a similar function) in a different situation. Flexible software components are more difficult to design than less flexible components. However, such components are more run-time efficient than general components and are more easily reused than less flexible components in diverse applications.

Monday, April 16, 2018

Build Generality Into Software

Words for "generality". (Posted by Jerry Yoakum)


A software component exhibits generality if it can perform its intended functions, without any change, in a variety of situations. General software components are more difficult to design than less general components, and they usually run more slowly. However, such components:
  1. Are ideal in complex systems where a similar function must be performed in a variety of places.
  2. Are potentially more reusable in other systems with no modification.
  3. Reduce maintenance costs for an organization by reducing the number of unique or similar components. Think about the hassle of maintaining multiple different repositories and build plans.
When decomposing a system into its subcomponents, stay cognizant of the potential for generality. Obviously, when a similar function is needed in multiple places, construct one general function rather than multiple similar functions. And when constructing a function needed in just one place, build in generality where it makes sense for future enhancements.
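A tiny illustration of the trade-off, assuming an invented CSV/TSV scenario: one general component replaces two near-duplicates.

```python
# A small illustration of the point above: one general component replacing
# several near-duplicates. The CSV/TSV scenario is an invented example.

def join_fields(fields: list[str], delimiter: str = ",") -> str:
    """General: works for CSV, TSV, or any delimiter, unchanged."""
    return delimiter.join(fields)

# Instead of maintaining separate to_csv_row() and to_tsv_row() functions:
print(join_fields(["id", "name", "email"]))        # CSV use
print(join_fields(["id", "name", "email"], "\t"))  # TSV use, same component
```

The general version costs a parameter and a moment's thought up front; in exchange there is one function to test, document, and fix instead of two.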

Friday, April 06, 2018

Transition from Requirements to Design Is Not Easy

"Life is not easy for any of us. But what of that? We must have perseverance." -Marie Curie (Posted by Jerry Yoakum)


Requirements engineering culminates in a requirements specification, a detailed description of the external behavior of a system. The first step of design synthesizes an optimal software architecture. There is no reason why the transition from requirements to design should be any easier in software engineering than in any other engineering discipline. Design is hard. Converting from an external view to an internal optimal design is fundamentally a difficult problem.

Some methods claim the transition is easy, suggesting that we use the "architecture" of the requirements specification as the design architecture. Since design is difficult, there are three possibilities:
  1. No thought went into selecting an optimal design during requirements analysis. In this case, you cannot afford to accept the design implied by the requirements specification as the design.
  2. Alternative designs were enumerated and analyzed, and the best was selected, all during requirements analysis. Organizations cannot afford the effort to do a thorough design (typically 30 to 40 percent of total development costs) prior to baselining requirements, making a make/buy decision, and making a development cost estimate.
  3. The method assumes that some single architecture is optimal for all applications. This is clearly not possible.

Thursday, April 05, 2018

Trace Design to Requirements

What do we trace for requirements traceability?   (Posted by Jerry Yoakum)

When designing software, the designer must know which requirements are being satisfied by each component. When selecting a software architecture, it is important that all requirements are "covered." After deployment, when a failure is detected, maintainers need to quickly isolate the software components most likely to contain the cause of the failure. During maintenance, when a software component is repaired, maintainers need to know what other requirements might be adversely affected.

All these needs can be satisfied by the creation of a table with rows corresponding to all completed software components and columns corresponding to every released requirement in the software requirements specification (SRS). A check in any position indicates that this design component helps to satisfy this requirement. Notice that a row void of checks indicates that a component has no purpose and a column void of checks indicates an unfulfilled requirement. Some people argue that this table is very difficult to maintain. I would argue that you need this table to design or maintain software. Without the table, you are likely to design a software component incorrectly, spending exorbitant amounts of time during maintenance. The successful creation of such a table depends on your ability to refer uniquely to every requirement.
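The table and its two "void" checks can be sketched directly. The component and requirement names are invented; a real table would be driven by the SRS.

```python
# A minimal sketch of the traceability table described above, with component
# and requirement names invented for illustration.

components = ["auth_module", "report_builder", "email_sender"]
requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

# A check at (component, requirement) means the component helps satisfy it.
trace = {
    ("auth_module", "REQ-1"): True,
    ("report_builder", "REQ-2"): True,
    ("report_builder", "REQ-3"): True,
}

# A column void of checks is an unfulfilled requirement.
unfulfilled = [r for r in requirements
               if not any(trace.get((c, r)) for c in components)]
# A row void of checks is a component with no purpose.
purposeless = [c for c in components
               if not any(trace.get((c, r)) for r in requirements)]

print("Unfulfilled requirements:", unfulfilled)    # → ['REQ-4']
print("Components with no purpose:", purposeless)  # → ['email_sender']
```

Note that the two audits fall out of the table for free; the only prerequisite, as the post says, is being able to refer uniquely to every requirement.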

----

STOP. Do not dismiss the above because it doesn't sound like an agile practice. There is nothing to stop you from creating, maintaining, and using such a table within the framework of scrum. This is really about design and documentation. Being able to document where the work for specific requirements will be, and was, done drives development toward modular design (in its many forms).

I have worked with development teams that track this... kinda. The specification for a project is stored in JIRA, with each issue representing a requirement. When an issue is marked resolved, the issue is linked to the commit history, code review, and test documentation. This lacks a high-level view, but a sufficiently large table would suffer from the same difficulty. Anyway, it is immensely useful to be able to query JIRA for issues related to a specific feature and have a subset of commits to look at first.

Evaluate Alternatives

Cat choices  (Posted by Jerry Yoakum)


A critical aspect of all engineering disciplines is the elaboration of multiple approaches, trade-off analyses among them, and the eventual adoption of one. After requirements are agreed upon, you must examine a variety of architectures and algorithms. You certainly do not want to use an architecture simply because it was used in the requirements specification. After all, that architecture was selected to optimize understandability of the system's external behavior. The architecture you want is the one that optimizes conformance with requirements.

For example, architectures are generally selected to optimize throughput, response time, modifiability, portability, interoperability, safety, or availability, while also satisfying the functional requirements. The best way to do this is to enumerate a variety of software architectures, analyze (or simulate) each with respect to the goals, and select the best alternative. Some design methods result in specific architectures; so one way to generate a variety of architectures is to use a variety of methods.
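One simple way to run such a trade-off analysis is a weighted scoring matrix. The goals, weights, candidate architectures, and scores below are all invented for illustration; the point is the shape of the analysis, not the numbers.

```python
# A toy trade-off analysis like the one described above: score each candidate
# architecture against weighted goals and pick the best. All names, weights,
# and scores are invented for illustration.

goals = {"throughput": 0.5, "modifiability": 0.3, "portability": 0.2}

candidates = {
    "layered":      {"throughput": 6, "modifiability": 9, "portability": 8},
    "event-driven": {"throughput": 9, "modifiability": 6, "portability": 6},
    "pipeline":     {"throughput": 7, "modifiability": 7, "portability": 7},
}

def weighted_score(scores: dict[str, float]) -> float:
    """Sum each goal score weighted by that goal's importance."""
    return sum(goals[g] * scores[g] for g in goals)

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
print(best, round(weighted_score(candidates[best]), 2))  # → event-driven 7.5
```

The weights are where the requirements priorities enter; shift them and a different architecture wins, which is exactly why the analysis should happen explicitly rather than by default.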

Wednesday, April 04, 2018

Performance Analysis: The USE Method

For every resource, check utilization, saturation, and errors.  (Posted by Jerry Yoakum)


Blatant rip off of http://dtrace.org/blogs/brendan/2012/02/29/the-use-method/ with a small amount of simplification.


The USE method can be summarized as: for every resource, check utilization, saturation, and errors. While the USE method was first introduced to me as a way of examining hardware, some software resources can be examined with this methodology as well.
Utilization is the percentage of time that the resource is busy working during a specific time interval. While busy, the resource may still be able to accept more work; the degree to which it cannot do so is identified by saturation. That extra work is usually waiting in a queue.

Saturation happens when a resource is fully utilized and extra work is queued. When a resource stays saturated, errors are likely to occur.

Errors in terms of the USE method refer to the count of error events. Errors can degrade performance and might not be immediately noticed when the failure mode is recoverable. This includes operations that fail and are retried, as well as resources that fail in a pool of redundant resources.
The key metrics of the USE method are usually expressed as:
  • Utilization as a percentage over a time interval.
  • Saturation as a wait queue length.
  • Errors as the number of errors reported.
It is also important to express the time interval for the measurement. A short burst of high utilization can cause saturation and performance issues, even though the overall utilization is low over a longer interval.
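Following the conventions above, a measurement record might look like this sketch. The resource name and sample numbers are invented; the fields mirror the three USE metrics plus the interval they cover.

```python
from dataclasses import dataclass

# A minimal shape for USE-method measurements, following the conventions
# above. The resource name and sample values are invented.

@dataclass
class UseSample:
    resource: str
    interval_s: float       # measurement interval, in seconds
    utilization_pct: float  # % of the interval the resource was busy
    saturation: float       # e.g. average wait-queue length over the interval
    errors: int             # count of error events during the interval

sample = UseSample("disk0", interval_s=60.0,
                   utilization_pct=85.0, saturation=3.2, errors=0)
print(sample.resource, sample.utilization_pct, sample.saturation)
```

Carrying `interval_s` in every sample matters for the point made above: a 60-second average can look healthy while hiding one-second bursts of 100% utilization.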

The first step in the USE method is to create a list of resources. Try to be as complete as possible. Here is a generic list of hardware resources:
  • CPUs - Sockets, cores, hardware threads (virtual CPUs).
  • Main memory - RAM.
  • Network interfaces - Ethernet ports.
  • Storage devices - Disks.
  • Controllers - Storage, network.
  • Interconnects - CPU, memory, I/O.
If focusing on software, start by breaking your system down into services, then methods, then low-level resources, for example:
  • Mutex locks - Utilization may be defined as the time the lock was held, saturation by those threads queued waiting on the lock.
  • Thread pools - Utilization may be defined as the time threads were busy processing work, saturation by the number of requests waiting to be serviced by the thread pool.
  • Process/thread capacity - The current thread/process usage vs the maximum thread/process limit of a system may be defined as utilization; waiting on allocation may indicate saturation; and errors occur when the allocation fails.
  • File descriptor capacity - Same as above but for file descriptors.
Drawing a function block diagram for the system will be very helpful when looking for bottlenecks in the flow of the data. While determining utilization for the various components, annotate each one on the functional diagram with its maximum bandwidth. The resulting diagram may pinpoint systemic bottlenecks before measurement has been taken. (This is a useful exercise during product design, while you have time to change specifications.)

Here are some general suggestions for interpreting metric types:
  • Utilization - 100% utilization is usually a sign of a bottleneck (check saturation and its effect to confirm). High utilization (e.g. over 60%) can begin to be a problem. When utilization is measured over a relatively long time period, an average utilization of 60% can hide short bursts of 100% utilization.
  • Saturation - Any amount of saturation can be a problem. This may be measured as the length of a wait queue or time spent waiting on the queue.
  • Errors - Non-zero error counters are worth investigating, especially if they are still increasing while performance is poor.
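The interpretation guidance above can be sketched as a small triage function. The 60% utilization caution comes from the text; the exact messages and the ordering of checks are illustrative.

```python
# A sketch of the interpretation guidance above as a triage function.
# The 60% caution comes from the text; messages and ordering are illustrative.

def triage(utilization_pct: float, saturation: float, errors: int) -> list[str]:
    """Flag suspicious USE metrics for one resource over one interval."""
    findings = []
    if errors > 0:
        findings.append("errors: non-zero, investigate")
    if saturation > 0:
        findings.append("saturation: work is queueing, likely a problem")
    if utilization_pct >= 100:
        findings.append("utilization: 100%, likely bottleneck")
    elif utilization_pct > 60:
        findings.append("utilization: high, watch for hidden bursts")
    return findings

print(triage(utilization_pct=85.0, saturation=3.2, errors=0))
```

Running each resource in your list through a check like this turns the USE method into a mechanical sweep: anything that returns a non-empty list earns a closer look.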

Design Without Documentation Is Not Design

Cart before the horse.  (Posted by Jerry Yoakum)

Sometimes you'll hear a software engineer say, "I have finished the design. All that's left is its documentation." This makes no sense. Can you imagine a building architect saying, "I have completed the design of your new home. All that's left is to draw a picture of it," or a novelist saying, "I have completed the novel. All that's left is to write it"? Design is the selection, abstraction, and recording of an appropriate architecture and algorithm onto paper or other medium.