Sunday, October 06, 2019
Spaceman
Spaceman: An Astronaut's Unlikely Journey to Unlock the Secrets of the Universe by Mike Massimino
My rating: 5 of 5 stars
Wow! Who hasn't dreamed of spaceflight?! I loved the humanity that Massimino brings to the subject. The story is told from his perspective and covers subjects that many people either relate to or want to know more about - childhood dreams, college, work, NASA, flight, spaceflight, parenthood, and so on. It is wonderful to see those things through Massimino's eyes. Space exploration is awesome, and it needs people like Massimino to share the story and bring us all along.
View all my reviews
Wednesday, October 02, 2019
Starman Jones
Starman Jones by Robert A. Heinlein
My rating: 5 of 5 stars
I loved the concern that Max had for his library book. That really hooked me into the book. After that it was a fast, fun ride.
View all my reviews
Monday, September 30, 2019
SuperFreakonomics
SuperFreakonomics: Global Cooling, Patriotic Prostitutes And Why Suicide Bombers Should Buy Life Insurance by Steven D. Levitt
My rating: 5 of 5 stars
Second time reading and still worth the read. It can be a little disheartening to know that some (many, maybe most) of the things discussed didn't make it into the mainstream in the past 10 years. That is a lesson in itself, and the concept is supported by a few of the stories in SuperFreakonomics.
View all my reviews
Friday, September 27, 2019
Achieve Effective Test Coverage
In spite of the fact that testing cannot prove correctness, it is still important to do a thorough job of testing. Metrics exist to determine how thoroughly the code was exercised during test plan generation or test execution. These metrics are easy to use, and tools exist to monitor the coverage level of tests. Some examples include:
- Statement coverage, which measures the percentage of statements that have been executed at least once.
- Branch coverage, which measures the percentage of branches in a program that have been executed.
- Path coverage, which measures how well the possible paths have been exercised.
Just remember: although "effective" coverage is better than no coverage at all, do not fool yourself into thinking that the program is "correct" by any definition (see Testing Exposes Presence of Flaws).
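To make the distinction between these metrics concrete, here is a small hypothetical Python sketch (the function and test names are invented for the example): a single test reaches 100% statement coverage of a tiny function while leaving one branch untested.

```python
# Hypothetical example: one test executes every statement in clamp(),
# yet the False arc of the "if" is never taken, so branch coverage < 100%.

def clamp(value, limit):
    if value > limit:      # branch point with True and False arcs
        value = limit      # only executed when value > limit
    return value

def test_clamp_over_limit():
    assert clamp(150, 100) == 100   # this one test gives 100% statement coverage
    # Adding "assert clamp(50, 100) == 50" would exercise the missing branch.
```

With a coverage tool such as coverage.py, running the tests with branch measurement enabled (for example, coverage run --branch -m pytest, then coverage report) reports both statement and branch figures.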
Reference:
Weiser, M., Gannon, J., and McMullin, P., "Comparison of Structural Test Coverage Metrics," IEEE Software, March 1985.
Labels:
software-engineering,
testing
Location:
Springfield, MO, USA
Thursday, September 26, 2019
Use Effective Test Completion Measures
Many projects proclaim the end of testing when they run out of time. This may make political sense, but it is irresponsible. During test planning, define a measure that can be used to determine when testing should be completed. If you have not met your goal when time runs out, you can still make the choice of whether to ship the product or slip the milestone, but at least you know whether you are shipping a quality product.
Two effective measures of test progress are:
- Rate of new error detections per week.
- After covertly seeding the software with known bugs (a practice called bebugging), the percentage of those seeded bugs found so far.
An ineffective measure of test progress is the percentage of test cases correctly passed (unless, of course, you know that the test cases superbly cover the requirements).
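As a rough sketch of the seeding idea, the estimate below assumes that seeded and real bugs are equally likely to be found (a capture-recapture style ratio); the function name and the numbers are hypothetical.

```python
def estimate_remaining_defects(seeded_total, seeded_found, real_found):
    """Estimate real defects still latent, from seeded-bug ("bebugging") recapture.

    Assumes seeded and real bugs are equally detectable, so
    seeded_found / seeded_total ~= real_found / real_total.
    """
    if seeded_found == 0:
        raise ValueError("No seeded bugs found yet; the estimate is undefined.")
    estimated_real_total = real_found * seeded_total / seeded_found
    return max(0.0, estimated_real_total - real_found)

# Example: 20 bugs seeded, 15 recovered, 45 real bugs found so far
# => about 60 real bugs in total, so roughly 15 still unfound.
print(estimate_remaining_defects(seeded_total=20, seeded_found=15, real_found=45))
```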
Reference:
Dunn, R., Software Defect Removal, New York: McGraw-Hill, 1984.
Labels:
software-engineering,
testing
Location:
Springfield, MO, USA
Wednesday, September 25, 2019
The Dictator's Handbook
The Dictator's Handbook: Why Bad Behavior is Almost Always Good Politics by Bruce Bueno de Mesquita
My rating: 5 of 5 stars
I was expecting stories about evil dictators, not a practical guide to politics. The real-world feel of this book made me continuously think of the book Physics for Future Presidents.
View all my reviews
Monday, September 23, 2019
Use McCabe Complexity Measure
Although many metrics are available to report the inherent complexity of software, none is as intuitive and as easy to use as Tom McCabe's cyclomatic number measure of testing complexity. While it is not absolutely foolproof, it results in fairly consistent predictions of testing difficulty. Simply draw a graph of your program, in which nodes correspond to sequences of instructions and arcs correspond to non-sequential flow of control. McCabe's metric is simply e - n + 2p, where e is the number of arcs, n is the number of nodes, and p is the number of independent graphs you are examining. This is so simple that there is really no excuse not to use it.
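A minimal sketch of the calculation, using a hand-built control-flow graph (the node names and the example graph are invented for illustration):

```python
def cyclomatic_complexity(edges, num_components=1):
    """McCabe's v(G) = e - n + 2p for a graph given as (src, dst) arc pairs."""
    nodes = {node for arc in edges for node in arc}
    e, n, p = len(edges), len(nodes), num_components
    return e - n + 2 * p

# Control-flow graph of a function containing one if/else:
#   entry -> then -> exit,  entry -> else -> exit
edges = [("entry", "then"), ("entry", "else"), ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges))  # 4 arcs - 4 nodes + 2*1 = 2 independent paths
```

Static-analysis tools can compute the same number directly from source; the graph form above just makes the e - n + 2p definition explicit.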
Use McCabe on each module to help assess unit testing complexity. Also, use it at the integration testing level where each procedure is a node and each invocation path is an arc to help assess integration testing complexity.
Reference:
McCabe, T., "A Complexity Measure," IEEE Transactions on Software Engineering, Dec 1976.
Labels:
software-engineering,
testing
Location:
Springfield, MO, USA
Friday, September 20, 2019
The Big Bang Theory Does Not Apply
As a project nears its delivery deadline and the software is not ready, desperation often takes over. Suppose the schedule called for two months of unit testing, two months of integration testing, and two months of software system testing. It is now one month from the scheduled delivery, and only 50% of the components have been unit tested. A back-of-the-envelope calculation shows that roughly five months of testing work remain with only one month left on the schedule. You have two choices:
- Admit the slip to your customer: ask for a postponement.
- Put all the components together (including the 50% not yet unit tested) and hope for the best.
In the first case, you are admitting defeat, perhaps prematurely. In the eyes of your managers, you might be giving up before you've done everything in your power to overcome the problem. In the second case, there might be a chance that, when you put it all together, it will work and you'll be back on schedule. Project managers often succumb to the latter because it looks like they are trying everything before admitting defeat. Unfortunately, this will probably add six more months to your schedule since you'll be trying to retrofit quality. You cannot save time by omitting unit and integration testing.
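For concreteness, a quick sketch of the back-of-the-envelope arithmetic from the scenario above (all numbers are the hypothetical ones from the example, not real project data):

```python
planned = {"unit": 2, "integration": 2, "system": 2}              # months budgeted per phase
completed = {"unit": 0.5 * planned["unit"], "integration": 0, "system": 0}
work_remaining = sum(planned.values()) - sum(completed.values())  # 5 months of testing left
months_left_on_schedule = 1
print(f"Testing remaining: {work_remaining} months; schedule remaining: {months_left_on_schedule} month")
print(f"Projected slip: {work_remaining - months_left_on_schedule} months")
```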
Reference:
Weinberg, G., Quality Software Management, Volume 1: Systems Thinking, New York: Dorset House, 1992.
Labels:
software-engineering,
testing
Location:
Springfield, MO, USA
Thursday, September 19, 2019
Always Stress Test
Software often behaves just fine when confronted with "normal" loads of inputs or stimuli. The true test of software is whether it can stay operational when faced with severe loads. These severe loads are often stated in the requirements as "maximum of x simultaneous widgets" or "maximum of x new widget arrivals per hour."
If the requirements state that the software shall handle up to x widgets per hour, you must verify that the software can do this. In fact, not only should you test that it handles x widgets, you should also subject the software to x+1 or x+2 (or more) widgets to see what happens and determine at what point its behavior stops being "acceptable." After all, the system may not be able to control its environment, and you do not want the software to crash when the environment "misbehaves" in an unexpected manner.
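A minimal sketch of the idea, assuming a hypothetical handle_widgets() entry point and a MAX_WIDGETS_PER_HOUR requirement (both names are made up for the example):

```python
MAX_WIDGETS_PER_HOUR = 1000   # the "x" from the requirement

def handle_widgets(count):
    """Stand-in for the system under test; returns True when the load was handled."""
    return count <= MAX_WIDGETS_PER_HOUR   # placeholder behavior

def test_at_and_beyond_the_stated_limit():
    # At the limit, the requirement must hold outright.
    assert handle_widgets(MAX_WIDGETS_PER_HOUR)
    # Beyond the limit we only demand graceful degradation: the system may
    # refuse work, but it must not crash or corrupt state.
    for overload in (MAX_WIDGETS_PER_HOUR + 1, MAX_WIDGETS_PER_HOUR + 2, MAX_WIDGETS_PER_HOUR * 2):
        try:
            handle_widgets(overload)
        except Exception as exc:
            raise AssertionError(f"Crashed under overload of {overload} widgets: {exc}")
```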
Reference:
Myers, G., The Art of Software Testing, New York: John Wiley & Sons, 1979.
Labels:
software-engineering,
testing
Location:
Springfield, MO, USA
Wednesday, September 18, 2019
Test Invalid Input
It is natural and common to produce test cases for as many acceptable input scenarios as possible. What is equally important -- but also uncommon -- is to produce an extensive set of test cases for all invalid or unexpected input.
For a simple example, let us say we are writing a program to sort lists of integers in the range of 0 to 100. Test lists should include some negative numbers, some nonintegral numbers, some alphabetic data, some null entries, and so on.
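A minimal sketch of such tests using pytest; sort_scores() is a hypothetical validating sorter written for this example:

```python
import pytest

def sort_scores(values):
    """Sort a list of integers in the range 0-100, rejecting anything else."""
    for v in values:
        if isinstance(v, bool) or not isinstance(v, int) or not 0 <= v <= 100:
            raise ValueError(f"invalid entry: {v!r}")
    return sorted(values)

@pytest.mark.parametrize("bad_input", [
    [5, -3, 10],        # negative number
    [5, 2.5, 10],       # nonintegral value
    [5, "seven", 10],   # alphabetic data
    [5, None, 10],      # null entry
])
def test_rejects_invalid_input(bad_input):
    with pytest.raises(ValueError):
        sort_scores(bad_input)
```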
Reference:
Myers, G., The Art of Software Testing, New York: John Wiley & Sons, 1979.
Labels:
software-engineering,
testing
Location:
Springfield, MO, USA