Tuesday, February 02, 2021

Manage by Variance

 

Variance: the difference between our results and what was expected.

It is difficult, if not impossible, to manage a project without a detailed plan. Once you have a plan, update it as necessary. Now that you have an up-to-date plan, your responsibility is to manage the project according to that plan. As you report your progress, report only the discrepancies between the plan and the actuals. While the project is underway, a progress report should be, "Everything is as stated in the plan except ...." This way, attention and resources can be applied to the problem areas.
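
If it helps to see the idea concretely, below is a minimal Java sketch of a manage-by-variance progress report. The task names, numbers, and one-day tolerance are invented for illustration; only tasks that have drifted from the plan get reported.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class VarianceReport {
        public static void main(String[] args) {
            double toleranceDays = 1.0;  // assumed: anything within a day is "on plan"

            // Planned vs. actual effort in days (illustrative numbers only).
            Map<String, double[]> tasks = new LinkedHashMap<>();
            tasks.put("Requirements review", new double[] {5, 5});
            tasks.put("Database schema",     new double[] {8, 12});
            tasks.put("REST endpoints",      new double[] {10, 9.5});

            System.out.println("Everything is as stated in the plan except:");
            tasks.forEach((name, days) -> {
                double variance = days[1] - days[0];  // actual minus expected
                if (Math.abs(variance) > toleranceDays) {
                    System.out.printf("  %s: %+.1f days%n", name, variance);
                }
            });
        }
    }

Only "Database schema" exceeds the tolerance, so only it shows up in the report.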

Monday, January 11, 2021

Miraculous Productivity Increases

The technology industry is saturated with people who preach that this tool or that technique will reduce development cost. At business meetings and conferences we all hear software managers claiming huge productivity increases of 50 to 100 percent from applying tool x, language y, or method z. Don't believe it! It is hype. Moderate increases in productivity of 3 to 5 percent are much more reasonable targets.

You should be happy with tools, languages, and methods that shave a few percentage points off your cost or add a few points to your quality. However, cost reduction makes no sense without an understanding of its impact on customer and employee satisfaction. If you really want to move the needle, focus on your customers and employees.

Happy employees lead to happy customers, which reduces customer churn and improves business performance.
I've blatantly used AnalyticsInHR's image because it addresses my point so well. I don't know anything about their business, but it appears they know something about business performance.


References:

DeMarco, T. and Lister, T., Peopleware, New York: Dorset House, 1987.

Chappell, B., 4-Day Workweek Boosted Workers' Productivity By 40%, Microsoft Japan Says, NPR, Nov 4, 2019.

Monday, December 14, 2020

What Version of Java Compiled This

Duke, Java's mascot
I try to use the latest stable version of Java/JDK when I can. There are a lot of performance improvements in newer versions of Java. Newer versions of OpenJDK also allow usage of Java Flight Recorder (that's a big deal). So when I have to work with a new Java application, I often start with the newest version of Java and work my way down until everything works. This can be a painful experience. When I have less time, I'll look up what version of Java was used to compile the application and then use that same version to run it. Even if you do have free time, this is a good idea because an application compiled for Java 8 or below will probably have problems running on Java 9 and beyond. A lot of core language classes changed.

You might be thinking, "Why is this?" That's a valid question. It comes down to poor testing. The people testing use the same environment as the developers, so they never see a problem. The application quietly relies on class files provided by the JDK instead of bundling everything it needs; all required classes should have been packaged with the application, but if the issue isn't caught in testing, that packaging never happens.

Anyway, here is how to check: if you have a JAR or WAR file, unzip it. On Linux, you can just run the unzip command against the file: unzip example.jar. On Windows, if you have a third-party archive tool, point it at the file. If you are using Windows' built-in zip handling, rename the file to end with .zip.

Once the files are extracted, navigate to a class file made by the provider of the application; something like com/yoakum/oneZero/security/. On Linux, run javap -verbose Example.class | grep "major"; on Windows, run javap -verbose Example.class | findstr "major". This will return a number that you can match to the following table (a small sketch after the table shows how to read the value without javap):

Class Major Version | Java Major Version
                ... | ...
                 59 | 15
                 58 | 14
                 57 | 13
                 56 | 12
                 55 | 11
                 54 | 10
                 53 | 9
                 52 | 8
                 51 | 7
                ... | ...
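
If javap isn't handy, the same number can be read straight from the class file header. Below is a minimal Java sketch (ClassVersion is my own name for it, not anything shipped with the JDK); it checks the magic number, then reads the minor and major versions and subtracts 44 from the major version to get the Java release (valid for Java 5 and later).

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class ClassVersion {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                int magic = in.readInt();            // every class file starts with 0xCAFEBABE
                if (magic != 0xCAFEBABE) {
                    System.err.println(args[0] + " is not a class file");
                    return;
                }
                int minor = in.readUnsignedShort();  // minor version, usually 0
                int major = in.readUnsignedShort();  // major version, e.g. 52
                System.out.println("Class major version: " + major
                        + " (compiled for Java " + (major - 44) + ")");
            }
        }
    }

Compile and run it with javac ClassVersion.java && java ClassVersion Example.class; for a major version of 52 it prints Java 8, which matches the table above.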

This isn't a problem unique to Java; every programming language I've ever worked with has had this issue. But Java shouldn't have it as badly as it does, and it wouldn't if businesses put more effort into packaging their applications completely, removing the risk of missing or conflicting dependency versions.

Friday, November 13, 2020

The Method Won't Save You


We have all heard the preaching of "method zealots" who say, "If you just adopt my method, most of your problems will disappear." Although many methods have been the subject of such ravings, the majority during the 1970s and early 1980s contained the word "structured" in their names. Those during the late 1980s through the early 2000s contained "object" in their names. And the methods from the mid-2000s to today contain the word "agile" in their names. Although each of these waves brings great insights, as well as quality-instilling software development constructs and steps, they are not panaceas. Organizations that are really good at developing quality software are good regardless of whether they use a structured, object-oriented, or agile methodology. Organizations with poor records will still have poor records after adopting the latest fad method.

As a manager, beware the false soothsayers who will promise great increases in either quality or productivity based on a new method. There is nothing wrong with adopting a new method, but if the organization has had productivity or quality issues in the past, try to uncover the source of that failure before you jump to a solution. It is highly unlikely that your method is to blame!


Reference:
Loy, P., "The Method Won't Save You (But It Can Help)," ACM Software Engineering Notes, January 1993.

Monday, October 26, 2020

Use An Appropriate Process Model

Two paths go through the woods. One suitable for a car and the other for a motorcycle.

Dozens of process models are available for software projects: agile, waterfall, SAFe, throwaway prototyping, incremental development, the spiral model, and so on. There is no process model that works for every project in an organization. Every project must select the process that makes the most sense for it, based on corporate culture, risk tolerance, application area, requirements volatility, and the extent to which the requirements are well understood.

Study your project's characteristics and select a process model that makes sense. For example, when building a prototype, you should follow a process that minimizes protocol, facilitates rapid development, and does not worry about checks and balances. When building a life-critical product, the opposite is true.

References

https://www.geeksforgeeks.org/software-processes-in-software-engineering/

Alexander, L., and Davis, A., "Criteria for the Selection of a Software Process Model," IEEE COMPSAC '91, Washington, DC: IEEE Computer Society Press.

Wednesday, September 30, 2020

Community Adoption of New Practices

Community discussion at Tech Conference

Community adoption of new practices can mean a lot of things, from getting a small team to try a new methodology or tool to convincing the world that a product or service is needed.

In 1996, Robert Metcalfe was awarded the IEEE Medal of Honor for "exemplary and sustained leadership in the development, standardization, and commercialization of Ethernet." The story of how Ethernet came to be follows the eight essential practices below. I've added headings for the department that typically handles each practice in a large company. You don't have to have different people or departments; I just find it helpful to get into the mindset of each role when working on a specific task.

  • Product Management
    • Sensing -- giving voice to a concern over a breakdown in the community
    • Envisioning -- designing a compelling story about a future without the breakdown
  • Development
    • Offering -- committing to do the work to produce that future
  • Sales
    • Adopting -- gaining commitments from early adopters to join the innovation for a trial period
    • Sustaining -- gaining commitments from majority adopters to join the innovation for an indefinite period
  • Implementation
    • Embodying -- working with the community until the new practice is fully embodied, ordinary, and transparent
    • Navigating -- moving ever closer to the goal despite surprises, contingencies, and obstacles
  • Marketing
    • Mobilizing -- building a network of supporters of the innovation from within dispersed communities
Many times over my career I've been assigned the job of getting multiple software development teams to adopt a practice: coding standards, secure coding practices, SOX requirements, architectural guidelines, etc. Eventually, the above eight practices were all performed, but not always in an order that was useful. I got better with experience, but my life would have been easier if I had had those practices written down to guide me, to make notes against, and to revisit when progress was slow or stopped. I hope you'll save the above list and use it for your future innovations.

References

Denning, P., "Avalanches Make Us All Innovators," Communications of the ACM, Vol. 63, No. 9 (Sept 2020), 32-34.

Sorry, I don't have a reference for Metcalfe's Ethernet story. I heard him talk about it at a conference in 2015 or 2016.

Monday, September 14, 2020

Misbehaving

Book cover of Misbehaving: The Making of Behavioral Economics by Richard Thaler

I've been listening to Misbehaving: The Making of Behavioral Economics by Richard Thaler on audiobook. In chapter 3, "The List," there is an interesting analysis of a company losing money because its managers are afraid of losing their jobs, which leads them to avoid certain projects. I saw only a little of this in my work experience.

In chapter 29, "Football," there is an analysis of how to game the draft to end up with an overall better team. Basically, it is chapter 3 but told in a more interesting manner. Replace the star football player with the star project and you get the same problem of putting all your resources in one basket. The analysis goes deeper and points out that the coaches and managers are not solely to blame. They are going after the star players (star projects) because these are what the owner (CEO/chairman) wants. I saw this more often in my work experience.

The higher up in the company an employee was, the less they looked at the project data and the more they looked at what their boss wanted. If you are a manager, director, VP, C-suite employee, or chairman, then please read this book. It won't give you answers, but it might force you to acknowledge that you are making decisions based on the wrong data. Then you can give the company a better chance at [overall] success.

Okay, I know, I'm skipping over the problem of "the manager who doesn't work on the boss' pet project still gets fired." Yeah, that sucks. You deserve a better boss. But guess what? You saved your department for another year and only had to sacrifice yourself. Had you done the doomed project you might have doomed the department too.

Monday, September 07, 2020

Understand Risks Up Front

Army Risk Assessment Matrix

On any software project it is impossible to predict exactly what will go wrong. However, something will go wrong. In the early stages of planning, delineate the largest risks associated with your project. For each, quantify the extent of the damage if the risk potential becomes a project reality and also quantify the likelihood that this will come to pass. The product of these two numbers is your risk exposure with respect to that particular risk.

At project inception, construct a decision tree that delineates all the things you could do to lower the exposure. Then either act on the results immediately, or develop plans to implement various actions at points when the exposure exceeds your acceptable limits. (Of course, specify in advance how you will recognize this situation so that you can implement the corrective action before it is too late.)
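
As a concrete illustration of the exposure calculation, here is a minimal Java sketch. The risks, probabilities, dollar figures, and the acceptable limit are all invented for illustration; exposure is simply likelihood times damage, and anything over the limit gets flagged for action.

    import java.util.Comparator;
    import java.util.List;

    public class RiskExposure {
        record Risk(String name, double probability, double damageUsd) {
            double exposure() { return probability * damageUsd; }  // likelihood x damage
        }

        public static void main(String[] args) {
            double acceptableLimit = 50_000;  // assumed threshold, set per project

            List<Risk> risks = List.of(
                    new Risk("Key developer leaves", 0.30, 200_000),
                    new Risk("Third-party API is late", 0.50, 80_000),
                    new Risk("Requirements churn", 0.20, 150_000));

            risks.stream()
                 .sorted(Comparator.comparingDouble(Risk::exposure).reversed())
                 .forEach(r -> System.out.printf("%-26s exposure $%,.0f%s%n",
                         r.name(), r.exposure(),
                         r.exposure() > acceptableLimit ? "  <-- reduce or plan now" : ""));
        }
    }

Sorting by exposure keeps attention on the handful of risks that actually matter, which is the whole point of the exercise.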


Reference:

Charette, R., Software Engineering Risk Analysis and Management, New York: McGraw-Hill, 1989.

Wednesday, July 29, 2020

Fix Requirements Specification Errors Now

Errors in requirements result in larger costs the longer they go without being fixed. (Posted by Jerry Yoakum)

Errors in the requirements specification will cost you:
  • 5 times more to find and fix if they remain until design.
  • 10 times more if they remain until coding.
  • 20 times more if they remain until unit testing.
  • 200 times more if they remain until delivery.
That is more than convincing evidence to fix them during the requirements phase!

Start using your software architects and key software engineers to review software requirements before they go to development. Don't do this as a waterfall process where the SRS goes from Product to Architecture to Development. Make it an agile process where people get involved before the SRS is "done." It will make changes easier and less painful.

PROTIP: Put the SRS in a version-control system such as Git, with each section in a separate file. This way anyone can propose changes to the SRS that can be reviewed, approved, and tracked. Add a script to combine the sections into a single file for easy handling (a sketch of one follows). Everyone knows this can be done, but I have yet to meet a single product management team that does it.
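
For example, here is a minimal Java sketch of such a combine script. The srs directory, the .md extension, and the numbered file names (01-introduction.md, 02-scope.md, ...) are assumptions made for illustration; adjust them to however you split your SRS.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Stream;

    public class CombineSrs {
        public static void main(String[] args) throws IOException {
            Path sections = Path.of("srs");             // one file per SRS section
            Path combined = Path.of("SRS-combined.md"); // single file for easy handling

            List<Path> parts;
            try (Stream<Path> files = Files.list(sections)) {
                parts = files.filter(p -> p.toString().endsWith(".md"))
                             .sorted()                  // numeric prefixes keep section order
                             .toList();
            }

            StringBuilder doc = new StringBuilder();
            for (Path part : parts) {
                doc.append(Files.readString(part)).append("\n\n");
            }
            Files.writeString(combined, doc.toString());
            System.out.println("Wrote " + combined + " from " + parts.size() + " sections");
        }
    }

Wire it into a CI job or a Git hook and the combined SRS stays current without anyone having to think about it.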


Reference:
Boehm, B., "Software Engineering," IEEE Transactions on Computers, December 1976.

Monday, July 27, 2020

Top 10 Project Management Risks


As a software architect, I've assisted in project management and filled the role of product owner. In all of these roles it is important to be familiar with the situations that most often cause software disasters. These (Boehm's classic top ten) are your most likely risks, but not all of them:

  • Personnel shortfalls
  • Unrealistic schedules and budgets
  • Developing the wrong functions and properties
  • Developing the wrong user interface
  • Gold plating
  • Continuing stream of requirements changes
  • Shortfalls in externally furnished components
  • Shortfalls in externally performed tasks
  • Real-time performance shortfalls
  • Straining computer-science capabilities
If you don't already have one, this list is a good starting point for a project planning checklist. Additionally, you should add risks unique to your environment, industry, and project, and then develop plans for how to mitigate them.


Reference:
Boehm, B., "Software Risks Management: Principles and Practices," IEEE Software, January 1991.