
muhuk's blog

Nature, to Be Commanded, Must Be Obeyed

January 26, 2015

When Decent Programmers Fail

Do you know the feeling when you put adequately experienced and skilled programmers, armed with the right tools, in just the right kind of environment, and they still fail to produce high quality software? When software projects fail, bad management™ often takes most of the blame. To be honest, I can’t recall an instance where bad management wasn’t cited by programmers as the only cause. This post is not about project management. I will just accept that bad management is a major factor in the failure of software projects.

Deadlines Are Not Metrics

Failure is commonly defined as the inability to finish a project on time, and on budget by extension. Perhaps this is because bugs, lack of tests or inflexible design are considered to be internal details of the project and not of interest for the big picture. I would understand if a customer or a manager had this mindset. But strangely, programmers have this “it took too long” mindset as well. And guess what; we missed the deadlines because of management!

I will use a different definition of failure: “didn’t deliver high quality code”. Note that it still has the notion of time in it; high quality software wasn’t delivered. But the emphasis is on quality. From Wikipedia:

Quality in business, engineering and manufacturing has a pragmatic interpretation as the non-inferiority or superiority of something; it is also defined as fitness for purpose.

We don’t want our accounting software to be able to manage our music collection, or our internet browser to visit websites on its own while we are busy with something else. We actually want them not to do such things. We want our software to function as we intended. That is high quality software. Often a piece of code needs to be worked on for a long time: new features must be added, existing features must be improved. So the quality of the software is not just how well it fits its use cases, but also how easily it can be adapted to new requirements.

Five Circles of Software Development Hell

  • Code doesn’t work at all. It has fatal errors and/or missing features. This wouldn’t even be called a deliverable. It probably wasn’t delivered anyway.

    Sometimes the problem we are tackling turns out to be more complex than we had anticipated. We often must start projects with incomplete information; we must make assumptions about the scope, about the resources needed and also about how we should design the system. In the end our code might not work, and it’s not likely to get any better if we keep pounding on our keyboards. We are at a dead end.

    There are three key practices to avoid this hell: prototyping, design and requirements gathering. Creating prototypes for the most complicated parts of our project might help us realize earlier that our current approach is not going to work. Stepping away from our IDE and thinking deep and hard about the problem at hand could help us discover better ways to attack it. And perhaps we have failed because we missed that vital bit of information our customer would have given us, had we been more thorough with our requirements gathering process.[1]

  • Code doesn’t work as intended. It has bugs, unpredictable performance, etc.

    We are assuming decent programmers: ones with enough skill and experience. But they are still human. Sometimes we get ahead of ourselves and cannot stay calm and objective. “We don’t need tests for such a simple thing.” “This optimization is needed, we don’t need to measure what we already know.” “It’s a mess but we can’t waste time refactoring.” etc.

    A million things can be said about technical debt and how to avoid it. But it all comes down to discipline. It’s not that a decent programmer is unaware of best practices. He is producing low quality software because he can’t restrain himself.

  • Code is hard to read. When we involve other programmers, they prefer self-immolation to reading the code.

    Well-tuned code might not always be well written. In some cases, improving readability might cost some runtime performance (the first sketch after this list shows a small example). Ideally we would optimize for both maximum performance and maximum readability. But unless we are retiring the code, readability should come before performance. Performance can be improved later, but unreadable code is inaccessible. It’s unlikely anyone will bother to touch it.

    Readability is often underrated and regarded as superfluous decoration. This is similar to how IT is treated as an inessential department until some system breaks down and it suddenly becomes critical. While new features are added every day, readability is happily sacrificed for speed. Only when new hires turn out to have great difficulty getting productive does readability become a priority. But by then it is already too late.

  • Code is hard to reason about. Those brave souls who managed to read the code don’t have much of an idea how it works. Including the original author.

    Being deep enough in the previous two hells wins us a free, compulsory membership to this magical place. It’s magical because we don’t know why the code worked or didn’t work, and sometimes we don’t even know whether it worked.

    To avoid this hell, in addition to the previous suggestions, modular design must be practised. If the code is composed of sufficiently small, conceptually independent parts, it is easier to replace the parts that are hard to reason about (the second sketch after this list hints at what such a boundary could look like). If the overall design is monolithic, untangling and fixing its parts would be very difficult. And by very difficult I mean it’s not going to happen unless jobs are on the line.

  • Code is hard to change. An innocent little change here causes major breakdowns in other, seemingly unrelated parts of the code (the third sketch after this list illustrates this kind of hidden coupling).

    This is the final destination for low quality code. It doesn’t have any value beyond what it provides in production.

    Code is different from other forms of writing in that it gets executed by a computer to achieve side effects. But it is also never really done. Even if it does exactly what we want now and in the future, we will have to adapt it when our operating system or hardware becomes obsolete.

This list is in no particular order. There is a lot of overlap among these issues. In fact, they are not likely to be encountered alone. If it seems that way, it’s probably because the other issues haven’t surfaced yet.
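To make the readability trade-off a bit more concrete, here is a minimal sketch. The post is not tied to any particular language, so Python is an arbitrary choice, and both functions are invented for illustration; each answers whether a positive integer is a power of two.

    def is_power_of_two_fast(n):
        # Bit trick: a power of two has exactly one bit set, so clearing
        # the lowest set bit must leave zero. Fast, but the intent is hidden.
        return n & (n - 1) == 0

    def is_power_of_two_readable(n):
        # Straightforward version: keep halving while the number is even.
        # A little slower, but a reader can verify it at a glance.
        while n > 1:
            if n % 2 != 0:
                return False
            n //= 2
        return True

Both return the same answers for positive integers; the difference is how much the next reader has to know before trusting them.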
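The “sufficiently small, conceptually independent parts” from the hard-to-reason-about item can be as simple as a narrow seam. This is only a sketch with invented names (PricingRule, checkout, flat_discount), not a prescription:

    from typing import Callable

    # The boundary: any function from subtotal to final price.
    PricingRule = Callable[[float], float]

    def flat_discount(rate: float) -> PricingRule:
        def rule(subtotal: float) -> float:
            return subtotal * (1 - rate)
        return rule

    def checkout(subtotal: float, pricing_rule: PricingRule) -> float:
        # checkout() only knows the contract, not the rule's internals,
        # so a rule that became hard to reason about can be rewritten
        # or replaced without touching the calling code.
        return pricing_rule(subtotal)

    print(checkout(100.0, flat_discount(0.1)))  # 90.0

The point is not the closure or the type hint; it is that the confusing part has one well-defined boundary around it.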
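And here is the kind of hidden coupling that makes code hard to change, again with invented names. Two features that look unrelated share a module-level dictionary instead of an explicit interface:

    SETTINGS = {"currency": "USD", "tax": 0.08}

    def format_price(amount):
        # Invoicing code: quietly assumes "currency" is always present.
        return f'{SETTINGS["currency"]} {amount:.2f}'

    def load_user_preferences(prefs):
        # Preferences code: an "innocent" cleanup that rebuilds SETTINGS
        # from user input silently drops the keys invoicing depends on.
        SETTINGS.clear()
        SETTINGS.update(prefs)

    load_user_preferences({"theme": "dark"})
    try:
        print(format_price(9.99))
    except KeyError as err:
        # The failure surfaces far away from the change that caused it.
        print("invoicing broke:", err)

Nothing in load_user_preferences looks dangerous on its own; that is exactly what makes this kind of code expensive to change.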

Good Programmers vs Decent Programmers

“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.”

—Bill Gates

Good programmers write high quality software. They are not necessarily all geniuses. But they pay attention to the task at hand and, at the same time, to the design of the package they are working in, the package on top of it, and the package that uses that… they write code with the entire architecture in mind. Decent programmers are not bad programmers. If they are working with a well maintained codebase, they can at least write code of matching quality. Good programmers are good not because they produce 10 times the code. Perhaps they are twice as productive, perhaps a little more. But it doesn’t matter how much code is produced if you are going to end up in software development hell.

[1] I refrained from writing “fail fast” here. I think this catchphrase is really easy to misinterpret as “failure is the goal”, so I try to avoid it. But yes, if failure is inevitable, it’s best to know about it as soon as possible.

If you have any questions, suggestions or corrections, feel free to drop me a line.