94% Of Projects Fail, Or Maybe 59%

Search Google for "94% of projects fail." You'll find something like 26 million results. I can also guarantee that 77% of you won't try. Or maybe it's 35%; I forget which.

The point is that we are constantly badgered by precise percentages that 61% of you believe are actually meaningful. I considered having JavaScript generate that number, because it would be 25% more accurate.
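
In that spirit, here is a minimal sketch of the generator I was tempted to use, in TypeScript rather than JavaScript; the function name and the 59-to-94 range are my own invented parameters, not anything the articles I'm mocking actually compute:

```typescript
// Hypothetical "statistic generator" (name and range invented for
// illustration): returns a fresh, authoritative-looking failure
// percentage on every run.
function bogusStat(min: number = 59, max: number = 94): string {
  const value = min + Math.random() * (max - min);
  return `${value.toFixed(2)}% of projects fail`;
}

console.log(bogusStat()); // e.g. "83.41% of projects fail"
```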

Given the current nightmare that is healthcare.gov, at least 85% of the articles emphasizing how much better the author could have done it quote some rate between 59% and 94% at which projects fail if they don't follow the author's ideas. In reality I don't believe 49% of them.

"Enough with the ridiculous percentages already," some percentage of you are likely thinking. Exactly my point.

If one egg is cracked and moldy in a box of 12, I can safely calculate what percentage is bad (approximately 8.33333333%). But trying to put some precise percentage on project failures is 100% silly, which is easy to calculate as well.
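
The egg arithmetic, at least, checks out. A quick sketch (variable names mine) of the one calculation in this post I'll actually stand behind:

```typescript
// One bad egg in a carton of 12: the rare failure rate you can
// actually compute and defend.
const badEggs = 1;
const totalEggs = 12;
const percentBad = (badEggs / totalEggs) * 100;
console.log(`${percentBad.toFixed(8)}% bad`); // "8.33333333% bad"
```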

Some months ago a seminar called "Why Projects Fail" was offered at work. I didn't attend, to keep from getting aggravated. Or maybe from missing lunch.

The first problem is: what is a project? When does it start, and when, if ever, does it end? Do you count from the first day someone imagines doing something, or the first day of coding, or the first meeting? What is the end: version 1.0 in production, the last day of coding, the first article about it mentioning a 94% likelihood of failure? What if the project continues for decades (our parent company has a huge system designed in the early '60s), or what if the project is cancelled after two weeks?

Then what is success, or better, what is failure? It goes into production? What if no customers use it? What if no one even knows it exists? What if the code ships with zero bugs but is so poorly designed no customer can figure it out? Is that a success (on time, no bugs) or a failure (no one can operate it)? What if everything is perfect (on time, no bugs, customers paying massive amounts for it) and then misuse kills people? Success or failure? Or both?

The problem with putting silly precise numbers on these things is that people are more likely to believe them, leading to all sorts of stupid advice being backed by those numbers, often with no reference, and likely misquoting the original to boot.

After reading my favorite book, The Leprechauns of Software Engineering, you will start to wonder about a lot of things we assume must be right. The book goes back in time to find the roots of many things we believe (and has a chapter on this exact numerical topic).

Healthcare.gov clearly has a lot of problems, which may or may not get fixed. If they do, will it still be a failure? Or are failure and success fuzzy concepts, along with the indistinct project, that really don't lend themselves to precise answers as to why things turned out the way they did? "Why Projects Fail" may as well have been titled "Outcomes of Indistinct Fuzz," but then who would have come?

Developing software is complex enough that trying to reduce it to simplistic numbers doesn't really help anyone. I've seen people argue, with the 94% stat in hand, that healthcare.gov failed because it wasn't Agile, and others that it failed because it wasn't Waterfall, or because it had insufficient requirements, or too many. People have even resorted to saying specific technologies would have made it work perfectly, and that projects fail (94% of them, in fact) because they used other technologies. The law of random numbers means you can use numbers to produce any random argument. But people who quote numbers are more likely to be read, so any number will do.

The real reason projects fail is never some simple thing, any more than you can easily define what a project is, or what an outcome is, or why it happened. Software is one of the most complex things people do, and in fact anything involving people doing stuff is going to be complicated. Complex activities are by nature going to be very difficult to measure. You can figure out the big-O notation of a sorting algorithm, but try doing it for an entire application, which is a combination of thousands of little algorithmic bits.

It's OK, and even useful, to discuss how to make things come out better, but you have to be cognizant that you cannot know what will happen, or even what did happen, with an understanding perfect enough to back precise calculations. Using bogus statistics to back your argument really does no one any good, but people do it anyway because it makes them seem smart and thorough. Baseball is a great sport to calculate stuff on: everyone knows that a .300 hitter fails 14 times out of 20 and a .250 hitter fails 15 times out of 20. But when you look at who won the World Series, the numbers rarely tell the whole story. If, in this world of easy math, you can't easily predict the outcome based on the numbers, how much worse is it in software, where nothing is easy to calculate?
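
For the curious, the batting-average arithmetic is easy to verify; a small sketch (the failuresPerTwenty helper is my own invention, just re-deriving the numbers above):

```typescript
// Convert a batting average into failures per 20 at-bats:
// hits per 20 at-bats is average * 20, the rest are failures.
function failuresPerTwenty(average: number): number {
  return 20 - average * 20;
}

console.log(failuresPerTwenty(0.300)); // 14 (a .300 hitter)
console.log(failuresPerTwenty(0.250)); // 15 (a .250 hitter)
```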

Despite 77% of people saying they want a clever ending to a blog post, I would rather please the 33% who don't care.

"Wait," you say, "the math doesn't add up!" It sure doesn't.