The Unreasonable Ineffectiveness of Estimates
In my long career, I've dealt with many kinds of estimating, from the early days in the 1980s, when there was no estimating because no one had any idea how to do it, to my last job, where estimation was always demanded but never actually relevant.
In the following posts on estimation, I will highlight two projects from my previous job, both with 8-figure budgets: one that eventually shipped and one that was canceled at the last minute. The details will remain anonymous out of consideration for my former employer; in any case, both projects have since been replaced by something else.
Estimation generally means being asked to provide a likely amount of time and resources needed to complete some task, small or large. A small job might be a minor change to existing software; a large one might be a project consuming months or even years and involving hundreds of people. Those who want the job done, who fund it, or who have a business need to fulfill will generally expect some idea of when it might be completed or what it will cost. The problem, of course, is in the details provided.
Software that needs to be built or modified comes in many forms. You might be building software under contract for a customer, building products for your employer to sell, building software that supports your employer's real business (my stories cover this), building software for your employer that is the business (perhaps supported by ads or the like), or creating software purely for internal use. Each of these requires estimation for different reasons.
In my experience, the following conditions must hold for any estimation to be meaningful. The timeframe being estimated matters as well: a two-week sprint is far easier to estimate than a 16-month project.
While the following are essential, I should add that I have never seen all of these at once!
(1) The team has to know the subject area well. The more experience you have building software, understanding the business area, or working with the technology involved, the easier it is to base an estimate on what happened before. For example, if you are building a General Ledger app for the 8th time, you probably know what to expect. Experienced in-house teams are also more likely than recently hired contractors to know what can go wrong, how to approach the project, and what your company actually does.
(2) The requirements are clear and relatively complete, and whoever presents them is available and able to explain them. No matter how they are written, the requirements form the basis of what you are expected to do, and should show you enough to build a decent estimate.
(3) Only minimal changes are expected during the estimated project timeframe. This would seem to follow logically from (2), but anything you don't know, you can't estimate without fudge factors or guesswork. If you expect many changes, you will have to add or multiply time, which means your estimate is not very meaningful.
(4) The project timeline should be based on the estimates, rather than the estimates being forced to fit a predetermined timeline. It's incredible how often the latter happens. Of course, in Scrum the sprint is a fixed timeline, but that's fine if you pick work that can reasonably be completed within the sprint. I have generally seen this issue with project-long estimates. Sometimes management decides you need to move faster and forces you to cram more in, making any estimate made at the beginning difficult or pointless.
(5) Dependencies are another complicating factor: other teams building parts you need, services, or even hardware. How long your piece will take may be difficult to estimate when it depends on teams with poorer estimates of their own, who may in turn depend on yet more teams. The more your estimate depends only on what you control, the more meaningful it is likely to be (the sketch after this list makes the point with numbers).
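To make the fudge-factor and dependency problems concrete, here is a minimal Monte Carlo sketch; the phases, durations, and multiplier ranges are all invented for illustration. The idea is that work you don't control gets a wider uncertainty multiplier, and because the phases run sequentially, the spread compounds well beyond the best case.

```python
import random

# Illustrative only: three sequential phases, each with a best-case
# estimate in weeks and an uncertainty multiplier range ("fudge factor").
# The less control you have over a phase, the wider its range.
phases = [
    ("our service",        4, 1.0, 1.5),  # name, base weeks, min/max multiplier
    ("upstream team API",  6, 1.0, 2.5),  # someone else's estimate: wider range
    ("third-party vendor", 8, 1.0, 4.0),  # barely within anyone's control
]

def simulate_total() -> float:
    # Phases run one after another, so their uncertainties accumulate.
    return sum(base * random.uniform(lo, hi) for _, base, lo, hi in phases)

runs = sorted(simulate_total() for _ in range(10_000))
best_case = sum(base for _, base, _, _ in phases)
print(f"best case:       {best_case:5.1f} weeks")
print(f"median outcome:  {runs[len(runs) // 2]:5.1f} weeks")
print(f"90th percentile: {runs[int(len(runs) * 0.9)]:5.1f} weeks")
```

The single number management usually asks for is the best case; the median here lands at roughly double it, and that gap is exactly what the fudge factors were trying to express.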
All of this seems obvious, yet it rarely works out so neatly. The stories I will tell in the following posts show a lack of every one of them.
The usual story regarding the need for estimates is that management needs to know whether the project is worth doing, is affordable, or can be completed in an acceptable timeframe. Unfortunately, in my experience, sometimes they want it no matter what you say, even changing your estimate to make it more palatable; of course, you then have to try to do the impossible! I have also seen estimates used to purposefully kill projects, both by programming teams and through office politics.
In my first job, in the 1980s at a defense company, few people did estimates: projects took a long time, and few had any experience building software, so estimates were not critical. At my first little software startup in 1985, we were building a product to sell based on my idea (Trapeze), and none of us had ever built a Mac app before. The Mac industry was so new that no one could know how long it would take.
For my second company, started in 1988, we worked on Persuasion for its author (it was later published by Aldus) and then built Deltagraph for its publisher over five years and multiple versions. The development cycle in those days was at least 6-8 months, starting with a desired set of features and adjusting as time passed. Most estimates were comparative: given the time left, which of the remaining features would be easiest to complete, with everything else pushed into the next release cycle. This type of estimation is somewhat more straightforward; we knew exactly what we were building and what was on the list, and it was easy to adjust. No one demanded we estimate each feature individually.
Working for a consulting firm, I did a project for a travel company where the requirements were basically "fix our refund process." I was expected to estimate how long it would take, which was a guess, since it was not simply implementing something already decided on but fixing a broken business process and providing a technical solution. I gave them a number I thought reasonable. Midway through implementation, the business suddenly made a major request that would invalidate at least a week's worth of existing work. I informed them of this, but they refused to pay for it and argued about it for the next month; finally, my employer told me to stop working when the original time ran out. Sadly, their own programmers had to finish the work, and I was not allowed to speak with them, so I never found out whether they finished it. Estimates based on vague and minimal requirements are not worth much, and customers complaining about missed estimates in such cases is sadly common.
In my last job, services were often many layers deep, each owned by a different team, often far removed from what we were tasked with. Sometimes the teams we needed something from were third-party vendors who usually couldn't care less. In one case, such a vendor was tasked with adding something to their extensive system that was not their normal business (chosen for God knows what reason), and they communicated only via a single WSDL controlling an enormous XML document, most of which was unnecessary for our use case. We had to build an intermediate server to convert this XML to JSON for our service layer to consume. Their management and development teams were on opposite sides of the Earth, making communication difficult. In addition, deployments of their system often led to multiple days when it didn't work at all. Trying to estimate anything here was pointless, as their contribution was essentially random time.
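For flavor, here is a minimal sketch of the kind of translation that intermediate server did. The element names and JSON shape are hypothetical stand-ins; the real document was enormous, and the whole point was to keep only the handful of fields our services actually needed.

```python
import json
import xml.etree.ElementTree as ET

def vendor_xml_to_json(xml_payload: str) -> str:
    """Reduce a huge vendor XML document to the small JSON our services need."""
    root = ET.fromstring(xml_payload)
    # Hypothetical field names: pull out only what the service layer
    # cares about and ignore the rest of the document entirely.
    order = {
        "id":     root.findtext(".//OrderId"),
        "status": root.findtext(".//Status"),
        "items": [
            {"sku": item.findtext("Sku"), "qty": int(item.findtext("Qty") or 0)}
            for item in root.iter("LineItem")
        ],
    }
    return json.dumps(order)
```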
During both projects, executives demanded constant reports on when work would be completed, often in the form of a burndown chart showing when the end of all sprints would be reached (yes, at the start of the project we were required to enter how many story points each sprint would contain). Continuous changes (some formal, some simply demands) made the charts look like sawtooth waves. Since the end date changed continuously, many execs complained we were not completing enough while simultaneously adding changes. One PM I knew said he had built several dozen custom reports from the data in our "Agile" tracking system despite knowing they were all rather pointless. Estimating an entire project upfront on a sprint-by-sprint basis, knowing constant (often daily) changes are coming, is about as pointless as it gets.
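A toy simulation (all numbers invented) shows why the charts looked like that: the team burns points at a steady velocity, but every batch of added work pushes the remaining-work line back up and the projected end date out again.

```python
# Illustrative burndown with scope added mid-flight.
remaining = 400                       # story points planned up front
velocity = 25                         # points completed per sprint
additions = {4: 60, 7: 45, 10: 80}    # sprint -> points added by new demands

sprint = 0
while remaining > 0:
    sprint += 1
    remaining = remaining - velocity + additions.get(sprint, 0)
    projected_end = sprint + max(remaining, 0) / velocity
    print(f"sprint {sprint:2}: {max(remaining, 0):3} points left, "
          f"projected done ~ sprint {projected_end:.0f}")
```

Every entry in `additions` produces another tooth in the sawtooth, and the projected end sprint jumps with it, no matter how steady the velocity is.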
Eventually, despite all of this, one project shipped and the other was completed but canceled. Were they late? Yes. No. How would you even tell? My feeling over my entire career has been that things take as long as they need to, irrespective of what you initially thought.
After the two projects I will describe, our division tried something different to please executives who wanted to know when everything would be complete. All teams were required to use a spreadsheet-style application to enter completion dates for the major parts of their piece of the project. The problem was that there was no way to model dependencies; each team simply entered its own dates. Naturally, this made no sense, yet executives treated the latest date in the spreadsheet as the project's completion date, and that date rapidly moved further and further out. While some pieces could be built before their dependencies were complete, no QA could happen until all dependent systems were sufficiently tested, so everything got later and later.
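A tiny sketch (the team names, durations, and dependency graph are all invented) shows the gap between the spreadsheet's view and reality: the max of independent dates says nothing once work actually chains.

```python
# Hypothetical teams, each with a duration in weeks and the teams
# it must wait on before it can finish.
weeks = {"vendor-gateway": 10, "services": 8, "ui": 6, "qa": 4}
deps = {
    "vendor-gateway": [],
    "services": ["vendor-gateway"],
    "ui":       ["services"],
    "qa":       ["ui", "services", "vendor-gateway"],  # QA waits on everyone
}

def finish(team: str) -> int:
    # A team finishes after its slowest dependency, plus its own work.
    return max((finish(d) for d in deps[team]), default=0) + weeks[team]

print("naive spreadsheet max: ", max(weeks.values()), "weeks")  # 10
print("with dependencies (qa):", finish("qa"), "weeks")         # 28
```

Ten weeks versus twenty-eight, and that is before anyone's individual estimate slips.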
At the start of my career, no one knew enough to estimate anything, so we didn't, and things still got done. At the end of my career, we were expected to produce mostly pointless estimates that acted as placebos for anxious executives. Estimating is not always necessary, but it can be valuable under reasonable circumstances. Perfect results, however, are highly unlikely.