After 34 years of being a programmer I still get a kick out of people thinking there is a magical way to make software estimation work. Throughout my career I’ve heard person after person declare that estimation is broken, but claim they have some way to make it work better.

They don’t. Correctly estimating how long something will take to code is unlikely at best. I’ve never been able to do it, and every time I see someone try, it’s always wrong.
In fact it’s difficult to even know how bad your estimations are. After all, you are trying to guess how long it will take to build something that is not perfectly defined, done by imperfect people, in circumstances that are likely to keep changing. If it takes longer than you estimated, was it because your estimate was wrong, because you did not have enough information when you started, or because the estimate became a deadline that changed the development process itself? Software development is an iterative activity, where each step influences the following ones. Outside influences like management demands, market changes, personnel changes and external changes like tool or framework updates all affect how you get to the finish line.
Frankly, any software estimate is indistinguishable from gambling at roulette. Estimating any kind of timeframe depends on how perfectly you understand all the factors that will influence it. Programming is not a simple linear equation but a highly complex and often chaotic activity.
I’ve read a lot of articles like “Why Software Development Time Estimation Doesn’t Work & Alternatives” which seem to think that collecting data will make estimations better over time.
It doesn’t. Each estimated unit of work is only vaguely related to the others, and your imperfect knowledge of one rarely carries over to the next. If I were to write the same app multiple times I would get better at estimating it, but only because I am doing the exact same thing. One page in a web app may be only barely related to another even if they seem similar, due to how they are connected to other pages, other areas of the application, or other systems, or because they are built by different individuals. For example, a search page for hotels and a search page for flights are both searches, yet the underlying data, API calls and details are radically different. Even the subtleties of design have a large impact on how long development takes: fitting details on the page may require radically different coding, as may all the exceptions that weren’t obvious until you got that far.
Today some people think that Scrum, with its typical two-week sprints, somehow makes estimations more accurate. After all, you are estimating smaller units. Yet even here there are complicating factors: how tasks are broken down into estimable units, how estimations are collected (planning poker or whatever you use), whether metrics are collected on the completion and quality of estimations, and the honesty of the individuals involved all affect how well the sprint estimates turn out.
I’ve noticed in Scrum that tracking how well people estimate and complete tasks (velocity reports, etc.) itself affects the amount of time that is estimated. If you often fail to complete tasks within the sprint, you or your team looks bad, so you deliberately or inadvertently pad the next estimates. If QA also gives estimates, their work is often crammed into the end of a sprint, which makes everyone look bad because the task is not really complete. So in the following sprint planning, task estimates go up, which often leads to tasks having to be split; now each subtask gets the same treatment, making everything take longer. Of course this makes the reports look better, but the shipping date gets further away. Velocity based on some arbitrary story-point or function-point calculation can likewise be gamed to make it look like the estimate is being met. You wind up fixing the roulette table.
Building a fuzzy estimate on fuzzy guesses that are padded results in something that might be better suited to flipping a coin. Of course people want the comfort of knowing how hard something will be to make and when it will be complete—or even if it should be done at all—so estimates are demanded. Then the estimate becomes a deadline and a budget limit and people start being asked to work longer hours to “get back on schedule” or more people are added and the end result is that everyone looks incompetent when the estimate turns out wrong. Which it almost always is.
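How fuzzy per-task guesses compound is easy to see with a toy Monte Carlo sketch. This is entirely my own illustration with a made-up log-normal error model, not real project data: assume each task’s actual duration is a right-skewed multiple of its estimate, so overruns run larger than underruns, and add the tasks up.

```python
import random

random.seed(42)

def fraction_over_budget(estimates, trials=10_000):
    """Toy model: each task's actual time is its estimate times a
    right-skewed (log-normal) factor, so a task runs over by more
    than it ever runs under. Returns the fraction of simulated
    projects that exceed the total estimated budget."""
    budget = sum(estimates)
    overruns = 0
    for _ in range(trials):
        actual = sum(e * random.lognormvariate(0, 0.5) for e in estimates)
        if actual > budget:
            overruns += 1
    return overruns / trials

# Ten tasks, each estimated at 5 days: even though each task's
# *median* error is zero, most simulated projects blow the budget.
rate = fraction_over_budget([5] * 10)
print(f"Projects over budget: {rate:.0%}")
```

The point of the sketch is that the asymmetry, not the size, of the per-task errors is what sinks the total: skewed errors don’t cancel out when summed, so the aggregate estimate is biased low even when every individual guess looks reasonable.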
Long ago I came to the conclusion that building software takes as long as it takes, which seems flippant but is much more realistic.
When we were building Deltagraph from 1988 to 1993, each version started with a couple of pages of feature ideas from both the product manager at the publisher and our development team. We aimed to ship a major release only every 6–8 months, since the publisher had to charge for the floppies, manual, box and shipping (unlike today) and we didn’t want to burden the customers. Each feature was built iteratively with a lot of try-and-modify. As we got further into the cycle, some features would be postponed until a future release and some new ones added as market forces demanded. Toward the end of the cycle we would chop functionality that wasn’t ready, or wasn’t finished enough. Then we would complete what we had, and that would go on the master disks. There were no estimations other than vague ones; it was easier to partially implement features to discover how complex they might be or how much extra effort they might take to complete. We worked together, with discussions every day about where we were and what we thought we could still do. There were no iterations as in Scrum, and we built every day, so we got feedback every day.
At a job in the past decade there was a project that seemed fairly simple to me, not much more than a CRUD app; I guessed maybe two weeks for one person. Yet management insisted on detailing every last item. We met weekly for six months and generated a 150-page document to substantiate how long it would take. In that time the project could have been done ten times over. Sometimes it takes longer to gather enough knowledge up front to make a credible estimate than to simply make a wild guess, implement it, and iterate until you get what you want. In my post The Most agile Project I Ever Did we never could estimate how long it would take, since we knew almost nothing at the start of the project, but it still got done in a reasonable time and made everyone happy.
Of course my two examples prove nothing except that you can build software without estimates, provided you are willing to work iteratively, accept that it will get done when it gets done, and maybe wind up with less.
People want estimates for various reasons: fear of taking too much time or budget, a customer demanding a fixed-bid contract, or deadlines imposed by management or external entities. The need for estimates is driven by fear, and having even an incorrect one can ameliorate those fears. Of course Scrum’s basic design requires that you can plan a sprint; otherwise there is little point in calling it Scrum, and maybe that’s a good thing (I don’t care much for it). But if the quality of your development is judged entirely by the accuracy of your estimations, you will likely be disappointed.
Estimating how long something will take is always difficult, and the less information you have to go on, the worse the estimate becomes, which of course describes programming in general: you rarely if ever have perfect knowledge. To me it’s a pointless waste of time, but of course not everyone will accept that. In the real world, if someone demands you estimate how long software will take to code, you will likely multiply by 3 to be safe, the requestor will multiply by 3 to be safe, and upper management or the customer will assume it’s a deadline. In the end it’s a random number which hopefully gives you enough time to get it done before you get yelled at.
Better multiply it by 10.