The Myth of Reusability
The story always begins with a complaint. Executives complain that software is taking too long and costing too much to build.
Some bright person suggests that money could be saved if software were built to be reusable: spend slightly more up front on components that could be leveraged in future development, saving enormous amounts of money and effort.
Executives jump on it, each pushing the idea as their answer to cost and performance issues, and wanting to reap the benefits of being seen as proactive.
Everyone is encouraged to identify and build useful components, or to take existing code and make it more reusable. New projects jump on the benefits and build new applications that seem to fly into production.
A few people note that the components are almost what they need and just require a few extra tweaks or options; perhaps with some abstraction a broader set of uses might emerge. The components clearly need someone to support them, so a new organization is created to manage them and ensure quality and usability.
Soon the new group begins to control access, since letting multiple groups make whatever changes they need might impact others’ use. A board is created to monitor and approve use of the components to ensure all of them are used correctly.
The growing dependence on this reusable code is further complicated by a hierarchy of dependencies, including various open source libraries which themselves have a web of dependencies, necessitating even more careful control. Changes take longer and longer, since any change has to be tested against a whole host of applications, and new features take a back seat. Open and closed source dependencies fall further and further out of date, creating a storm of slower and slower updates.
Now major complaints appear that software is taking too long to build and costing too much money. Market opportunities are lost because newer dependent libraries and OS functionality can’t be utilized. Executives who supported the idea in the beginning now have to look the other way and mumble something about timelines only being delayed to ensure reusability. Eventually that begins to sound hollow.
Now individual groups begin to break away and build apps independently of the components, and ship much faster than their dependent brethren. Soon the components are abandoned except by those unable or forbidden to move off them. The executives who backed the successful teams get promoted and the rest leave the company.
Sound familiar? Maybe not exactly like this but often in large companies this type of scenario plays out. It’s not that reusability is a bad thing by itself, but trying to make everything reusable across an entire company is silly.
The problem is that the world is not standing still, and when you start to build reusable software on a large scale it eventually becomes not a benefit but an obstacle. Imagine deciding a few years ago to build reusable web components on top of Backbone. At the travel company four years ago, Backbone was the JavaScript framework we had picked for our mobile web apps. Today, of course, no one uses it. If you had built a lot of iOS components around the same time, you would have assumed the iOS 6 design language, which changed radically with iOS 7. Sure, you could have been really careful and built perfectly updatable code, but the above scenario is far more likely: too many dependencies that can’t be updated without breaking existing applications, or at least not without massive testing.
Some software lives a lot longer, of course; you could point to the COBOL applications from the 1960s that hardly ever change, or the airline reservation systems from the same era that still track your seat. These are terrible examples to emulate: they keep working only with enormous difficulty and expense, and rarely get changed because of it. You don’t want your company’s software to be so hard to update that your competitors run rings around you.
Of course sometimes your reusable software is a market benefit: it’s so hard to work with that only your high-priced consultants are willing to work on it, and then profit!
Often new programming languages, frameworks, techniques, and ideas make building software much easier than trying to reuse something that really is creaky, old, and inflexible. You could benefit from Swift instead of dragging around all that five-year-old Objective-C no one understands anymore. You could build in Rust instead of all that PHP. You could build in Unreal instead of that 18-year-old game engine (I know this one personally). Not everything new is better, but this industry continuously comes up with incredible new opportunities.
Sometimes doing it anew is the right answer. You can’t always claim that existing components shouldn’t be rewritten because you are familiar with their quirks and bugs and writing new ones might be riskier. You might actually wind up with something easier to work with and ship your products before the other people do. As the pace of change increases, the risk of the old dragging you down goes up, and the risk of the new becomes far easier to take on.
One thing I’ve said many times about personal learning of the new applies to companies large and small as well: the giant Technology Steamroller is running eternally behind you, waiting for you to falter. Nail yourself to the floor and you will eventually wind up a design in the asphalt. Of course you could go too far ahead and get burned as well; the new is not guaranteed to be better, but if enough new things appear, some of them will be worth using.
The challenge is to find ways to make development more cost-effective and productive without generating massive boat anchors. Dealing with a changing world is not easy, but you have to keep moving, and sometimes that might mean sacrificing reusability in order to maintain flexibility.