Warning Signs Your Web Application Project May Fail
March 24, 2007
A few more points to consider when building or managing a web application project.
1. It's been __ months and there's nothing to see
Seeing is believing, and with a web application it shouldn't take long to get something running. If a significant project shows no visible progress in the first third of the development schedule (for SDLC/waterfall) or within the first few iterations (for an agile project), you should start to worry. If the team has been working for 6 or 9 months and there is still no URL you can visit to see the current application running, odds are the application is never going to be completed.
You would think this is obvious, yet I have seen it happen often enough to know it's not uncommon. Sometimes the development team is incapable of making progress due to inexperience or lack of ability; even more commonly it's not their fault: the basic requirements keep changing (and not in an agile way), or management can't make the decisions that are needed.
This isn't only a problem with web projects, of course, but in a web project the browser already provides a lot of the functionality; all you need is some static HTML to demonstrate what is coming. I have heard people make excuses about building frameworks, or firm foundations, or working out detailed designs, but the only way for someone other than the developer to judge progress is to see something run.
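A mock-up doesn't even need an application server. As a minimal sketch (the port, and the assumption that your static HTML sits in the current directory, are placeholders rather than recommendations), Python's built-in web server is enough to put something clickable behind a URL:

    # Minimal sketch: serve a directory of static mock-up pages over HTTP.
    # Assumes the HTML files live in the current directory and port 8000 is free.
    import http.server
    import socketserver

    PORT = 8000
    handler = http.server.SimpleHTTPRequestHandler

    # Anyone on the team can now browse to http://localhost:8000/ and click around.
    with socketserver.TCPServer(("", PORT), handler) as httpd:
        httpd.serve_forever()

The point isn't the tooling; it's that there is no technical excuse for having nothing to show.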
The side benefit of visible progress is that the customer can get an idea of what is being built and help shape the solution they are paying for. Even if you are not running an agile project, letting the customer comment on a live piece of software is a good thing (even if it torpedoes your schedule); after all, meeting the needs of the customer is the whole point of the project.
For developers I can only add that it is in your best interest as well to work on both the front and back ends of a project. Management and customers appreciate seeing something work far more than they appreciate dry project timelines and status reports.
2. You keep your source code where?
You would think that in this modern era everyone would use a source code management (SCM) system, yet I have seen fairly large companies where source is "managed" on shared directories, or even kept on developers' hard drives. Even when there is an SCM, there is often no attempt to organize the repository, which is treated as nothing but a file dump. One place I worked at briefly on a contract had a manager who withstood 3 months of arguments before allowing the developers to set up an SCM system, arguing that using one would add too much time to project schedules.
Your SCM system should hold all source code, scripts, schemas, and so on: basically everything necessary to build the entire system from scratch. It should also be backed up continuously and carefully (another thing I have seen people fail to do), and those backups should be tested regularly. The repository should be designed (and the software chosen carefully) to make it easy to manage multiple versions of all projects for all contributors.
Many times only certain people, such as the programmers, will use a repository, while other folks like DBAs, web designers, and QA use other systems for their resources. It may not be as convenient for them, but keeping all of the parts necessary to build and maintain a web project in one organized place is a huge benefit. Sometimes this is a big pain: when I used Documentum on my projects I couldn't easily save the state of the DocBase in any meaningful way (and my employer was too cheap to pay for any real solution). Even keeping snapshots of external resources at regular intervals is better than nothing.
The question to ask is, if our place of business burned to a crisp, could we rebuild and continue work on projects if all we had was a backup tape of the source repository?
My personal favorite is Subversion, but there are many others.
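One cheap way to keep yourself honest about that question is a scripted clean-room build: check the project out into an empty directory and build it, with nothing else to lean on. A rough sketch, assuming Subversion; the repository URL and build script below are hypothetical placeholders for your own:

    # Hypothetical smoke test: can the system be built from nothing but the repository?
    import subprocess
    import tempfile

    REPO_URL = "https://svn.example.com/repos/myapp/trunk"  # placeholder URL
    BUILD_CMD = ["./build.sh"]                              # placeholder build script

    workdir = tempfile.mkdtemp(prefix="clean-room-build-")

    # Fresh checkout into an empty directory...
    subprocess.check_call(["svn", "checkout", REPO_URL, workdir])

    # ...then build. If this fails, something needed to build the system
    # is living outside the repository (and outside its backups).
    subprocess.check_call(BUILD_CMD, cwd=workdir)
    print("Clean checkout builds from scratch.")

Run something like this on a schedule and it doubles as a test that the repository really does hold everything.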
3. Management says 'let's worry about systems architecture when the app is finished'
Another wonderfully bad idea. Let's design a new car and worry about the engine when we're done. Planning how your application will be served up should begin when the coders get started, or even before. What usually happens when you simply throw the application at your existing environment is lost connections, a bad customer experience, crashes, and hours of painful overtime while you try to scale or fix things live. This tells your customers your application is broken (even if the code is perfect), since most people don't know or don't care how the internet works.
Today I tried to log into my Linkshare account and the login page didn't come up (it is on a different subdomain than the rest of the site); instead I got a warning from the browser that the server didn't exist. They fixed it after a while (I'm not sure how long it was out), but still, an ordinary person might think they had done something wrong or that the site was gone. Imagine you are a financial services company holding my retirement money and your website goes up and down all day: how would I feel about my money, and about you?
I actually worked at one of these companies once and would not put my money there for anything. One of their major web applications went down at random points every night (a manual backup killed database connections without warning), with no meaningful message to the users.
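Failures like the Linkshare one above (a login host that simply stopped resolving) are exactly what a dumb external availability check catches before your customers do. A minimal sketch with hypothetical hostnames, checking only that each public host resolves and answers:

    # Minimal availability check; the hostnames are hypothetical examples.
    import socket
    import urllib.request

    HOSTS = ["www.example.com", "login.example.com"]  # every public-facing hostname

    for host in HOSTS:
        try:
            socket.gethostbyname(host)                              # does DNS resolve?
            urllib.request.urlopen(f"https://{host}/", timeout=10)  # does it answer?
            print(f"{host}: ok")
        except Exception as exc:
            print(f"{host}: FAILED ({exc})")

It won't tell you why something is down, but it will tell you before an angry customer does.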
4. Management says 'let's worry about testing when the app is finished'
Ditto to the last section. QA doesn't start after the developers go on vacation; it starts the same day development does (or even earlier). Not only should testing be planned early, the testing environment (see the last section) should be built early as well. The aforementioned financial services company wouldn't spend enough money for the QA environment to mirror the architecture of the production system, so when they rolled out a new customer portal, people started seeing other folks' financial data on their pages. Oops. The app worked fine in the limited QA environment, which had a single server. The production environment had two, and the problem came down to IDs that were only unique per server, not across the cluster.
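The bug is easy to reproduce in miniature: a per-process counter looks perfectly unique when QA has one server, and collides the moment production has two. A sketch of the difference (illustrative only, not the company's actual code):

    # Illustrative only: why per-server id generation breaks in a cluster.
    import itertools
    import uuid

    _counter = itertools.count(1)

    def next_id_naive():
        # Fine with one server; a second server starts its own counter at 1
        # and happily hands out the very same "unique" ids.
        return next(_counter)

    def next_id_cluster_safe():
        # Unique no matter which server in the cluster generates it.
        return str(uuid.uuid4())

A QA environment shaped like production would have surfaced the collision long before any customer saw someone else's account.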
Oops! is not a development methodology.