How Does Your Organization Handle Production Access?
In the past 10 years or so I have worked at, or on projects for, many different types of organizations, and there are as many ways to handle production security as you can imagine: every kind of process, from "we don't need no stinking production security" to "we won't even acknowledge we have production systems".
In most companies, hardware and software systems are broken up into several categories: generally some kind of local development environment, perhaps a global development environment for initial testing, a higher-level functional test area, a QA area, and finally the operational production systems. The latter is where the company stores and handles whatever data supports, or even constitutes, its business. For some entities this data is not central to the business, and its loss or theft (if there is even anything to steal) has no real impact on the bottom line. In most cases, though, this data is vital to the continuing health and welfare of the business, and care must be taken to avoid loss or alteration. In other companies this data contains private information about others, such as personal, financial, or medical records, whose loss or alteration could result in bad press, civil penalties, or even criminal ones.
Yet I haven't found any consistent view on security. Some places had tight security even though the data and systems were totally innocuous; other places adopted a laissez-faire view that allowed full access to anything anyone wanted, despite holding a vast array of public-trust information. Sometimes the level of security was based on the anal-retentiveness of the operational leadership rather than on the nature of the systems and data.
It never fails to amuse me when I read about a big data loss, be it unencrypted backup tapes or laptops that go missing, or hacked systems that yield a wealth of credit card numbers (see the TJX post). I always wonder how people got into positions of security leadership without being at least prudent, much less paranoid, about how their organization's information and systems are protected.
The security postures I have witnessed include:
- Total Lockdown - only a few operations people have access to production systems, period. Even production logs must be requested, and are scrubbed before viewing.
- Mostly Lockdown - only a few operations people have access to all production systems, but specific access is granted to other individuals who need it, based on limited usernames/passwords with full audit trails.
- Mostly Lockdown 2 - like the previous, but allowing fuller access, restricted to known IPs and using electronic keys.
- Careful Lockdown - only a few operations people have access to all production systems, but other users have specific access rights to areas set up for them (such as log directories).
- Come On In, The Water's Fine - production is protected by passwords known to everyone. No audit trail is possible with a shared "secret", other than perhaps recording an IP address.
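As a concrete illustration, the "Mostly Lockdown 2" tier above (keyed access from limited IPs) can be approximated with a stock OpenSSH server configuration; the usernames and subnet below are hypothetical:

```
# /etc/ssh/sshd_config (fragment)
PasswordAuthentication no     # electronic keys only, no shared secrets
PermitRootLogin no
LogLevel VERBOSE              # records the key fingerprint used on each login

# Ops staff may connect from anywhere; a contractor only from one office subnet
AllowUsers ops1 ops2 contractor1@203.0.113.*
```

Combined with per-user keys, the verbose log supplies the audit trail this tier calls for: every login is tied to a specific key and source address.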
You would think that financial entities such as banks and investment companies, healthcare companies, and the like always operate with a lockdown mentality, but in my experience it hasn't been universal. In one place I saw total lockdown, but the reason had more to do with hiding the failures of the operations team than with any real desire for tight security. For "Water's Fine" systems I have often heard the excuse that people need quick access to solve production problems, or that the company's outer security walls are so solid that there is no possible issue with external attackers reaching internal systems. [I'll pause while you either laugh or gag.]
When I was working on a big project at a large company (a customer) around 1999, my project partner and I initially had access to both our test systems and the production systems, which ran in a small data center on the same network, for most of the 12-month development and beta rollout. The system had both an external web application for customers and an internal one for the team that processed the data. The database was highly complex, with full audit trails and a workflow system. We were given access to the network via usernames/passwords, IP filtering, and an electronic key generator, but were otherwise left alone to do the work.

Near the end of the project the company hired EDS to take over operations, and all of our access was completely revoked, making it impossible to actually continue the project in any reasonable way. We were limited to calling an operations person, who would type whatever we needed into a command line and read the result back to us over the phone. Yet the two of us were the only people who were able to do anything with the code or database, so progress came to a complete halt. So we did what enterprisey programmers always do: we put a backdoor into our application (protected by two levels of passwords and IP filtering) with full access to both databases. Needless to say it worked, because EDS didn't care about the application or the project at all, and the folks we were building it for only cared about getting the system finished and working. Eventually the entire system was automated to the point that the backdoor was no longer necessary (and saner access was put in place anyway).
The point is that there is always a balance between the need for some access to production systems and the need to protect them.
So what can go wrong? Unfettered access to systems that should be secure can lead to theft of data, accidental or malicious alteration, untraceable changes, and the bad press and legal penalties that follow.
I'm sure you can think of more. Yet every day you read of people who failed at the basics of properly protecting their production systems or data.
Statistics show that most operational security problems originate with your own staff, yet defending against the proverbial outside "hacker" seems to dominate most corporate security discussions.
The most reasonable path, as I see it, is tough perimeter protection combined with limited, fully auditable access. Sometimes it's hard to see how to achieve this, since systems, software, databases, and platforms are not consistent in how they provide access, authorization, and authentication, much less easy single sign-on interoperability. These challenges often lead to diminished security: the problem is considered "too hard" to overcome, so nothing is ever done. Sometimes the cost of doing it right seems too high compared to doing it the easy way (or not doing anything at all) and hoping for the best. Hoping is not a great security system. It's like leaving your house unlocked and hoping no one will open the door and take your stuff.
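To make "limited, fully auditable access" concrete, here is a minimal sketch in Python of the idea: every request is checked against per-user grants, and every request, granted or denied, lands in an audit trail. The class and grant names are hypothetical, not any real product's API:

```python
from datetime import datetime, timezone

class AccessGateway:
    """Hypothetical sketch: per-user, per-path grants with an audit trail."""

    def __init__(self, grants):
        # grants: {username: set of path prefixes the user may read}
        self.grants = grants
        self.audit_log = []  # in production this would be append-only storage

    def request(self, user, path):
        # Check the grant, then record the attempt either way --
        # denials are just as interesting to an auditor as successes.
        allowed = any(path.startswith(p) for p in self.grants.get(user, ()))
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": user,
            "what": path,
            "granted": allowed,
        })
        return allowed

gw = AccessGateway({"alice": {"/var/log/app/"}})
gw.request("alice", "/var/log/app/web.log")  # allowed: within her grant
gw.request("alice", "/etc/passwd")           # denied, but still logged
```

The design point is the one the taxonomy above implies: the grant check and the audit write live in the same choke point, so there is no way to touch production data without leaving a record.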
I'd love to hear stories from other folks on what people are doing in their companies.