Security Is Job 12

Today we read that Target had actually installed some sophisticated security software, and that both it and their desktop antivirus flagged something wrong while the famous hack was in progress.

Apparently they ignored the warnings and brushed off any idea of investigating. It doesn't surprise me much; almost everywhere I've worked it's been the same story. Doing security right is hard, but people find reasons not to do it.

One time I was in the Newark airport eating lunch when the fire alarm went off. Yet no one looked up. The server said it happened a lot but didn't know why. For 20 minutes the damn alarm blared before it finally stopped, to a mock cheer. People had learned to ignore the malfunction. If there were ever a real fire, lots of people would surely die. I couldn't help but wonder what the point of such a system was. It wasn't protecting anyone.

Doing security correctly is hard because the defender has to be right every time, while an attacker only has to succeed once. They can probe 50 companies until they find one with an exploitable weakness. Failure is cheap for them; for the victim it is expensive. From what I can tell, people simply aren't willing to pay what it takes, whether in money, organizational change, or added complexity.

Being an attacker is pretty easy, since most companies that are hacked refuse to provide any details, I assume in the hope that the story will simply fade away and they can get back to business as usual. Being hacked seems to be a cost of doing business, a price the customer will bear. Offer a little credit monitoring, assure the press everyone will do better in the future, and move on. I bet most hacks go unreported, or even unnoticed.

Many of the stories where details did leak out amaze me. Adobe losing its entire user database along with source code from various apps was a real howler. I don't know if any more details were ever disclosed, but my guess is they had a temporary directory somewhere that people used as a scratch location. Someone backed up the user database there as a precaution, other people did the same with a code repository or their development machines, and no one knew it was wide open to whoever cared to take advantage. I'm sure their security people forced everyone to take the same stupid training classes we have at work and spoke in hushed tones to the CEO about how secure everything was. To quote Captain Kirk in one of the Star Trek movies: "you make one little mistake."

But what was the real mistake? Failure to understand that your security is only as strong as its weakest link, or more likely that it's only a matter of time before you make a mistake, and then what's the worst that can happen? If I were ever a Chief Security Officer, I'd back up the company databases onto a thumb drive, then in a big meeting toss it out the window into the street and see what happens. If everyone runs for the stairs, I know we're screwed. If everyone sits calmly, then someone did the right thing. It's not about being perfect; it's about layering your security so that even when the worst case happens, your butt is still intact.

The people at Adobe did not understand this at all. By failing to correctly salt and hash the passwords, or to encrypt the other information properly, they made it easy for the thief to steal not only the database but likely from Adobe's customers as well. This isn't rocket science fiction; securing user information like credit cards and passwords the right way is usually the first thing mentioned in the security training we endure every year.
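
For the record, the fix Adobe skipped is decades old. Here's a minimal sketch of salted, deliberately slow password hashing using only Python's standard library; the iteration count and function names are illustrative, and a real system would reach for a vetted library like bcrypt or Argon2:

```python
# A sketch of salted password hashing with PBKDF2 (Python stdlib).
# Parameters are illustrative, not a production recommendation.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # a high count makes brute-forcing stolen hashes slow

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest    # store both; the salt is not a secret

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

With this in place, a stolen database yields only salts and slow one-way hashes, not anything a thief can replay against your customers.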

Apparently both Target and Neiman Marcus did have systems that should have warned them an attack was in progress, but the alerts were ignored, brushed off, or otherwise had zero effect. A classic version of this story comes from the attack on Pearl Harbor, where primitive radar detected the incoming attackers but the contact was dismissed as an expected flight of B-17s, even though the bearing didn't fit. People are really good at rationalizing that something bad isn't really happening. Like the townsfolk in the boy-who-cried-wolf story, they learn to ignore the warning until it's too late.

That's why you now have to lock and armor the cockpit door, in what had to be the most obvious duh moment ever in air travel. The TSA's security theatre won't stop the next attack, if indeed anyone ever tries the same thing again. I remember standing in an interminable TSA line in some California airport, amazed to see a big cart full of boxes of baked goods being wheeled through a special bypass; I imagined at the time they might be full of hand grenades. That's why security has to be about layers and not about locks. No matter how many doors you lock down, there is always another way in. An attacker only has to probe until they find it.

At a healthcare place I worked, I discovered that the production database and server usernames/passwords were kept in a text file in the code repository (there was also no auditing of logins) and reported it to the CTO. He hadn't known, but he didn't seem concerned: "we trust our employees" and "we passed a HIPAA audit." I can just imagine the auditor assuming no one would be stupid enough to do this and never even asking.
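
The fix is not exotic: keep secrets out of source control entirely and inject them at runtime. A minimal sketch, assuming credentials delivered through environment variables set by deployment tooling (the variable names are hypothetical; a proper secrets manager is better still):

```python
# A sketch of loading production credentials at runtime instead of
# committing them to the repository. Variable names are illustrative.
import os

def get_db_credentials() -> tuple[str, str]:
    # Set by the deployment environment, never checked into the repo.
    user = os.environ["PROD_DB_USER"]
    password = os.environ["PROD_DB_PASSWORD"]
    return user, password
```

Anyone cloning the repository then gets code, not the keys to the production database, and rotating a leaked password doesn't require rewriting history.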

That's why security audits are often a joke. Someone sits in a conference room collecting a lot of documents, and months later produces a report saying everything is fine. I've never been anywhere that failed such an audit. I would think that if nobody ever fails, the auditor isn't doing a very good job. Of course people hate auditors and likely only want to hire the easy ones.

Security is not only hard, it requires honesty and a willingness to be critical, even a pain in the ass if need be. What you usually get instead is security theatre. One place I worked turned on zip-file scanning in the company's virus checker. We were a mostly Java shop, and suddenly every compile or run locked up your computer for 20 minutes. For three months little work got done. The head of operations was determined to stop native-code viruses running in jar files, no matter how often we told him that couldn't happen. Eventually the lack of progress pushed upper management to turn it off. This was security by stupidity.

The same company didn't encrypt customer credit card numbers because the 4GL team didn't want to update their apps, so instead a committee was assembled to study whole-disk encryption. I don't know if they ever came to any conclusion. The company also had no internal firewalls, so the production servers ran in the same network zone as the employees; they even ran a virus checker on the Oracle database servers.
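
Whole-disk encryption, of course, protects against a stolen hard drive, not against anyone querying a running database. What the cards needed was field-level encryption. A minimal sketch using the third-party Python cryptography package (the key handling here is illustrative; real systems fetch keys from an HSM or key-management service, and PCI DSS constrains storing card numbers at all):

```python
# A sketch of field-level encryption for card numbers using Fernet
# (AES + HMAC) from the "cryptography" package. Key handling is
# illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in reality, fetched from a key service
cipher = Fernet(key)

token = cipher.encrypt(b"4111111111111111")  # ciphertext goes in the database
card = cipher.decrypt(token)                 # decrypted only where truly needed
```

With that, a dumped table or a stolen disk yields ciphertext; the attacker still needs the key, which lives somewhere else.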

Another place had us take the usual security classes, and the first item was password security. Shortly afterward we all realized that the company actually emailed customers their passwords when they forgot them, so clearly the passwords were stored reversibly. Customer convenience was more important than security.
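
You never need a recoverable password to handle "I forgot": you email a single-use reset token instead. A minimal sketch, with hypothetical names and an in-memory dict standing in for a real store:

```python
# A sketch of a password-reset flow that never touches a stored
# password: email a short-lived, single-use token instead.
import hashlib
import hmac
import os
import time

RESET_TTL = 3600  # token validity in seconds (illustrative)

def issue_reset_token(user_id: str, store: dict) -> str:
    token = os.urandom(32).hex()  # unguessable single-use secret
    store[user_id] = (
        hashlib.sha256(token.encode()).hexdigest(),  # keep only a hash
        time.time() + RESET_TTL,                     # and an expiry
    )
    return token  # goes into the emailed reset link, then is forgotten

def redeem_reset_token(user_id: str, token: str, store: dict) -> bool:
    record = store.pop(user_id, None)  # single use: gone after one attempt
    if record is None:
        return False
    token_hash, expires = record
    fresh = hashlib.sha256(token.encode()).hexdigest()
    return time.time() < expires and hmac.compare_digest(fresh, token_hash)
```

Stored passwords stay one-way hashed, and a leaked reset table is worthless within the hour.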

What is the point of training people in security and then doing the opposite?

Companies have to start opening up about failures when they do get hacked, so that at least someone can learn what not to do, and hopefully make the attacker's next job that much harder. Leadership has to be made to understand that security is not job #12, or something to be buried in the next five-year plan. Decision makers on security need to be people who understand how to do it right, not people who just do irritating things that make it look like something is happening. Security is hard, but a properly layered plan makes it likely that when you do get hacked (and you will) it isn't a catastrophe. It's not just your company that might get bad press for a little while; it's your customers who may suffer for the rest of their lives. It's not what you say about your security that matters, it's what you do and how you plan.

Now where's that thumb drive?