Too often we focus on the latest infosec darling. But many times, the tried and true is still relevant.
I was thinking about this when a friend recently sent me a copy of Bruce Schneier's book Beyond Fear, which was published in 2003. Schneier has been around the infosec community for decades: he has written more than a dozen books and has his own blog that publishes interesting links to security-related events, strategies and failures.
His 2003 book contains a surprisingly cogent and relevant series of suggestions that still resonate today. I spent some time re-reading it and want to share what we can learn from the past, and how many infosec tropes remain valid after more than 15 years.
At the core of Schneier's book is a five-point assessment tool used to analyze and evaluate any security initiative – from bank robbers to international terrorism to protecting digital data. You need to answer these five questions:
- What assets are you trying to protect?
- What are the risks to those assets?
- How well will the proposed security solution mitigate these risks?
- What other problems will this solution create?
- What are the costs and trade-offs imposed?
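The five questions above work as a checklist: an initiative isn't really evaluated until every one has an answer. As a purely illustrative sketch (the class, field names, and the full-disk-encryption example are my own, not Schneier's), the assessment could be modeled like this:

```python
from dataclasses import dataclass

# Hypothetical sketch of Schneier's five-step assessment as a simple record.
# The field names are my own labels, not Schneier's terminology.
@dataclass
class SecurityAssessment:
    assets: list[str]            # What assets are you trying to protect?
    risks: list[str]             # What are the risks to those assets?
    mitigation: str              # How well does the solution mitigate the risks?
    new_problems: list[str]      # What other problems does the solution create?
    costs_tradeoffs: list[str]   # What costs and trade-offs does it impose?

    def is_complete(self) -> bool:
        """The initiative isn't evaluated until all five questions are answered."""
        return all([self.assets, self.risks, self.mitigation,
                    self.new_problems, self.costs_tradeoffs])

# Invented example: evaluating full-disk encryption for company laptops.
fde = SecurityAssessment(
    assets=["customer data stored on laptops"],
    risks=["data exposure from lost or stolen devices"],
    mitigation="strong against offline theft; no help against live malware",
    new_problems=["lost passphrases can make data unrecoverable"],
    costs_tradeoffs=["licensing cost", "slower boot and login", "key-escrow overhead"],
)
print(fde.is_complete())  # True
```

The point of the structure isn't automation; it's that the last two fields (new problems, costs and trade-offs) are required, which is exactly what ad hoc security proposals tend to omit.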
You'll notice that this set of questions bears a remarkable resemblance to the IDEA framework that RSA CTO Dr. Zulfikar Ramzan presented during a keynote he gave several years ago. IDEA stands for creating innovative, distinctive end-to-end systems with successful assumptions. Well, actually Ramzan had a lot more to say about his IDEA, but you get the point: you have to zoom back a bit, get some perspective, and see how your security initiative fits into your existing infrastructure and whether it will help or hurt the overall integrity and security.
Part of the problem, as Schneier says, is that "security is a binary system, either it works or it doesn't. But it doesn't necessarily fail in its entirety or all at once." Solving these hard failures is at the core of designing a better security solution.
We often hear that the biggest weakness of any security system is the users themselves. But Schneier makes a related point: "More important than any security claims are the credentials of the people making those claims. No single person can comprehensively evaluate the effectiveness of a security countermeasure." We tend to forget about this when proposing some new security tech, and it is worth the reminder because often these new measures are too complex. Schneier tells us: "No security countermeasure is perfect, unlimited in its capabilities and completely impervious to attack. Security has to be an ongoing process." That means you need to periodically audit and re-evaluate your solutions to ensure that they are as effective as you originally proposed.
This brings up another human-related issue. "Knowledge, experience and familiarity all matter. When a security event occurs, it is important that those who have to respond to the attack know what they have to do because they've done it again and again, not because they read it in a manual five years ago." This highlights the importance of training, disaster-recovery drills and penetration-testing exercises. Today we call this resiliency and apply these strategies broadly across the enterprise, as well as specifically to cybersecurity practices. Managing these trusted relationships, as I wrote about in an earlier RSA blog, can be difficult.
Often, we tend to forget what happens when security systems fail. As Schneier says early on: "Good security systems are designed in anticipation of possible failure." He uses the examples of road signs mounted on break-away poles in case someone hits them, and modern cars with crumple zones that absorb impacts upon collision and protect passengers. He also presents the counterexample of the German Enigma coding machine: it was thought to be unbreakable, "so the Germans never believed the British were reading their encrypted messages." We all know how that worked out.
The ideal security solution needs to have elements of prevention, detection and response. These three systems need to work together because they complement each other. "An ounce of prevention may be worth a pound of cure, but only if you are absolutely sure beforehand where that ounce of prevention should be applied."
One of the things he points out is that "forensics and recovery are almost always in opposition. After a crime, you can either clean up the mess and get back to normal, or you can preserve the crime scene for collecting the evidence. You can't do both." This is a problem for computer attacks because system admins can destroy the evidence of the attack in their rush to bring everything back online. It is even more of a problem today, as more of our systems are online and Internet-accessible.
Finally, he mentions that "secrets are hard to keep and hard to generate, transfer and destroy safely." He gives the example of a king who builds a secret escape tunnel from his castle: there will always be someone who knows about the tunnel's existence. If you are a CEO and not a king, you can't rely on killing everyone who knows the secret to solve your security problems. RSA often talks about ways to manage digital risk, such as this report that came out last September. One thing is clear: there is no better time than the present to think about how you protect your corporate secrets and what happens when the personnel involved in that protection leave your company.
This post was sponsored by RSA, but the opinions are my own and do not necessarily represent RSA's positions or strategies.
# # #
Join the #TalkingDigitalRisk conversation on Twitter and social media by following @RSAsecurity
David Strom is an independent writer and expert with decades of knowledge of the B2B technology market, including network computing, computer hardware and security markets. Follow him @dstrom.