Oh no, another zero day is out! No one goes home until it's fixed!
Sadly, we've heard this cry in the information security world far too often. Most recently, the Shellshock and Poodle vulnerabilities sent ripples of fear, uncertainty and doubt throughout the security community. Sure, there were many folks who reacted calmly and fixed the problems in an efficient and businesslike way. But far too many others panicked.
This has to end. Panicky reactions sow doubt and can cause more damage than they are meant to prevent. Let's all take a deep breath and calmly consider a better way.
First off, a reality check. Publishing a vulnerability changes nothing except what you know. The vulnerability existed prior to publication, and it will continue to exist after publication. The only thing that's different is that more people know about it now -- including you, and that's a good thing.
Next, take the time to make sure that the fix you apply is really a fix. Developing a software fix to a security defect shouldn't be rushed. Yes, we all want our software vendors to be responsive to problems and deliver high-quality patches in a short time. But remember the old adage: fast, good or cheap -- pick any two. Developing a fix first requires the software engineers to deeply understand the problem. Then they need to develop a proposed fix, which in turn needs to be tested. We've all seen software fixes that don't work right or that cause other problems. And if the root cause is an underlying design flaw, the fix may well require significant redesign. Hounding your vendors for a fix now may hurt you more than you know.
Here, then, are the underlying assumptions you should make when planning for software security:
" Many zero-day vulnerabilities already exist. We simply aren't aware of them yet, but every piece of software we use contains them.
" Every zero-day vulnerability is already known to our adversaries, before it is published. And they probably already have exploit tools for those unpublished zero days.
" Your antivirus, anti-malware and intrusion-detection systems have absolutely no knowledge of these zero days.
" How many of them are there? I have no idea, but assume them to be significantly non-zero in number.
With those underlying assumptions, you can take steps to improve your security operations.
Start by relying on tools that will give you disinterested truth. I'm thinking of things like NetFlow, which casts a spotlight on who is talking to whom on your network. And do not make the mistake of thinking that network monitoring is passé.
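As a rough sketch of the "who is talking to whom" idea: suppose your NetFlow collector can export flow records to a simple CSV. The column names (`src_ip`, `dst_ip`) and the CSV format here are illustrative assumptions -- real exports vary by collector -- but the shape of the analysis is the same:

```python
import csv
from collections import Counter

def top_talker_pairs(flow_csv_path, n=10):
    """Count (source, destination) pairs in exported flow records.

    Assumes a CSV with 'src_ip' and 'dst_ip' columns -- field names
    are illustrative; adjust them to match your collector's export.
    """
    pairs = Counter()
    with open(flow_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            pairs[(row["src_ip"], row["dst_ip"])] += 1
    # The most frequent pairs show you who your top talkers are.
    return pairs.most_common(n)
```

A host that suddenly appears near the top of this list, talking to an address it has no business talking to, is exactly the kind of disinterested truth no signature database will hand you.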
Next, consider using network monitoring sensors that are invisible on your networks. Network taps were big in the late '90s and early 2000s. They still work. They help isolate your network monitoring from the production network traffic. They help you see a clearer, truer picture of what's happening on your network.
With network monitoring and analysis in place, you need to think about how best to use the data. Don't be too quick to throw it out. Network analysis tools have gotten a lot better than what was available in the '90s and early 2000s, and it's now much easier to sift through huge amounts of data in a relatively short time. When a zero day is published, retained data lets you look back at what was happening on your network before the vulnerability became public.
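That look-back can be sketched in a few lines. Assuming the same illustrative CSV export of flow records (with hypothetical `timestamp`, `src_ip` and `dst_ip` columns), you might search your retained data for any traffic involving an indicator that was published alongside the zero day:

```python
import csv
from datetime import datetime

def flows_involving(flow_csv_path, suspect_ip, since=None):
    """Return retained flow records that involve a suspect address.

    Assumes columns 'timestamp' (ISO 8601), 'src_ip' and 'dst_ip';
    these field names are illustrative, not a real collector's schema.
    """
    hits = []
    with open(flow_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if suspect_ip not in (row["src_ip"], row["dst_ip"]):
                continue
            # Optionally restrict the search to a window of interest.
            if since and datetime.fromisoformat(row["timestamp"]) < since:
                continue
            hits.append(row)
    return hits
```

If the suspect address shows up in flows from weeks before the vulnerability was published, you know you're dealing with a possible prior compromise, not just a patching exercise.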
You also need to have workarounds in place so that you're not entirely dependent on outright fixes when zero days pop up. Vendors can't really be expected to serve up the perfect patch as quickly as we would like, so how do we protect our networks while we wait? You need stopgap measures that prevent the zero day from getting in, or from affecting large numbers of systems if it does get in. That might mean turning a service off for a few hours until you can get and test a patch, or employing network-layer isolation of critical services if you must leave some things turned on. The essential thing is knowing ahead of time what can and cannot be shut down for a limited time. And that means communicating with the business side of the house so you'll know what will be acceptable when you're trying to avoid unnecessary exposure.
I guarantee that another vulnerability is on the way, and another one after that, and another after that, ad infinitum. And some of those vulnerabilities will be published before our vendors can develop a fix. None of us should be surprised when it happens. But if we're prepared, we can get through it and live to tell the story.
With more than 20 years in the information security field, Kenneth van Wyk has worked at Carnegie Mellon University's CERT/CC, the U.S. Department of Defense, Para-Protect and others. He has published two books on information security and is working on a third. He is the president and principal consultant at KRvW Associates LLC in Alexandria, Va.