10 security mistakes that will get you fired

From killing critical business systems to ignoring a critical security event, these colossal slip-ups will land your career in deep water quickly

Getting fired from an IT security job is a rare event, but there are certainly ways to ensure or accelerate your own unemployment. I'm not talking about garden-variety mistakes here. After all, most IT workers create or live with lots of little mistakes every day. That's the nature of complex, rewarding work.

But that everyday tolerance for small mistakes is also a large reason why IT doesn't do a better job at computer security. As systems become more complicated and companies more responsible for increasingly sensitive private information, the stakes for IT security keep escalating. With increased stakes comes increased pressure on those charged with fortifying corporate defenses.

Trust your skills, follow corporate directives, and concentrate on the basics, and you'll have a long career in IT security. Help your employer right-size its defenses in the right places, and you'll excel. But fall prey to one of the following mistakes, and you'll be looking for new work -- maybe a new career -- more often than not.

Colossal security mistake No. 1: Killing critical business functionality

Every security pro knows intuitively that derailing critical business functionality is a job-killer. You'd be far better off letting hackers stay inside the company than interrupting core business systems. This may seem antithetical to our mission as security pros, but after trying to help hundreds of companies become more "secure," you begin to realize that, from the company's perspective, security is not priority No. 1.

Even after a particularly nasty hacker attack, when the attacker has scooped up all the passwords, compromised the entire network, and downloaded confidential data, senior management will more often than not be more concerned about interrupting critical business systems than about ensuring that the bad guys are gone. Many seasoned security pros have experienced this, I can assure you.

In fact, there's a name for this strategy: Assume Breach, wherein the company accepts that malicious activity will forever be present in its environment and everyone should conduct business as usual anyway. It's a risky gambit, with senior management betting that whatever happens because of the hackers, the damage will be less than the cost of what would need to be done to ensure the hacker is gone forever (if that is even possible). The gamble works most of the time -- until the attacker causes hundreds of millions of dollars in damages, the public finds out, and the attack is directly tied to a detail that should've been investigated but wasn't.

But if you unexpectedly bring down critical business functionality for longer than a day or so due to a new security process or device you put in place, you'll be shopping your résumé around faster than it takes to bring the network back up. Business rules.

Lesson learned: Learn what is critical to the business and don't interrupt it unless leaving it running would result in even more damage.

Colossal security mistake No. 2: Killing the CEO's access to anything

CEOs are the kings of their kingdom. Regardless of whether they truly need access to a resource on or over the network, if you somehow remove that access, it's likely to threaten your job. I once got in hot water with a CEO because I blocked his access to pornography by enabling content filtering on a new firewall the company had purchased. I wasn't supposed to be "the Internet police," as he so eloquently put it.

I've seen CEOs yell at security pros simply because IT required the CEOs to set a new password on their computers or enter a new password to access a high-risk application. CEOs for the most part want to open their laptops, click an icon, and have everything readily accessible, security be damned. Every IT security worker who has worked directly with a CEO has stories.

Lesson learned: Make access as easy as possible for the CEO while maintaining the required amount of security.

Colossal security mistake No. 3: Ignoring a critical security event

If the recent Target breach has taught us anything, it's that ignoring a critical security event can be hazardous to your job. As it turns out, Target's security software had detected the Trojan software installation used to commit the hack, but the security team incorrectly deemed the event log message a false positive. Instead of alerting management that the company was under attack, everyone remained silent as the logs filled up with evidence of the infiltration. This single bonehead move cost Target hundreds of millions of dollars, forced the resignation of the CEO and CIO, and eroded customer trust in the brand.

But can any of us throw stones? Who among us hasn't opened up event logs, seen a bazillion events, and had their eyes glaze over? Event-monitoring storage systems are measured in terabytes and petabytes precisely because event logging is an inexact science. Event logs are built to accumulate false-positive events to the tune of a million non-events for every real attack that gets logged.

Target's event-log mistake is a very public reminder that some "false positives" are more important than others. In Target's case, the ignored event had recorded that a new executable was being uploaded and installed. Someone analyzing the logs explained it away as an expected point-of-sale system upgrade. The easy, but mistaken, explanation led the company to ignore tens of thousands of similar detection events.

If the CEO and CIO are gone, you can bet that the employee who told everyone to ignore the event is gone, if not the entire team. Management is all about choosing critical business functionality over security -- until the security event impacts critical business functionality. Then the pendulum swings swiftly, and heads roll for doing business as usual as the company coffers are plundered.

Lesson learned: Define the critical security events that are most likely to indicate malicious activity, and always research them to their ultimate conclusion when they occur. You can't chase down every potential false positive; know which ones are the deadliest and do your due diligence.
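As a rough illustration of that triage idea, here is a minimal Python sketch: a short, agreed-upon list of event signatures that always get escalated, while everything else is archived for later correlation. The signature names, the JSON-lines file, and the notify_oncall/archive helpers are hypothetical placeholders, not any particular SIEM's API.

```python
# Minimal sketch: escalate only the event types your team has defined as
# critical, instead of drowning in every alert. Names and file format here
# are illustrative placeholders.
import json

# Event types that must always be run to ground, never auto-dismissed.
CRITICAL_SIGNATURES = {
    "new_executable_installed",        # e.g., unexpected binary on a POS terminal
    "admin_group_membership_change",
    "outbound_transfer_to_unknown_host",
}

def notify_oncall(event: dict) -> None:
    # Placeholder for paging or ticketing; a human investigates and records
    # the conclusion.
    print(f"INVESTIGATE: {event['signature']} on {event.get('host', 'unknown')}")

def archive(event: dict) -> None:
    pass  # write to long-term log storage for later correlation

def triage(event_line: str) -> None:
    event = json.loads(event_line)
    if event.get("signature") in CRITICAL_SIGNATURES:
        notify_oncall(event)
    else:
        archive(event)

# Assumes one JSON event per line in events.jsonl.
with open("events.jsonl") as fh:
    for line in fh:
        triage(line)
```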

Colossal security mistake No. 4: Reading confidential data

If the CEO is the king of the company, then the network administrator is the king of the network. I know many network admins who've allowed their godlike access control to tempt them into viewing data they didn't have permission to see. In military parlance, you need proper clearance and the need-to-know.

Over the past three decades, I've known several network pros who've not only looked at data they weren't authorized to view but bragged about it. That's stupid, and they shouldn't have been surprised when they were called in to turn in their keys and hand over company equipment.

There's one big caveat to all of this: acceptable use policies. I once consulted for a company that found out an IT security employee had read all of senior management's email. In this case, the "company" was a city, and senior management was the city council.

The employee had bragged several times to other employees about having the ability to read anyone else's email and was later caught reading the city council's messages. The employee was fired and filed a lawsuit for wrongful termination. The judge concluded that the acceptable use policies the employee had signed did not specifically forbid this kind of snooping; the employee prevailed and went back to work. Imagine having to work with that guy again.

Lessons learned: Don't access data you don't have valid permission to see, and consider helping data owners/custodians to encrypt their confidential data with keys that you don't have access to.
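To make the second half of that lesson concrete, here is a minimal sketch using the third-party cryptography package's Fernet recipe, assuming the data owner generates and holds the key themselves. An admin who can read the stored file then sees only ciphertext.

```python
# Minimal sketch: the data owner encrypts a file with a key only they hold,
# so an admin with raw storage access sees ciphertext, not confidential data.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# The data owner generates and keeps this key; it never goes to IT staff.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"Q3 acquisition target list - board eyes only"
ciphertext = fernet.encrypt(plaintext)

# What an administrator browsing storage would see:
print(ciphertext)

# Only the key holder can recover the plaintext.
print(fernet.decrypt(ciphertext).decode())
```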

Colossal security mistake No. 5: Invading privacy

Invading a person's privacy is another surefire way to put your job on the line, no matter how small or innocent the incident may seem.

A friend worked at a hospital and once heard that a famous celebrity had checked in. The friend performed a quick SQL query and learned that the celebrity was in-house. They didn't tell anyone or do anything.

A few days later someone on the primary care staff leaked to a popular media site that the celebrity was being treated in the hospital. Management asked for an audit of who had accessed the celebrity's records. The request came to my friend, who reported the results of the audit and self-reported their SQL query, though it had not been tracked by the information system. Management fired everyone who accessed the medical record without a legitimate reason. My friend, who would never have been caught if not for their aboveboard honesty, was fired all the same.

Another friend who worked for a police department performed a records check on a babysitter he and his wife were considering hiring to watch their first baby. His access was later caught by a routine year-end random audit check. The auditor had selected a very small percentage of events to audit, and his illegitimate access was noticed. A 15-year employee who had once won Employee of the Year and was loved by everyone he worked with, my friend was fired. His pension was gone as well. If you met this guy, you would think he was one of the most honest, most ethical people you'd ever known. He made a mistake. He was human and he had the power.

Lesson learned: Privacy has become one of the leading computer security issues today. A few short years ago nearly everyone accepted that admins with access to a particular system might take the occasional look at records they didn't have a legitimate need to access. Those days are over. Today's systems track every access, and every employee should know that accessing a single record they don't have a legitimate need to view is likely to be noticed and acted on.
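To make the "every access is tracked" point concrete, here is a minimal sketch of an access path that writes an audit entry before returning a sensitive record. The file-based audit log and the lookup_record helper are illustrative assumptions, not any specific records system.

```python
# Minimal sketch: record every read of a sensitive record to an append-only
# audit trail, so "just taking a look" always leaves a trace.
import datetime
import json

AUDIT_LOG = "record_access_audit.jsonl"

def lookup_record(record_id: str) -> dict:
    return {"id": record_id, "note": "placeholder record"}  # stand-in data store

def fetch_patient_record(record_id: str, accessed_by: str, reason: str) -> dict:
    # Log who accessed what, when, and their stated reason, before returning data.
    entry = {
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "record_id": record_id,
        "accessed_by": accessed_by,
        "stated_reason": reason,
    }
    with open(AUDIT_LOG, "a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return lookup_record(record_id)

# Every call is auditable, legitimate or not.
fetch_patient_record("12345", accessed_by="dbadmin", reason="billing inquiry")
```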

Colossal security mistake No. 6: Using real data in test systems

When testing or implementing new systems, mounds of trial data must be created or accumulated. One of the simplest ways to do this is to copy a subset of real data to the test system. Millions of application teams have done this for generations. These days, however, using real data in test systems can get you in serious trouble, especially if you forget that the same privacy rules apply.

In today's privacy-conscious world, you should always create bogus test data for your test systems. After all, test systems are rarely as well protected as production systems, and testers do not treat test data with the same care as production data. In test systems, passwords are short, often shared, or not used at all. Application access control is often wide open or at least overly permissive. Test systems are rarely secure. It's a fact that hackers love to exploit.

Lessons learned: Either create bogus data for your test systems or harden test systems that contain real data as you would any production system.
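Here is a minimal sketch of the first option: generating obviously bogus customer records for a test system using nothing but the standard library. The field names and value formats are illustrative assumptions.

```python
# Minimal sketch: generate bogus customer records for a test system instead
# of copying production data. Fields and formats are illustrative only.
import random
import string

FIRST_NAMES = ["Alex", "Sam", "Jordan", "Taylor", "Morgan"]
LAST_NAMES = ["Smith", "Nguyen", "Garcia", "Okafor", "Kowalski"]

def fake_customer(customer_id: int) -> dict:
    return {
        "id": customer_id,
        "name": f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}",
        "email": f"test.user{customer_id}@example.invalid",
        # Obviously invalid card number so nobody mistakes it for real data.
        "card_number": "0000-0000-0000-" + "".join(random.choices(string.digits, k=4)),
    }

test_data = [fake_customer(i) for i in range(1, 1001)]
print(test_data[0])
```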

Colossal security mistake No. 7: Using a corporate password on the Web at large

Hacking groups have been incredibly successful using people's website passwords to access their corporate data. Routinely, the victim is phished with an email that purportedly links to a popular website (Facebook, Twitter, Instagram, and so on), or the website itself has had its password database stolen. Either way, the bad guy has passwords that he bets people use elsewhere, including with company assets. Time to poke around and see what kind of access that earns.

One particular group has hacked many of the world's biggest and best companies using the same attack. (I won't mention the group's name because I don't want to give it additional exposure.) The hacker group has access to reams of confidential information and has purposely embarrassed the compromised companies by taking over their websites and social media accounts to make humiliating posts.

I know of several companies that proactively examine publicly accessible hacked password databases for names matching their employees'. (Some readers may be surprised to learn that hackers often post breached password databases in public places, then invite others to access them.) In every instance, the companies have been able to find at least a few shared passwords (or password hashes) and track them back to their now compromised employees. In some cases the employees were given additional education about password use. In others, where a "don't share your password" policy was already in place, the employees were reprimanded or let go.
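As a rough sketch of that proactive check, the following Python cross-references a leaked credential dump against an employee list and flags accounts for a forced reset. The CSV file formats and the follow-up step are assumptions for illustration.

```python
# Minimal sketch: cross-reference a leaked credential dump against your
# employee directory and flag accounts for forced resets. File layouts and
# the follow-up action are illustrative assumptions.
import csv

def load_leaked_accounts(path: str) -> dict:
    """Map email -> leaked password hash from a public breach dump (two columns)."""
    leaked = {}
    with open(path, newline="") as fh:
        for email, pw_hash in csv.reader(fh):
            leaked[email.strip().lower()] = pw_hash.strip()
    return leaked

def load_employees(path: str) -> list:
    """Read employee email addresses from the first column of a CSV export."""
    with open(path, newline="") as fh:
        return [row[0].strip().lower() for row in csv.reader(fh)]

leaked = load_leaked_accounts("breach_dump.csv")
employees = load_employees("employees.csv")

for email in employees:
    if email in leaked:
        # The corporate password may or may not match the leaked one, but a
        # reset and a reminder about password reuse are cheap insurance.
        print(f"Force reset and follow up with: {email}")
```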

Lesson learned: Make sure all employees understand the risk of sharing passwords between nonwork websites and security domains.

Colossal security mistake No. 8: Opening big "ANY ANY" holes

You'd be surprised at how many firewalls are configured to allow all traffic indiscriminately into the network and out.

This is even more interesting because almost all firewalls begin with the least permissive, deny-by-default permissions, then somewhere along the way an application doesn't work. After much troubleshooting, someone suspects the firewall is causing the problem, so they create an "allow ANY ANY" rule. This rule essentially tells the firewall to allow all traffic and to block nothing. Whoever requests or creates this rule usually wants it only for a short while to figure out what role the firewall might play in the problem. At least, that's the initial thinking.

Somehow these rules get left in place for a long time. Most environments I audit have at least one major firewall or router with an "ANY ANY" rule enabled. Usually the firewall administrators and IT security people are shocked to learn that the "temporary" rule is still in effect. These accidentally permanent "ANY ANY" holes are usually discovered by auditors (like me) or by hackers. Unfortunately, discovery by the latter can lead to the unemployment line.
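A lightweight audit can catch these forgotten rules before an outside auditor or a hacker does. Here is a minimal Python sketch that scans an exported rule list for over-permissive entries; the CSV export format is an assumption, so adapt it to whatever your firewall management console actually produces.

```python
# Minimal sketch: scan an exported firewall rule list for over-permissive
# "ANY ANY" entries. The CSV columns (name,action,source,destination,service)
# are an assumed export format, not any vendor's actual schema.
import csv

def is_any_any(rule: dict) -> bool:
    wide_open = {"any", "*", "0.0.0.0/0"}
    return (
        rule["action"].lower() in ("allow", "permit")
        and rule["source"].lower() in wide_open
        and rule["destination"].lower() in wide_open
        and rule["service"].lower() in ("any", "*")
    )

with open("firewall_rules.csv", newline="") as fh:
    for rule in csv.DictReader(fh):
        if is_any_any(rule):
            print(f"Over-permissive rule still in place: {rule['name']}")
```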

Lesson learned: Don't ever allow "ANY ANY" rules to be deployed.

Colossal security mistake No. 9: Not changing passwords

One of the most common mistakes that can put your job on the line is not changing your admin passwords for a very long time. My auditing experience has made this very clear. Almost all companies have multiple unexpired, years-old admin passwords. In fact, it's the norm.

Every computer security configuration guide recommends changing all passwords on a reasonable, periodic basis, which translates to every 45 to 90 days in practice. Admin and elevated passwords should be stronger and changed more frequently than user passwords. At most companies, admin passwords are long and complex, but almost never changed.

The work of changing these passwords doesn't have to be onerous. Lots of corporate software is available for automating the process of changing admin passwords, even creating temporary one-time passwords. Still, even in companies using this software, I find a ton of unchanged passwords.

What's the rationale behind this lackadaisical password practice? Consider the first mistake mentioned at the beginning of this article: interrupting critical business functionality. A software system can easily change passwords for admin accounts, but what happens when those same accounts and passwords are used within other applications and systems across the corporate network? If you change one but not the other, you will often get a service disruption for as long as the two stay out of sync. Even if you change the password in both the account and the application, it may take a restart or reboot for the change to take effect.
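One way to keep the two from drifting apart is to rotate the directory password and update the dependent application configuration in a single pass. The sketch below illustrates the idea in Python; set_directory_password and restart_service are hypothetical stand-ins for your directory and service-management tooling.

```python
# Minimal sketch: rotate a service account's password and update the
# application config that embeds it in one pass, so the two never drift
# out of sync. Directory and service calls are hypothetical placeholders.
import json
import secrets

def set_directory_password(account: str, password: str) -> None:
    print(f"(directory) password changed for {account}")  # placeholder

def restart_service(name: str) -> None:
    print(f"(ops) restarted {name}")  # placeholder

def rotate_service_account(account: str, config_path: str) -> None:
    new_password = secrets.token_urlsafe(24)   # strong, random, unique

    # 1. Change the password where the account lives.
    set_directory_password(account, new_password)

    # 2. Update the application config that embeds the same credential.
    with open(config_path) as fh:
        config = json.load(fh)
    config["service_account_password"] = new_password
    with open(config_path, "w") as fh:
        json.dump(config, fh, indent=2)

    # 3. Restart/reload so the application picks up the change.
    restart_service(config["service_name"])

# Example usage (assumes app_config.json exists with a service_name field):
# rotate_service_account("svc-reporting", "app_config.json")
```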

This operational complexity ends up pressuring admins and application owners to exempt their accounts from forced password changes. Fear of interrupting critical business systems begets foolhardy password practice.

Worse, admin passwords are often shared around the network, known by many people. These passwords should not be shared in the first place, but if they are, they need to be changed immediately anytime someone who knows the password leaves the company. Failure to follow this policy is the first step in enabling a fired, disgruntled employee to get back into the network to cause great harm.

Lessons learned: Periodically change all passwords, especially admin and service accounts. And always change passwords immediately upon separation of employment. Plus, don't use admin accounts and passwords to power your applications.

Colossal security mistake No. 10: Treating every vulnerability like "the big one"

One of the worst things you can do for your career is to cry wolf too often. Every year, a few of the thousands of newly discovered vulnerabilities become "the big one." This year, Heartbleed and Shellshock fit the bill, rightly deserving your attention and remediation.

But there will always be a significant number of vulnerabilities that colleagues and the media tout as the critical hole that will cripple your network and systems. It takes experience and skill to recognize what you really need to be worried about. If you run around panicking at every last "big" vulnerability, you risk being seen as someone who doesn't know their job, can't discern the real threats to your business, and shouldn't be taken seriously, even when your alarm coincides with a vulnerability your company should definitely pay attention to. Granted, crying wolf likely won't get you fired, but it can certainly cause roadblocks to your long-term upward mobility.

Lesson learned: Correctly prioritize vulnerabilities, and be careful not to undermine your credibility with colleagues by wasting their time with false alarms.
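As a rough illustration of that prioritization, here is a minimal Python sketch that ranks vulnerabilities by base severity plus a couple of exposure factors. The sample entries and weights are illustrative assumptions, not a standard scoring scheme.

```python
# Minimal sketch: rank vulnerabilities by severity plus exposure, instead of
# treating every new flaw as "the big one". Entries and weights are
# illustrative assumptions.
vulns = [
    {"name": "heartbleed-on-public-web-tier", "cvss": 7.5, "internet_facing": True,  "exploit_public": True},
    {"name": "rce-on-internal-batch-server",  "cvss": 9.0, "internet_facing": False, "exploit_public": False},
    {"name": "dos-on-desktop-media-player",   "cvss": 4.3, "internet_facing": False, "exploit_public": True},
]

def priority(vuln: dict) -> float:
    score = vuln["cvss"]
    if vuln["internet_facing"]:
        score += 2.0   # reachable by anyone, not just insiders
    if vuln["exploit_public"]:
        score += 2.0   # working exploit code is already circulating
    return score

for vuln in sorted(vulns, key=priority, reverse=True):
    print(f"{vuln['name']}: priority {priority(vuln):.1f}")
```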
