Cyber CIRTainty

Perhaps the subtlest damage caused by last year's cyber-attacks on US government agencies and high-profile organisations like eBay and Microsoft was the way they made it appear that only high-profile companies and government agencies were vulnerable to cyber crime. With publicity about such incidents raging, it's all too easy for complacent companies to imagine that only large organisations and governments need to think about covering their assets with state-of-the-art intrusion detection and protection systems, or that preventive security is just a money-spinner for IT security vendors.

It goes without saying that both assumptions could prove dangerous, as the devastation that Melissa, Love Bug and other viruses left in their wake over the past couple of years should show. Nor is it just viruses that are becoming more active; hackers are too: in the third quarter of 2000 there were 2126 incidents reported directly to AusCERT, compared with 352 in 1999 and 550 in 1998. Of those, more than 90 per cent were probes and unsuccessful attacks, suggesting a greater emphasis on scans and probes aimed at gathering information and exploiting known vulnerabilities in systems, rather than simply searching for them. And these are just the reported probes and attacks - no one can confidently estimate how much hacking activity is really going on.

That's why more and more Australian companies are setting up an in-house cyber-incident response team (CIRT) as a valuable resource for continual troubleshooting, reporting, response, training, updating, and support for their information security concerns. GartnerGroup analyst Joe Sweeney says no amount of solid planning is enough to prevent enterprises being attacked by malicious external sources, most probably through exploiting recently discovered software bugs. "When attacks occur, enterprises must have the capacity to detect them as quickly as possible and the intrusion-detection technology to assist with the process," he says.

That's where the CIRT comes in. The CIRT should be skilled both in investigating technical anomalies and in establishing a virtual crime scene. It should also have a working relationship with local, state and federal law-enforcement agencies. In short, an effective CIRT not only helps organisations avoid attacks and know when they are under attack, but also helps them determine exactly what the attacker has done or is attempting to do. Most important of all, it lets them decide how to thwart the attack and recover from any damage done.

"From an availability perspective, the key is in planning to avoid potential incidents as much as possible. In detection, containment and incidence responses, the key is to recover as quickly as possible after an event occurs," Sweeney says.

A CIRT is Born

While there are some significant stumbling blocks to overcome during the start-up phase of a newly-formed CIRT, Sweeney says in the long term the CIRT's presence will enable the enterprise to differentiate itself from the competition in ways that will only grow as the threat levels expand. That can only be for the good. However, according to Co-Logic (New Zealand) managing director Arjen de Landgraaf, the business thinking of far too few organisations has caught up with today's reality.

"Can you imagine in our current Western society a house without a burglar alarm; or, for that matter, a car? Nowadays we would consider it virtually impossible. However, we have not thought of implementing the same for a business system. If anyone gets the key to the front door (a user ID or password), they are in and the business systems and information are up for grabs. Intrusion detection (that is, is the person that has a key and uses it to enter our business system right now actually allowed to do what they are doing?) is critical, as without it you don't even know someone is in."

Whether they stem from insufficient knowledge and expertise, under-resourced IT departments or third-party systems integrators, errors in the configuration, set-up and management of system installations and software implementations are as serious a problem as software suppliers who fail to properly design, write and test their systems, de Landgraaf says. Just consider how many default passwords are built into systems delivered for system engineering purposes, as well as "black boxes" like routers, switches and PABXs.

In response, many organisations have set up some form of CIRT, however primitive. Dean Kingsley, a partner in Deloitte Touche Tohmatsu, has advised many Australian organisations on IT security and says existing in-house CIRTs range across a wide spectrum. About one-third of leading organisations have recognised the need to monitor their networks for unauthorised activity and have implemented large-scale intrusion-detection systems, he says.

"Now obviously, if you've implemented one of those, you have to have a monitoring capability. Even if it's just one individual getting paged at two in the morning, you have to have someone actually giving the alerts, getting paged or whatever. So you've got that end of the spectrum, obviously all the way through to much more sophisticated response mechanisms," Kingsley says.

"But most organisations tend to be down at that [low] end of the spectrum. They've got intrusion detection technology implemented and they've got people who are responsible for monitoring and responding to instances as they occur. I think you'll find there are an awful lot of organisations in Australia that are doing that for themselves," he says.

At Your Peril

Most government agencies, all of the telcos and most banks and insurance companies have set up reasonably extensive CIRTs. Nevertheless, while some other organisations are starting to roll out their own CIRTs, including most companies using intrusion detection technology, too many have still to take this vital step. And the failure could cost them dearly.

"A number of things can happen if you don't have a CIRT," says Steve Laskowski, managing director of Internet Security Systems Australia, whose RealSecure intrusion detection technology is used by many Australian companies. "The first is you won't be able to differentiate between a real attack and a false alarm. That is primarily what the CIRT's role is: determining whether the attack is real or not." Even with the most state-of-the-art technology available today, he says, there is still no way to know with 100 per cent certainty when you are under attack. That means there is still an art to determining whether an actual attack is happening or not.

However, if the organisation is attacked, the CIRT has another vital function: managing communications with the outside world when things go wrong.

Laskowski cites Microsoft as an example of a company that handled a recent attack very poorly. "They made two fundamental errors," he says. "They had too many people speaking to the press and they had an inconsistent story which, because it was inconsistent, many people didn't believe.

"One of the most important roles of a CIRT is to have a single point of contact to the outside world about what is going on if indeed you have to go public," he says.

Dual Challenge

Establishing and running a CIRT presents a twofold challenge, with both technology and people issues to be overcome.

"As far as the technology challenge goes, it's very difficult to get technology that can distinguish between real events that are of concern and events that are not of concern," Kingsley confirms. "Even the market-leading product - ISS's RealSecure product - produces a lot of false positives or alerts that, when investigated, turn out not to be someone trying to break in."

To a large degree, this is because most break-in activity is designed to look like normal network traffic. As a result, most organisations have to set the technology fairly coarse, ensuring it picks up numerous incidents, only a small number of which turn out to be hacking attempts. That technology challenge turns into a human challenge, because organisations have to tune their systems heavily over time so that more and more of the noise is filtered out and what remains is increasingly the genuine incidents they are interested in.
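To make that tuning process concrete, here is a minimal sketch of the idea: a triage step that suppresses alerts matching rules the team has already investigated and found to be benign, so the on-call analyst sees mostly genuine incidents. It is not drawn from any product mentioned in this article, and every signature name and address in it is hypothetical.

from dataclasses import dataclass

@dataclass
class Alert:
    signature: str   # for example "PORTSCAN"
    source_ip: str
    target_ip: str

# Tuning rules: (signature, source-address prefix) pairs that past investigation
# has confirmed as noise, such as an internal network-management station that
# sweeps every subnet nightly.
BENIGN_RULES = {
    ("PORTSCAN", "10.1.5."),
    ("SNMP_PUBLIC", "10.0.0."),
}

def is_noise(alert):
    """True if the alert matches a tuning rule and can be suppressed."""
    return any(
        alert.signature == sig and alert.source_ip.startswith(prefix)
        for sig, prefix in BENIGN_RULES
    )

def triage(alerts):
    """Drop known noise; everything else goes to the on-call CIRT member."""
    return [a for a in alerts if not is_noise(a)]

if __name__ == "__main__":
    feed = [
        Alert("PORTSCAN", "10.1.5.20", "10.2.0.7"),            # matches a tuning rule
        Alert("HTTP_IIS_UNICODE", "203.0.113.9", "10.2.0.7"),  # unexplained: escalate
    ]
    for alert in triage(feed):
        print("Escalate to CIRT:", alert)

In practice the rule set grows as analysts close out false positives, which is exactly the heavy tuning over time that Kingsley describes.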

"The true human barrier, though, is partly skill-set and training," Kingsley says. "There still aren't enough people who have the knowledge to distinguish this normal traffic from hostile traffic. It is pretty hard to tell the difference and it does take a reasonable degree of technical competence to sort it out."

Team members must also be trained to decide what to do in the event of a real incident, depending on the desired outcome. "If you just want to stop someone doing it, obviously you can just turn off the tap; whatever path they use to try to break in, you shut that path off," Kingsley says. "More interestingly, I guess, organisations usually want to know who's doing it and why they're doing it, and want to prosecute them so they don't do it again. When you're moving down that legal path, that's when forensic knowledge - what's evidence and what are the current laws - comes to the fore."

An Ounce of Prevention

The first step in preventing a cyber-incident from devastating the enterprise is to plan a solid CIRP (cyber-incident response plan), Sweeney says. The plan should require adherence to minimum security policies and practices across all systems, networks and applications, and describe the appropriate level of response in the event of an incident.

It should also recognise that most incidents start within the organisation. That makes it crucial that the organisation understand the process for conducting an investigation, collecting and preserving evidence properly, and responding correctly to both legal and technical requirements. Experts say CIRT members must be able to collect and handle evidence, prepare a case, testify in a legal proceeding, and be capable of determining the root cause of a security incident.

Yet such skills are not so easy to find. "About the only common attributes between existing incident response teams are that they are under-funded, under-staffed, and overworked," wrote former AusCERT team member Danny Smith in a paper called "Forming an Incident Response Team", which is still on AusCERT's Web page.

Smith says determining the appropriate number of staff to employ requires a fine balance between the expected (and probably as yet unknown) workload and the budget constraints. AusCERT finds one full-time technical person can comfortably handle one new incident a day, with 20 incidents that are still open and being investigated. "Anything over this rate does not allow for any other involvement than incident response. This may have many negative aspects," he says.

As well as technical staff, the team needs management, administrative, and clerical support, whether the services are contracted from other organisations, or people are employed to fulfil these roles. Smith says those staff must be looked after well.

"The biggest issue facing staffing levels is staff burnout. It is a problem that if staff are continually placed under stress by being on 24-hour callout, and working long hours on complex incidents, their mental and physical health may begin to suffer. It is highly recommended that to operate on a 24-hour callout basis, a minimum of three, full-time staff are required," he says. "Staff should be rotated through the high stress positions and, when they are rostered off, they should be given the opportunity to pursue other less stressful activities such as tool and course development. However, staff should always be available to assist when the emergency load becomes excessive."

Smith says small teams in particular can't possibly expect to have a complete set of skills such as those required to manage today's complex and diverse array of computer hardware and software. Under such circumstances, there should be no shame in admitting to the constituency that the team does not possess the necessary skills to tackle a certain problem. A team in this situation can cultivate contacts within the community that do possess the required skills.

"Develop a level of trust with these contacts and use them from time to time when the team's skills are inadequate," Smith advises. "Be careful of always making use of the same people - as they become less reluctant to help over time (due to other work commitments) - and risking the wrath of their management. In general, people are willing to assist in true emergency situations, but are more reluctant to devote time to more mundane situations, or bolster the ranks of the IRT for free if the team is inadequately staffed."

However, manning the CIRT may be the relatively easy bit. Sweeney warns that forming a CIRT, and then implementing it, is a project that always arouses significant suspicion from every business unit, because it requires a dramatic change in roles and responsibilities - particularly within independent business units. In every case, he says, the business unit must surrender territory and relinquish absolute control to a group that is seen as an intruder intent on upsetting the status quo.

"The most pressing issue a CIRT faces is establishing the need for a strategic vision that transcends the ‘We don't have a problem; what are you worrying about?' mindset," he says. Typically, this is where most initial CIRT projects fail.

Once agreement is won to set up the CIRT, it needs both mission and goal statements. Sweeney recommends the CIRT mission statement should be: "To provide a resource capable of defending the technological infrastructure against threats and anomalies that may emanate from a malicious source."

The goal statement might read: "Determine if the incident under evaluation originated from a malicious source and, if so, first contain the threat and then isolate the enterprise from the attacker."

Information Sources

There are a number of excellent sources of public information providing guidance in developing a CIRP and creating a CIRT.

They include the CERT Coordination Centre at the Software Engineering Institute of Carnegie Mellon University (www.cert.org). Its Handbook for Computer Security Incident Response Teams can be found at http://interactive.sei.cmu.edu/Recent_Publications/1999/March/98hb001.htm.

There is also the Forum of Incident Response and Security Teams (FIRST). This is a non-profit volunteer group that attempts to foster cooperation and coordination in incident-prevention across diverse sectors, prompt rapid reaction to incidents, and promote information sharing among members and the community at large (www.first.org).

The SANS Institute (www.sans.org) has an excellent publication called Computer Security Incident Handling, which provides a checklist format for responding to a cyber-incident.

- S Bushell

Skills Breakdown

GartnerGroup analyst Joe Sweeney identifies the following skills, knowledge and abilities as important to the CIRT:

Ability to look beyond the obvious

Superior technical skills and lots of common sense

Hands-on investigative/forensic experience

Ability to act consistently under extreme pressure and never make matters worse

Tendency to think like a criminal or disgruntled employee

Understanding of "big picture" ramifications.

The composition of a CIRT will always be a function of the size of the enterprise it represents, Sweeney says. In enterprises of more than 60,000 employees supporting an extremely diverse infrastructure, he says, an optimal CIRT would consist of 18 full-time employees with expertise in the following areas:

Project management (this person would act as team leader)

A corporate attorney specialising in human resource and litigation issues

A corporate attorney specialising in cyber- and criminal law

A human resources separation specialist

A cyber-investigator specialising in intranet/Web systems

A cyber-investigator specialising in Internet/extranet systems

Technical experts in the fields of Unix, Windows, Linux, mainframes, minis, network firewalls and routers, database design, storage systems, telecommunications and phone systems, and PERL programming

A contingency planning and disaster-recovery specialist

An audit and compliance specialist.

Sweeney says the organisation can achieve optimum impact, in terms of effectiveness and efficiency, if these positions are temporary assignments lasting between 18 and 24 months. "This approach will eventually result in a greater awareness among key personnel of the challenges that a CIRT faces," he says.

Start-up costs for such an 18-member CIRT would be about $US150,000 per employee in the first year. This figure includes access to a 24x7 command centre and test lab; specialised training in forensic and chain-of-custody issues; high-end laptops configured as network monitoring devices; high-end desktop platforms with a minimum of 200GB of storage, designed to be used for log analysis; a secure office environment; and expenses to cover attendance at eight to 10 security conferences in the first year.

"With the additional cost of salary and benefits typical for large metropolitan areas, the initial start-up budget for a full-time, 18-member CIRT could easily approach $6 million," Sweeney says.

Scope of Operation

According to former AusCERT team member Danny Smith, the CIRT should address early on the question of which types of incidents the team will handle and which it will ignore. "These questions must be answered and those answers communicated to the community," he says.

The types of incidents that may or may not be handled could include:

Intrusions

Software vulnerabilities

Requests for security information

Requests to speak at conferences

Requests to perform on-site training

Requests to perform on-site security audits

Requests to investigate suspected staff

Viruses

International incidents

Illegal activities such as software piracy

Requests to undertake keystroke monitoring

A decision must also be made about the level of assistance that will be provided. Will the team merely forward notification of security incidents to the affected sites, or will it work closely with each site to determine the extent of the intrusion and help it better secure its systems?

Justifying the CIRT is easier if savings to the community are identified. This is typical of any risk analysis situation, in which the cost of reducing the risk should not exceed the cost of the potential loss (a trade-off illustrated in the sketch after the list below). Possible savings could include:

Real money costs in staff time spent handling incidents

Costs of staff gathering and verifying security information

Lost opportunity costs

Loss of reputation (or gaining a reputation!)

Threats to "sensitive" data.
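A minimal sketch of that trade-off, using annualised loss expectancy (a standard risk-analysis framing, not one taken from the article) and entirely hypothetical dollar figures:

# Hypothetical annualised loss expectancy (ALE) comparison; every figure is invented.
single_loss_expectancy = 250_000      # estimated cost of one serious intrusion
incidents_per_year = 4                # expected serious intrusions per year
ale_without_cirt = single_loss_expectancy * incidents_per_year   # $1,000,000

# Assume the CIRT halves both the frequency and the impact of serious incidents.
cirt_annual_cost = 600_000
ale_with_cirt = (single_loss_expectancy * 0.5) * (incidents_per_year * 0.5)  # $250,000

net_annual_saving = ale_without_cirt - ale_with_cirt - cirt_annual_cost      # $150,000
print(f"Expected net annual saving from running the CIRT: ${net_annual_saving:,.0f}")

If the net figure is negative under realistic estimates, the risk analysis argues for a smaller team or a different mix of controls; the point is simply that the cost of the CIRT should be weighed against the losses it is expected to avert.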
