The dotcom tsunami left a glut of hosting facilities in its wake, with many now being rebadged as data or disaster recovery centres. But what seemed like a safe bet in the 90s – a mirror site just a stone’s throw away – looks more like a gamble post-September 11.
At first glance, security at the data centre — billed as the most impenetrable Australia could offer and sited across two floors of a building at the very heart of Sydney’s CBD — was “spectacular”. There were multiple levels of physical security ranging from video surveillance to state-of-the-art iris scanning devices; no employee could get in without photo ID, and every corridor bristled with guards and fire extinguishers.
But when the owners of the centre invited a prominent security analyst to test their facilities, they were in for unpleasant news.
It was shocking enough that the analyst was able to poke his nose into every corner of the facility after identifying himself on security forms as Osama Bin Laden and giving his address as a tent somewhere in Pakistan. (“You should have seen the look on my client’s face later when I said: ‘And look at this . . . ’,” the analyst told CIO.) Far worse — because less easily addressed — were the other vulnerabilities he picked up, not least those created by the data centre’s very location.
“Now I have to say, the facility was pretty spectacular,” the analyst says. “They had all these biometric iris scanning access devices and all the rest of it, but if you don’t police the low-end policy end of it, then you still have a problem. They actually made the mistake of pointing out which computers in their facility handled which clients, and how significant that was. I couldn’t believe it. This was supposed to be super security and they had no idea.
“I said to them: ‘Okay, you’ve got these two floors and that’s really lovely, and fire extinguishers every 10 centimetres and all the rest of it, but what if I just decide to turn a hose on upstairs? And what about your pipes back into the infrastructure in the basement? That’s going to be the way to take out your facility, isn’t it?’”
But perhaps the greatest area of potential concern of all was the centre’s very location. There is no doubt data centres located in the CBD are likely to be more vulnerable to terrorist attacks than those in outer suburbs, the analyst says. “That makes having a hot backup site that is outside of the CBD, particularly in the Sydney context, vitally important. And if you were checking the data centre owner you’d check what redundancy they had not only in fibre, but also whether they had a satellite backup.”
Under their duty of care, directors and senior executives are obliged to increase shareholder value through prudent investment and asset protection. CIOs are obliged to help them meet these obligations by ensuring the physical protection and assured business continuity of valuable company information. And one consideration those CIOs must now weigh, in the world ushered in on September 11, 2001, is the location of the data centre: the primary concern in corporate disaster recovery has shifted from natural disasters to potential terrorist attacks within CBDs, which can affect buildings and infrastructure indiscriminately over a large radius.
After all, more than 406 buildings were impacted and eight demolished in the terrorist attacks on the World Trade Centre, but the total disruption was far wider, with damage to infrastructure and communications including subways, roads and bridges. Access was further restricted on security grounds. And disaster piled on disaster for some imprudent organisations based in the World Trade Centre whose redundant systems were in the other tower. With many data centres still located in CBDs across Australia — typically a legacy of the dotcom era — how many companies could assure business continuity in the wake of the unthinkable: a major terrorist attack on the heart of one of our major cities?
Dr Adam Cobb, a national security expert and director of stratwar.com, a Sydney-based defence consultancy firm, thinks in the current climate it is “vitally important” for organisations, and particularly Sydney-based organisations, to have a backup data centre outside the CBD.
Cobb says terrorists planning an attack will inevitably opt for “the big targets” — bridges, tunnels, national landmarks like the Opera House — because it is those targets that will make the international news, and from their point of view anything less is pointless. Since most of those big targets will inevitably be located in or close to the CBD, the CIO needs to balance their vulnerability against the threats to the organisation.
And when it comes to the preponderance of data centres in places like the Sydney suburbs of Ryde and West Pennant Hills, Cobb warns that organisations need to be aware of the temptation such clustering may itself offer terrorists looking to inflict maximum corporate damage.
“There does seem to be a congregation of [data centres] in Ryde,” Cobb says. “So you look at the Ryde Exchange — and these maps are still spectacularly available — and how hard it would be to take it out. And you look at what format they back things up on, right down to what secondary systems they rely on. So, for example, do the data centres have airconditioning ducts that are exposed and easily accessible? If you manage to disable the airconditioning, because computers need to operate at a certain temperature and all the rest of it, then that can cause problems. And so on and so forth — there are a number of different ways to crack the nut, as it were. At the end of the day, there really is no perfect solution, and it’s a risk assessment process,” he says.
Recover CEO and Business Continuity Institute Australia representative John Worthington agrees that should a terrorist decide to bomb the Ryde Exchange (or any other exchange where multiple data centres are clustered) they could do an enormous amount of damage. “You probably should look at having your data centre where others are not,” he says.
Organisations should also check that the data centre is located outside the power grid for the area the business is located in. “Power outage is one of the most common factors that will drive you to your data centre. So what you don’t want to find is that you have a power outage affecting your office and your data centre, which might only be a couple of blocks away,” Worthington says. “The thinking at the moment is that the data centre should be located probably no closer than say 10 kilometres from your current site, but that depends on the power grid locations also.”
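Worthington’s rule of thumb can be reduced to a simple check. The sketch below, with hypothetical function names and illustrative coordinates, computes the great-circle distance between an office and its data centre and tests it against the 10-kilometre guideline; note that straight-line distance says nothing about power grid boundaries, which Worthington notes must be checked separately.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

MIN_SEPARATION_KM = 10.0  # rule of thumb cited in the article

def separation_ok(office, dr_site, minimum=MIN_SEPARATION_KM):
    """True if the data centre is at least `minimum` km from the office."""
    return haversine_km(*office, *dr_site) >= minimum

# Illustrative (approximate) coordinates: a Sydney CBD office and a Ryde site.
office = (-33.8688, 151.2093)
ryde = (-33.8150, 151.1030)
print(separation_ok(office, ryde))
```

A real assessment would overlay the distribution network’s grid zones rather than rely on distance alone.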
“Many data centres have been located outside city centres,” notes Gartner Australia vice president and chief of research John Roberts. “It’s a decision based on relative real estate costs, availability of 24x7 operating staff, security, telecom infrastructure costs and so on. For example, IBM has operated two major centres — one in Ballarat, the other in Pennant Hills that I believe can back each other up — and then they have others they have inherited as they do outsourcing deals.”
“Part of minimising the risk,” Roberts says, “is ensuring there are at least two completely separate channels out of the data centre, running to two different exchanges.” And security-minded organisations can go much further, he says. In earthquake-prone Japan and California, disaster recovery plans would normally involve having two separate data centres in different cities effectively mirror imaging each other.
b-sec director Oliver Binz cites approvingly the arrangements of NEMMCO (National Electricity Market Management Company), which has co-primary sites in one location and a third site that can kick in at any moment.
“Of course there are the stories you’ve no doubt read about people having their primary site in one tower and their secondary site in the other tower at the World Trade Centre,” Binz says. “But realistically, if you’ve got your sites spread across Brisbane/Sydney or Melbourne/Sydney, I think probably from a risk point of view that that would be quite acceptable.”
Optus has recently gone one better, moving to provide its customers with access to three data centres — two in Sydney at Ultimo and Rosebery and one in the outer Melbourne suburb of Sunshine — all located on top of Optus exchange sites on its Internet backbone and all connected for extra security. The new facilities replaced two in the Sydney suburb of Ryde and one on Melbourne’s St Kilda Road, which have now been reserved purely for telecommunications use.
According to Optus Business managing director Peter Kaliaropoulos, there are two components to quality when it comes to the data centre: security and diversity. “There’s no more secure environment in a telco infrastructure apart from the exchange; it is the most secure part of our infrastructure,” Kaliaropoulos says. “And in the Ryde co-location centres we didn’t have the high levels of diversity required, or the highest level of security.”
Optus e-business & hosting general manager Noel Hamill says the changes were made partly as a response to the new emphasis on location evident among customers since September 11, 2001. “What we discovered was that physical location entered into the equation where it wasn’t such a driver before. That was a contributing impetus for the decision to locate into Sunshine in Melbourne because that’s probably 30 minutes’ drive from the city centre. So it gives that physical diversity, and of course industrial parks tend not to be targets for terrorist activities compared to landmark buildings,” Hamill says.
But if one lesson to emerge from the September 11 attacks was the vulnerability of data centres in metropolitan locations, Hamill says Optus has also learned in the four years it has been in the hosting game that connectivity is an even more important factor. “We’re running a strategy where we’re putting our data centres on top of our exchanges, and the exchange is obviously the telecommunications infrastructure, which provides the data connectivity to do the Internet out to other companies.”
He claims the Optus exchange meets “incredibly high standards” of security, diversity and redundancy. For example, there are multiple loops over the fibre core into the Sunshine exchange to protect against damage to cables. “You also need to consider bandwidth, bandwidth and bandwidth,” Hamill says. “Most of the applications that are being hosted in these facilities require large transfers of data.”
With physical proximity to the client organisation no longer a consideration thanks to the Internet, ePrint Web Hosting is providing a service to Australian businesses that involves their data being co-located in Houston and San Antonio, Texas in the US (well outside the CBD). Managing director Andrew Hennell denies location was a major factor in the decision. Rather, he says, the company chose the site because it provides military-grade security, and because of the number of networks and the bandwidth that comes into the facility.
“Pretty much if you’re hosting with Optus you’re with Optus, if you’re with Telstra you’re with Telstra,” Hennell says. “Whereas we’ve found that the peering networks and so on in the US and the number of networks connected to this data centre give us a degree of connectivity that we can’t find within Australia.”
Hennell says ePrint is now in the process of closing its Sydney CBD data centres because the benefits it enjoys in hosting in the US far outweigh any advantages in maintaining an Australian presence. But he insists if prices and connectivity were equivalent he would not be opposed to a Sydney CBD location, despite any fears of a September 11-type attack on these shores.
“If you look at something like September 11, where both buildings of the World Trade Centre were taken out and there was a lot of damage done, on the greater scale of Manhattan Island, it was only a very small percentage that was hit. So there are possibly thousands of smaller data centres across New York, across Manhattan Island, that were functioning fine,” he says. “Yes, telecommunications and so on were hit hard on the island and there were issues with that as well, but that could also be the case if you were in a suburban location.”
Hennell has another counter to the wisdom that organisations should move their data centres out of the CBD post haste, pointing out that one of the problems with suburban locations in Australia is limited access to telecommunications services. “Sydney CBD, North Sydney, Chatswood, St Leonards — in that area you have got plenty of accessibility, and so within that footprint you’re fine,” Hennell says. “But if you start moving even out to Parramatta or smaller areas within the city, then the access to that level of IT infrastructure is greatly diminished, or becomes a lot more expensive to provision.”
On the other hand, RACV CIO Charles Burgess says he has no worries about housing his mid-range data at IBM’s Baulkham Hills, Sydney data centre. “My personal feeling about disaster recovery centres is if there is an event so large it is going to take out a whole disaster recovery centre somewhere in a CBD, then all of us are going to be worrying about things other than whether our IT systems are working. We’ll be looking for the next meal instead,” says Burgess. “I am more concerned when I look at disaster recovery centres as to who else is in them, and how many people per system are being supported.
“What you don’t want is customers that are in the same geographic location as yourself, at the same data centre, because if we have a major problem out at Noble Park, I don’t want to be in the same data centre as everyone else at Noble Park,” he says.
Thinking About Disaster
A little planning goes a long way
In a previous role in another company, and long before September 11, 2001, former Zurich Financial Services Australia CIO Peter Delprado (now head of investment management and life) simulated a 747 hitting the corporate data centre. The results made clear that aside from the putative damage to the data centre itself, the effect on personnel was one of the biggest concerns to be addressed. (The scenario found you had to assume 50 per cent of your staff would be lost in the disaster.)
But when Delprado oversaw the company’s move from a data centre in St Leonards to one in Chatswood earlier this year, the fact that the old centre was located towards the top of a high-rise building and hence was vulnerable to a September 11-style attack was only one of the weaknesses that worried him. Just as much a risk, in Delprado’s eyes, was the restaurant kitchen underneath, which he considered posed an extreme fire hazard to Zurich’s critical data.
Additionally, there was no redundant airconditioning and only one telecommunications cable out of the building via Telstra. The possibility of an effective disaster recovery (DR) strategy was made more difficult because the storage infrastructure was siloed, there were too many single points of failure and the existing building was not suitable for bringing new storage online easily.
“Now we have found a new location, where the machinery itself is on two levels below ground so even in the event that the building is hurt or hit, unless the entire building collapses, it still has a reasonable chance of it surviving,” Delprado says.
The purpose-built centre could easily withstand efforts to ram the building, and has redundant cooling including both air and water coolers to deal with any loss of water to the building. The site also came with an uninterruptible power supply with twin generators and a gas fire suppression system, and enough diesel generators to cope with up to a month without power.
“The new site has all but removed any physical risk to our data centre while delivering significant cost savings. In rough terms, we estimate our new data centre to be saving us around $1 million a year,” Delprado says.
For Disaster Recovery, Put Your IT Eggs in Different Baskets
By James Cope
Four IT approaches that companies use to quickly recover when disaster strikes
The September 11 terrorist attacks fundamentally changed the way US CIOs think about disaster recovery.
“It’s no longer a matter of planning what to do should fire or flooding prevent access to buildings,” says Bob Fucito, vice president of crisis management and business continuity at investment banking firm BNP Paribas. Today, businesses have to prepare for the ultimate security risk: what to do when people and buildings are intentionally targeted and destroyed.
Fucito should know. His duties include managing disaster recovery for Paris-based BNP Paribas’ North American operations. And he says he’s thankful that his company’s executives supported the creation of a disaster recovery plan that emphasises distribution of IT resources — two years before the September 11 attacks. The company had to evacuate its New York City building after the attacks, but Fucito says having two separate data centres and a contract with a hot-site recovery provider put BNP Paribas in a better position to continue doing business.
BNP Paribas isn’t alone in thinking that having IT resources in one building or on a single network isn’t a good idea. Other major organisations, such as Boeing, United Air Lines, the Chicago Board of Trade and the US Postal Service, try to mitigate the risk to IT resources by distributing data, applications and network infrastructure. They also have redundant communications links at the ready.
All of those organisations have the same goal: to quickly recover or even seamlessly continue doing business when disaster strikes. But they have different ways to accomplish it. Here are four approaches that major companies are using to stay prepared.
1. Redundancy and multiple routes: UAL Loyalty Services, an online customer service unit of United Air Lines parent UAL Corporation, is installing duplicate systems at two company-owned and -operated data centres. Both are in the Chicago area, says Igor Rafalovsky, director of networking and security, but the facilities are geographically separated.
A metropolitan-area network capable of gigabit speeds, known as a GigaMAN, connects the two centres, Rafalovsky says. Moreover, each data centre is connected over T3 lines running to separate Private Network Access Points (P-NAP), which are Internet backbone connection points owned and operated by Internap Network Services in Seattle.
And even at the P-NAPs, traffic going to and from the two UAL data centres runs across multiple Internet backbones from different providers, such as Sprint, WorldCom and others. A P-NAP may have up to six or eight backbone providers online and available at any given time.
Both UAL data centres host Web servers, applications and databases. Disk storage is synchronised in real time over the GigaMAN, and both data centres are online all the time. “In the case of a catastrophic failure of one data centre, the other one just picks up the traffic, in many cases without interruption . . . or manual intervention,” Rafalovsky says.
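The active-active pattern Rafalovsky describes can be sketched in a few lines. All identifiers below are hypothetical; the point is simply that a health check steers each request to a surviving centre with no manual intervention, which is what synchronised storage makes possible.

```python
# Minimal sketch of an active-active pair: both data centres serve traffic,
# and a health probe routes requests away from a failed centre automatically.

DATA_CENTRES = ["chicago-dc1", "chicago-dc2"]  # hypothetical identifiers

def healthy(dc, probe):
    """True if the centre answers its health check; failures count as down."""
    try:
        return probe(dc)
    except Exception:
        return False

def route_request(request_id, probe):
    """Alternate the preferred centre for balance; fall through on failure."""
    offset = request_id % 2
    order = DATA_CENTRES[offset:] + DATA_CENTRES[:offset]
    for dc in order:
        if healthy(dc, probe):
            return dc
    raise RuntimeError("no data centre available")

# Simulate a catastrophic failure of dc1: all traffic shifts to dc2.
probe = lambda dc: dc != "chicago-dc1"
print(route_request(0, probe), route_request(1, probe))
```

Because disk storage is mirrored in real time, the surviving centre already holds current data when it picks up the load.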
2. Outsourced hot sites: When BNP Paribas IT employees evacuated their building in New York in response to the terrorist attacks, they moved to the company’s other data centre in New Jersey to continue operations. Even so, Fucito says his firm also has a contract with New York-based SchlumbergerSema to provide off-site hot sites.
Hot sites duplicate the mission-critical parts of a company’s IT systems in secure buildings miles away from the primary sites. IT workers can go to hot sites to initiate recovery or simply resume work.
John Kersley, SchlumbergerSema’s vice president of business recovery, describes how it works: A corporate customer configures its own data centres to automatically mirror data and applications to the appropriate hot-site recovery centre (or centres). That company’s IT employees are assigned physical positions (desks and workstations) at a specific centre and instructed on how to get there if there’s a crisis. When the company’s workers are in place at the recovery centre, it becomes a matter of patching the data through to the off-site desktops.
Hot sites are especially appealing to financial services organisations like BNP Paribas and the Board of Trade Clearing Corporation, the clearinghouse for the Chicago Board of Trade, which has a hot-site contract with SunGard Data Systems.
The concept also has value for major retailers. For example, Leeds, England-based ASDA Group Ltd — a chain of food and clothing superstores owned by Wal-Mart Stores — has an agreement with SchlumbergerSema to send select members of its IT staff to a global business recovery centre if a disaster closes ASDA’s own IT facilities.
3. Blend of internal and external redundancy: SunGard and SchlumbergerSema say the trend is toward using hot sites for disaster recovery. But Damian Walch, vice president of consulting at T-Systems, sees the trend heading in the opposite direction.
“Companies are looking at internalising their disaster recovery systems and moving away from hot-site providers,” Walch says. However, he acknowledges that the hot-site idea won’t go away any time soon and that disaster recovery strategies often involve a blend of approaches.
In fact, extremely large and diverse organisations, particularly those using mainframes in addition to PC servers, foster redundancy through a mix of multiple in-house data centres and mirrored hot sites.
Chicago-based Boeing, for example, has to consider the specific needs of business units and the communication challenges that come with having a multitude of far-flung locations.
“Distributed hot-site contracts tend to be more expensive with mainframe environments. We try to consolidate and centralise IT but also avoid the risk of too many megacenters . . . by having geographic separation [of IT facilities],” says Steve Guzek, Boeing’s program manager for disaster recovery.
Guzek maintains that focusing on networks is the key to eliminating single points of failure.
4. Satellite backup: Bob Otto, vice president of IT at the US Postal Service (USPS) in Washington, says he could see the smoke from his office after the aircraft struck the Pentagon on September 11. “We then evacuated our computer centre of our Washington facility and set it up for remote management from our Raleigh [NC] disaster centre and immediately instructed our data centres in California and Minnesota to begin backing up to Raleigh,” Otto says.
Then Otto’s group learned that the New York attacks had knocked out the frame-relay links connecting facilities in New York to the postal service’s wide area network. So the USPS pointed its VSAT satellite system towards New York, and the city’s post offices were almost immediately back on the network.
It was all part of the plan, says Larry Wills, manager of distributed computing for the USPS. While frame-relay land lines are the primary network connection to thousands of post offices across the US, the USPS has 11,000 VSAT installations nationwide, Wills says. The VSAT services are provided by SpaceNet.
Generally, the switch-over is automatic: When frame relay goes down, a satellite connection takes over. Wills says post offices generally don’t even know when it has happened.
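The switch-over logic Wills describes can be sketched as a small state machine. The class and link names below are assumptions for illustration: the land line is preferred whenever it is up, the satellite link takes over when it is not, and traffic falls back automatically on recovery.

```python
# Sketch of automatic primary/standby link switch-over: frame relay is the
# primary, VSAT the standby; the active link is re-evaluated on every status
# report from link monitoring.

class LinkManager:
    def __init__(self):
        self.links = {"frame_relay": True, "vsat": True}  # up/down state
        self.active = "frame_relay"

    def report(self, link, is_up):
        """Record a link's state and choose which link carries traffic."""
        self.links[link] = is_up
        if self.links["frame_relay"]:
            self.active = "frame_relay"   # prefer the land line when available
        elif self.links["vsat"]:
            self.active = "vsat"          # fail over to satellite
        else:
            self.active = None            # site is offline

mgr = LinkManager()
mgr.report("frame_relay", False)   # land line cut: satellite takes over
print(mgr.active)
mgr.report("frame_relay", True)    # land line restored: traffic moves back
print(mgr.active)
```

A transparent switch-over like this is why, as Wills notes, individual post offices generally never notice the change.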