4 Tech Innovations That Improve Data Center Scalability

Growth is normally a boon for any business. Servers hum faster when an ecommerce site attracts more customers (and more credit card transactions). When storage requirements for a new business that handles documentation for large companies suddenly escalate, executives high-five each other.

Scaling can be so costly, though, that fast growth isn't always a positive. Fortunately, new technologies can help a company ramp up quickly and efficiently, removing some of the pain of expanding a data center. Rather than forcing a major capital outlay that offsets new revenue, these innovations make scaling up a data center to meet demand less of a drain.

1. Modular Data Center Additions, But Not Whole Modules

Most large companies know they can turn to Microsoft, HP and others to purchase entire data center modules. These modules provide a way to scale quickly, but usually at a high cost.

Analysis: Public Sector CIOs Face Data Center Consolidation Roadblocks

Tate Cantrell is CTO of UK-based Verne Global, whose data center in Iceland uses a modular approach - but the company doesn't have to add a module with hundreds of servers.

A new addition might consist of multiple racks, or as few as 30 servers, assembled off-site so that the cooling systems, power, cabinets and servers are all ready to go by the time they are installed. (The company can also install racks with up to 10,000 servers for larger companies.)

[Related: It's Twilight for Small In-House Data Centers]

One key component: a power busway that provides greater flexibility for adding modules. Major electrical manufacturers, including Siemens, Schneider, PDI and Universal Electric, offer busway solutions, Cantrell says. Verne Global standardized on Universal Electric's Starline infrastructure for its modular data centers, he says, given the company's track record and its monitoring options, which let Verne Global provide "very granular feedback" to customers.

2. Power Enterprise Pools: 'Elastic Capacity'

One challenge in scaling a data center is knowing when to invest in servers and how many to add. There are often spikes in demand, but it's difficult to predict when they will occur - and to know what to do with the extra capacity once the spike passes.

One answer: Enterprise Pools, a scaling infrastructure from IBM that works with IBM Power servers. "Demand for data applications is driving clients to look at continuous availability," says Steve Sibley, the director of IBM Power Systems. "When new apps roll out, they need to be able to scale rapidly or recover IT resources and scale down. There needs to be an elastic capability similar to the cloud and a way to not overpay for capabilities."

Related: How to Evaluate High Availability Options for Virtualized IT Environments

Commentary: The Dangers of Disconnected Data

The idea of managing down to the processor level isn't a new concept. What is new is that data centers can add, move, and remove virtual processors and memory to handle spikes in usage or maintenance. They don't have to pay for extra capacity they aren't using; they pay only for what they need, and only a portion of the full cost of processors and memory is due up front. IBM estimates the cost of these pools at $0.67 per hour, based on per-day costs for processor and memory allocations. Data center operators can manually adjust the service levels for an application as often as they want, then use those service levels to drive automation.
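
To make the economics concrete, here's a minimal sketch in Python of the pay-for-what-you-activate idea, not IBM's actual Enterprise Pools tooling: a site keeps a baseline of permanently owned cores and activates extra capacity only while a spike lasts. Only the $0.67-per-hour figure comes from IBM's estimate above; the core counts, the amortized cost of owned capacity and the workload profile are hypothetical.

```python
# Illustrative only: a toy cost model of "elastic capacity", not IBM's
# Enterprise Pools API. The $0.67/hour rate is IBM's estimate quoted above;
# every other number is a hypothetical workload profile.
HOURLY_RATE = 0.67               # cost per extra activated core-hour
BASELINE_CORES = 16              # cores the site owns and runs permanently
PEAK_CORES = 64                  # cores needed during a demand spike
OWNED_COST_PER_CORE_HOUR = 0.25  # hypothetical amortized cost of owned capacity


def elastic_cost(hours_at_peak: float, hours_total: float) -> float:
    """Pay for the extra cores only while the spike lasts."""
    baseline = BASELINE_CORES * hours_total * OWNED_COST_PER_CORE_HOUR
    burst = (PEAK_CORES - BASELINE_CORES) * hours_at_peak * HOURLY_RATE
    return baseline + burst


def fixed_cost(hours_total: float) -> float:
    """Own enough hardware to cover the peak around the clock."""
    return PEAK_CORES * hours_total * OWNED_COST_PER_CORE_HOUR


# A month (720 hours) in which demand spikes for only 40 hours.
print(f"elastic:        ${elastic_cost(40, 720):,.2f}")  # ~$4,166
print(f"always-on peak: ${fixed_cost(720):,.2f}")        # ~$11,520
```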

3. Object Storage: No More Playing With Blocks

When it comes to data center scale, traditional file storage systems can be limiting. Think of an upstart social network. When there are a few hundred users, the storage system can keep up with the images and videos posted online. Scaling to a few million users suddenly becomes a management chore, as data center managers have to juggle multiple volumes.

[Related: Explosion in "Big Data" Causing Data Center Crunch]

"File systems are designed for people to collaborate on the same data without modifying it at the same time," says Tom Leyden, a spokesman for DataDirect Networks. "If two people access a Word document at the same time, they will lock the file. Those locking mechanisms make it complex to scale the file system. A file system is slow when it's locked."

Related: Copying Grows Along With Data, Driving Attempts to Rein It in

The answer, says Leyden, is object storage. The idea is to use a simplified ID system for files: the ID is valid across multiple storage volumes and points to where the object is stored. Metadata is also attached to the object to make it searchable across volumes. There's no hierarchy and no locking mechanism, says Leyden. This helps with scaling because object storage can create "clusters" of data that grow as a company grows, all under a single storage system that's far easier to manage.
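
As a rough illustration of that flat, lock-free model, here is a Python sketch under assumptions - a toy store, not DataDirect Networks' product: every object gets a cluster-wide ID, metadata travels with it for searching, and there are no directories or file locks to coordinate.

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class StoredObject:
    data: bytes
    metadata: dict = field(default_factory=dict)


class ObjectStore:
    """Toy flat-namespace store; each 'node' stands in for a storage volume."""

    def __init__(self, nodes: int = 4):
        self.nodes = [{} for _ in range(nodes)]

    def _node_for(self, object_id: str) -> dict:
        # The ID alone determines where the object lives, so there are no
        # paths or per-volume mounts for an administrator to juggle.
        return self.nodes[int(object_id, 16) % len(self.nodes)]

    def put(self, data: bytes, **metadata) -> str:
        """Store an object and return its cluster-wide ID."""
        object_id = uuid.uuid4().hex
        self._node_for(object_id)[object_id] = StoredObject(data, metadata)
        return object_id

    def get(self, object_id: str) -> bytes:
        """Retrieve by ID; there is no hierarchy and no lock to take."""
        return self._node_for(object_id)[object_id].data

    def search(self, **criteria) -> list:
        """Find object IDs by metadata across every node in the cluster."""
        return [oid
                for node in self.nodes
                for oid, obj in node.items()
                if all(obj.metadata.get(k) == v for k, v in criteria.items())]


# Scaling means adding nodes to the pool, not managing more file-system volumes.
store = ObjectStore(nodes=8)
photo_id = store.put(b"...jpeg bytes...", owner="user42", kind="image")
assert store.get(photo_id) == b"...jpeg bytes..."
print(store.search(owner="user42"))
```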

4. Auto-tiering: Scale Up, Scale Down

Data center managers need to automatically adjust storage as application needs change. The goal is to accommodate high-performance apps, but the challenge is knowing when to scale up for demand and then when to scale down.

Auto-tiering analyzes how frequently application data is actually used. In an infrastructure that uses Dell EqualLogic arrays, for example, 80 percent of data becomes inactive after a month. Auto-tiering moves this legacy data to the lowest-cost storage option, rather than keeping it on faster drives longer than necessary.

Analysis: SSDs Still Maturing; New Memory Tech Still 10 Years Away

Forrester: Skip Data Tiering, Go Directly to All-SSD Storage

One of the most recent changes is how auto-tiering takes advantage of the speed boost that flash storage provides. "Rather than using spinning disk drives as the latest greatest drive technology, which are mechanical and cause heat and vibration, we use the latest class of solid-state drives," says Bob Fine, director of storage product management at Dell. "The new technology leverages solid state drives - data coming in that's performance oriented, we place on solid state."

[Related: How Cloud Computing Is Changing Data Center Designs and Costs]

The innovation: Auto-tiering uses only a small amount of flash for the high-performance apps. Dell's auto-tiering can also distinguish between higher-cost single-level cell (SLC) flash and slower, higher-capacity multi-level cell (MLC) flash. These kinds of smart adjustments occur automatically and reduce the cost of using flash.
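
A minimal sketch of such a tiering policy in Python, assuming a 30-day inactivity threshold drawn from the EqualLogic figure above; the tier names, access counters and hot-data cutoff are hypothetical, and this is not Dell's actual algorithm.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical thresholds; real array policies and tier names will differ.
INACTIVE_AFTER = timedelta(days=30)   # article: ~80% of data is inactive after a month
HOT_ACCESSES_PER_DAY = 100            # arbitrary cutoff for "performance oriented" data


def choose_tier(last_access: datetime, accesses_per_day: float,
                now: Optional[datetime] = None) -> str:
    """Pick a storage tier for a block of data based on how it is being used."""
    now = now or datetime.now()
    if now - last_access > INACTIVE_AFTER:
        return "nearline_disk"   # cold legacy data drops to the cheapest tier
    if accesses_per_day >= HOT_ACCESSES_PER_DAY:
        return "slc_flash"       # small, costly, fastest tier for hot data
    return "mlc_flash"           # active but not performance-critical data


# Re-run the policy periodically and migrate any block whose tier changed.
blocks = [
    {"id": "db-log", "last_access": datetime.now(), "accesses_per_day": 250},
    {"id": "q1-archive", "last_access": datetime.now() - timedelta(days=45),
     "accesses_per_day": 0.1},
]
for b in blocks:
    print(b["id"], "->", choose_tier(b["last_access"], b["accesses_per_day"]))
```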

John Brandon is a former IT manager at a Fortune 100 company who now writes about technology. He has written more than 2,500 articles in the past 10 years. You can follow him on Twitter @jmbrandonbb.
