CIO

Vendor View: Three steps to lowering IT costs

Roger Mannett, Marketing Director for NetApp in Australia and New Zealand, advises CIOs on how to do more with less -- without compromising business outcomes.

Storage is the number two budget item in data centre infrastructure, and will therefore be on the radar of cost-cutting executives. To prepare for almost inevitable budget cuts or freezes, CIOs must take pre-emptive measures to reduce storage costs while ensuring they have the infrastructure required to support the organisation’s return-to-growth strategy.

According to a recent Gartner report, IT departments should be focused on cost optimisation programs rather than cost cutting exercises. This means looking at ways of saving money through process improvement and better integration of IT into the business in a way that will enable innovation and future growth. Breakthroughs in optimisation will also include deploying new technology to permanently reset your cost structure at a lower level.

There are three key steps CIOs should take when determining how best to cut costs. First, determine which projects can be cancelled outright and then which projects can be deferred until economic times improve. This is also the time to look at ways of optimising vendors to reduce costs. For the projects you want to champion, it is vital to identify and remove operational redundancies to get more value from existing storage expenditures while deploying new technologies to maximise data storage in the least amount of space.

1. Cancel or defer projects

The first step when assessing how to reduce costs is to carefully scrutinise any planned projects and determine if they meet the organisation’s current business needs.

Decide if the project provides a quick and tangible return on investment. In the current economic climate, hard cost savings that can be achieved within a budget cycle of a quarter or a year will be much more persuasive than those that deliver ROI over a longer time frame. Projects that result in cost avoidance, such as deferring the need to build an expensive new data centre, are at the top of the approval list.

The total cost of ownership should be assessed for the life of the product or solution, rather than just the purchase price. The project should also be reviewed to see if it supports the overall short- and long-term business goals and fits within new IT budget realities.

Only when these questions have been answered can you determine a project’s viability. If the project doesn’t meet the criteria, it should be cancelled.

You may find that upon assessment of future projects, many meet the majority of the criteria necessary for approval, but still do not fit within current budget realities. These projects should be put on hold until budgetary conditions improve.


2. Optimise vendors

Most organisations use the products and services of multiple vendors over time. This can often lead to legacy and interoperability issues. For example, the storage environments of many organisations include products from a variety of vendors which are unable to interact with each other and require different management tools for each. Unfortunately, this complexity can even exist across product lines within the same vendor’s offering.

Not only does this introduce complexity into the environment, it can also be a source of wasted budget. Staff need to be trained to manage multiple protocols and multi-vendor environments. Technologies purchased to improve storage efficiency may only work across a portion of one vendor’s equipment and not that of another vendor.

CIOs need to look at ways to optimise multi-vendor assets and increase their return on assets (ROA), not just their return on investment (ROI). Where storage is concerned, management products such as NetApp’s V-Series provide advanced storage efficiency, data protection and data management features across legacy storage equipment from multiple vendors. This eliminates the need to purchase new storage assets and reduces complexity within the data centre.

3. Assess your IT environment for efficiency

In order to improve efficiency, enable consolidation and increase productivity, it is first necessary to assess your current IT environment to determine areas of redundancy. Processes can be improved to limit complexity. Tasks currently undertaken manually may be automated and staff members redeployed to more strategic roles. Existing technology may be better utilised and new technologies can be adopted to improve efficiency and cut costs.

For example, in the majority of organisations, storage is highly underutilised. On average, organisations use only between 25 and 40 per cent of their storage assets, which means between 60 and 75 per cent of storage space is wasted. Not only does the business end up purchasing storage it does not really need; the unused capacity also drains resources such as power and staff management time without adding any value.

It is possible to reduce the amount of storage required and achieve higher utilisation rates for existing assets by deploying storage efficiency technologies such as data deduplication, virtualisation, thin provisioning and thin cloning, caching technologies and SATA drives. These technologies can provide immediate hard cost savings in the form of reduced power and cooling requirements, negating the need to purchase more storage or build additional data centres.

Storage efficiency technologies can also provide soft cost savings such as increased staff productivity, faster time-to-market and improved business agility.

For example, efficiencies may be gained by rethinking an organisation’s application development and testing environments. Application development teams require storage resources, but we need to challenge the underlying assumptions about the size and provisioning of these resources. Rather than saving multiple physical copies during development and testing, new cloning technology allows developers to use virtual copies of data sets. This means the production environment can be duplicated and modified without adding any physical storage, resulting in fewer disks that need to be purchased, provisioned, powered and managed.
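Conceptually, these virtual copies behave like copy-on-write clones: the clone shares the production data set for reads, and only the blocks that developers actually change consume new space. The following is a minimal illustrative sketch in Python (a toy model, not NetApp’s implementation):

```python
class CowClone:
    """Toy copy-on-write clone of a production data set.

    Reads fall through to the shared base; writes land in a private
    delta, so the clone's physical cost is only the changed blocks.
    """

    def __init__(self, base):
        self.base = base    # production data set, shared read-only
        self.delta = {}     # clone-private modifications

    def read(self, key):
        # Prefer the clone's own version; otherwise read the shared base.
        return self.delta.get(key, self.base.get(key))

    def write(self, key, value):
        # The base is never touched: production stays intact while
        # developers modify the clone freely.
        self.delta[key] = value
```

A test team could spin up many such clones of one production data set, each costing only the blocks it modifies, which is why cloning avoids purchasing, provisioning and powering duplicate disks.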

Asking the right questions can help uncover other hidden costs whose removal can result in big savings and cost avoidance. The application development and test environment above offers some of these hidden cost opportunities. One suggestion is to review your assigned infrastructure to see whether servers and storage are running on dual power supplies. If so, they are drawing nearly double the power in order to provide redundancy that this function may not need. By simply unplugging the second power supply you can achieve power cost savings, avoid adding to your UPS system load, and keep your data centre footprint from increasing.


Conclusion

Now is the time to take a long, hard look at your IT environment. It is possible to do more with less, without compromising on business outcomes. Examine closely the areas that can be cut or deferred; scrutinise your vendor solutions and see who works most effectively with others; and assess your storage environment, as it’s an area where significant cost savings and efficiency gains can be made.

Combining these tactics will help to ensure your organisation’s IT infrastructure will be ready and available to allow the business to grow and innovate when economic conditions improve.

Sidebar: Storage Efficiency Technologies

Dual-parity RAID & Caching technology: allows the use of higher density, lower cost SATA drives by providing production resiliency and performance in a cost-effective way. Dual-parity RAID safeguards data from double-disk failure without impacting performance. Coupling caching technology, such as NetApp’s PAM card, with cheaper SATA drives can improve their performance to Fibre Channel levels. Together, these two technologies enable cost savings without compromising performance or data protection.

Data deduplication: is the automated process of removing duplicate data from volumes of storage. Deduplication assigns a unique signature to every block of data in a given volume. If more than one data block with the same signature is discovered elsewhere in the volume, all but the original are deleted and replaced with a much smaller reference (pointer) to the original. Three of the most common applications ripe for deduplication are virtual machines, user file servers and back-up volumes. At its most effective, data deduplication can enable organisations to reclaim up to 95 per cent of storage space.
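The signature-and-pointer mechanism can be sketched in a few lines of Python. This is an illustrative toy, not a vendor implementation; it assumes a fixed 4 KB block size and uses a SHA-256 hash as the block signature:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this sketch


def deduplicate(data):
    """Split data into fixed-size blocks and keep each unique block once.

    Returns a store of unique blocks keyed by signature, plus an ordered
    list of signatures (the pointers) that reconstructs the original.
    """
    store = {}      # signature -> block contents (stored once)
    pointers = []   # per-block references into the store
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        sig = hashlib.sha256(block).hexdigest()  # unique signature per block
        if sig not in store:
            store[sig] = block   # first occurrence: keep the data
        pointers.append(sig)     # a duplicate costs only this pointer
    return store, pointers


def rehydrate(store, pointers):
    """Rebuild the original volume by following the pointers."""
    return b"".join(store[sig] for sig in pointers)
```

On data with many repeated blocks (virtual machine images are the classic case), the store holds a fraction of the raw volume while the pointer list preserves full reconstructability.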

Thin provisioning: eliminates the common practice of over-provisioning storage based on anticipated future needs. Instead it provides ‘just-in-time’ provisioning – the flexible allocation of required storage to an application on-the-fly from a consolidated networked storage pool. A larger pool of unused storage enables administrators to defer the purchase of new hardware or even decommission older arrays. This can both reduce storage costs directly and yield the ‘soft savings’ from reductions in data centre space and associated power and cooling requirements.
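The "allocate on first write" idea behind thin provisioning can be illustrated with a toy model (hypothetical classes, not any vendor’s API): each volume advertises a large logical size, but physical blocks are drawn from the shared pool only when data is actually written:

```python
class ThinPool:
    """Toy shared storage pool that backs thinly provisioned volumes."""

    def __init__(self, physical_blocks):
        self.free = physical_blocks  # physical capacity actually owned

    def create_volume(self, logical_blocks):
        # Creating a volume consumes no physical space up front.
        return ThinVolume(self, logical_blocks)


class ThinVolume:
    """Volume that reports a large logical size but allocates lazily."""

    def __init__(self, pool, logical_blocks):
        self.pool = pool
        self.logical_blocks = logical_blocks
        self.blocks = {}  # only written blocks consume physical space

    def write(self, block_no, data):
        if block_no >= self.logical_blocks:
            raise IndexError("write beyond logical size")
        if block_no not in self.blocks:
            if self.pool.free == 0:
                raise RuntimeError("pool exhausted: time to buy disk")
            self.pool.free -= 1   # allocate from the pool on first write only
        self.blocks[block_no] = data
```

Because many volumes share one pool and each allocates only what it actually writes, administrators can watch the pool’s real free space and defer hardware purchases until genuine demand appears.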

Unified Storage: all use cases for storage (production; development and testing; back-up and recovery; disaster recovery; and archive and compliance) and all common storage protocols (such as Fibre Channel, iSCSI and NFS) are incorporated into a single, unified operating platform. This enables massive consolidation opportunities and significantly simplifies data management and staff training. When combined with the storage efficiency features listed above, a Unified Storage deployment provides the greatest opportunity for storage efficiency.



Roger Mannett is Marketing Director for NetApp in Australia and New Zealand. In this role, Roger is responsible for all aspects of NetApp’s marketing program in the region, encompassing the direct sales and channels environments, corporate branding, product management, media and analyst relations as well as corporate events. For more information visit www.netapp.com.