In-memory computing

In-memory computing is making its way out of R&D labs and into the enterprise, enabling real-time processing and intelligence

The explosion in data volumes collected by many organisations has brought with it an accompanying headache: putting that data to gainful use.

Businesses increasingly need to make quick decisions, and pressure is mounting on IT departments to provide solutions that deliver quality data much faster than has been possible before. The days of trapping information in a data warehouse for retrospective analysis are fading in favour of event-driven systems that can provide data and enable decisions in real time.

Indeed, real-time computing has become a catchcry across the technology industry. A hypothetical example is a retailer that can monitor a customer’s real-time behaviour in store or on a website, draw on historical spending patterns from its loyalty system, and make offers the customer might respond to in that moment.

Such a scenario has long been dreamed of, but it is being made possible today for retailers and other industries thanks in part to a technology known as in-memory computing.

In-memory computing works by bringing data physically closer to the central processing unit.

Chip manufacturers have been on this path for some time with the integration of Level 1 and Level 2 caches into microprocessors, since data held in Level 1 cache can be accessed most quickly. Moving out through the cache levels usually trades speed for greater storage capacity.

In-memory technology follows the same principles, moving data off disk and into main memory. This eliminates the disk-seek operation otherwise required for each data look-up, significantly boosting performance.
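As a rough illustration (not drawn from any vendor's product; the file name, record layout and timing loop below are assumptions, and the operating system's own file cache narrows the gap at this small scale), the Python sketch contrasts seeking into a file for every look-up with reading the same records from a dictionary already held in main memory.

    import os
    import struct
    import time

    RECORD = struct.Struct("<q32s")   # 8-byte key + 32-byte payload per record
    PATH = "records.bin"              # illustrative file name
    N = 100_000

    # Write N fixed-width records to a file standing in for "data on disk".
    with open(PATH, "wb") as f:
        for key in range(N):
            f.write(RECORD.pack(key, b"x" * 32))

    def disk_lookup(f, key):
        # Seek to the record's offset and read it back for every look-up.
        f.seek(key * RECORD.size)
        return RECORD.unpack(f.read(RECORD.size))[1]

    # Load everything into main memory once; later look-ups never touch the file.
    with open(PATH, "rb") as f:
        table = {key: payload for key, payload in RECORD.iter_unpack(f.read())}

    with open(PATH, "rb") as f:
        start = time.perf_counter()
        for key in range(0, N, 7):
            disk_lookup(f, key)
        disk_time = time.perf_counter() - start

    start = time.perf_counter()
    for key in range(0, N, 7):
        table[key]
    mem_time = time.perf_counter() - start

    print(f"file look-ups: {disk_time:.4f}s   in-memory look-ups: {mem_time:.4f}s")
    os.remove(PATH)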

The idea of running databases in memory is nothing new, and was one of the foundations of the business intelligence product QlikView, released by QlikTech way back in 1997. More recently other technology companies have jumped on the bandwagon, notably SAP and TIBCO. What is making in-memory so popular now is that plunging memory prices have made it economical for a wider range of applications.

Gartner’s managing vice-president for business intelligence and data management, Ian Bertram, says the key application for in-memory technology today remains business intelligence, where it enables analysis to be conducted on the fly, with faster refreshing of the underlying data.

“It’s creating a data warehouse in-memory, which is a much faster technology than doing stuff on disk,” Bertram says. “Disk clearly has a much greater capacity, but it is slower. In memory, it is instantaneous.

“The value comes in for people who have to make decisions really quickly, as they don’t have to wait 12 hours for an OLAP [online analytical processing] cube to be built anymore.”

As a pioneer of in-memory technology, QlikView is used by a range of Australian companies for business intelligence, including the packaging company Amcor, which is using it to make better decisions on the profitability of its delivery operations.

QlikTech’s senior vice president of products, Anthony Deighton, claims the in-memory architecture has enabled his company to build a product that focuses on ease of use.

“When users interact with our product, they can click and find their own path through the data, and do that interactively and at high speed,” he says.

The marketplace for in-memory products is rapidly becoming crowded. SAP co-founder, Hasso Plattner, has been driving the development of in-memory computing through the eponymous Institute that he founded at the University of Potsdam in 1998. Its efforts were first unveiled at SAP’s annual SAPPHIRE conference in 2009. This year, SAP announced the High-performance Analytical Appliance (HANA) project as a roadmap for in-memory computing.

About 20 companies were invited to join the trial, but more than 40 have now done so, and SAP will be selling in-memory appliances by the end of 2010.

SAP Australia and New Zealand products and solutions group director, John Goldrick, says that in-memory data look-up times are further reduced through the data being stored in a column format (as is the case with SAP’s Sybase IQ database) rather than rows, which means that for many operations only the contents of one column need to be read, not the entire table.

As each column comprises records of the same data type and size, the database can be compressed efficiently. According to Goldrick, about 40 per cent of data storage is no longer required, along with about 80 per cent of data look-up activity. In one instance, he says, a 1.8TB database was compressed down to just 70GB.
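A small sketch can make the column-store argument concrete. The Python below is purely illustrative rather than a description of Sybase IQ’s internals; the toy table, field names and compression step are assumptions. It shows that summing one field touches only that field’s bytes in a column layout, and that a single-type, low-cardinality column compresses well.

    import random
    import struct
    import zlib

    random.seed(0)
    N = 100_000

    # A toy sales table: (customer_id, region_code, amount_cents).
    rows = [(random.randrange(5_000), random.randrange(8), random.randrange(10_000))
            for _ in range(N)]

    # Row layout: whole records are stored together, so summing one field
    # still means scanning the bytes of every row.
    row_bytes = b"".join(struct.pack("<iii", *r) for r in rows)

    # Column layout: each field is stored contiguously on its own.
    regions = struct.pack(f"<{N}i", *(r[1] for r in rows))
    amounts = struct.pack(f"<{N}i", *(r[2] for r in rows))

    # Summing the amounts reads only the amounts column (a third of the bytes here).
    total = sum(struct.unpack(f"<{N}i", amounts))

    # A single-type, low-cardinality column (8 distinct region codes) compresses heavily.
    ratio = len(zlib.compress(regions)) / len(regions)

    print(f"total amount: {total}")
    print(f"bytes scanned, row layout: {len(row_bytes)}  column layout: {len(amounts)}")
    print(f"region column compressed to {ratio:.1%} of its raw size")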

“All of a sudden you are moving down to being able to use raw data faster than you could ever use the aggregated data, so the whole processing time becomes faster,” Goldrick says. “We did a study and worked out that we could hold the entire system of all but four of our largest customers on one blade, and get the processing speed to be 10,000 times faster than it currently is from disk.”

Most in-memory BI systems today draw data from an existing source such as a data warehouse. Reporting tools can then be pointed at the in-memory database to generate reports.
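As a hedged sketch of that pattern, the Python below uses SQLite as a stand-in for both the existing source and the in-memory layer; the warehouse.db file, the sales table and its columns are invented for illustration. The source is copied into an in-memory database once, and reporting queries are then run against the copy rather than the source system.

    import os
    import sqlite3

    # Stand-in for the existing warehouse: an on-disk SQLite file with invented data.
    source = sqlite3.connect("warehouse.db")
    source.executescript("""
        DROP TABLE IF EXISTS sales;
        CREATE TABLE sales (store TEXT, product TEXT, amount REAL);
        INSERT INTO sales VALUES ('Sydney', 'widgets', 120.0),
                                 ('Sydney', 'gadgets', 80.5),
                                 ('Melbourne', 'widgets', 200.0);
    """)
    source.commit()

    # Copy the source into an in-memory database once (Connection.backup, Python 3.7+) ...
    mem = sqlite3.connect(":memory:")
    source.backup(mem)
    source.close()

    # ... then point reporting queries at the in-memory copy.
    for store, total in mem.execute(
            "SELECT store, SUM(amount) FROM sales GROUP BY store ORDER BY store"):
        print(store, total)

    os.remove("warehouse.db")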

Transactional reporting

The next step is to move transactional systems into memory, where transaction information can be updated and analysed in real time. In June, TIBCO released the ActiveSpaces Suite, a series of new in-memory products for high-speed data management that provide fast shared memory so distributed applications can exchange and process real-time data more quickly.

Chairman and chief executive, Vivek Ranadivé, says it has immediate application in fields where there are high numbers of transactions, such as airlines, banking or utility smart grids.

“You can analyse things as they happen in your transaction system in real time,” Ranadivé says. “There is no extracting and translating the data into your data warehouse to run your analytics.”
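The sketch below illustrates the general idea in plain Python rather than ActiveSpaces itself; the event fields, the 500-unit threshold and the simulated stream are assumptions. Each incoming transaction updates an in-memory aggregate and can trigger an action immediately, with no extract-and-load step into a warehouse in between.

    from collections import defaultdict

    spend_by_customer = defaultdict(float)  # in-memory state, no disk round trip

    def on_transaction(event):
        """Called for every incoming transaction event."""
        customer = event["customer"]
        spend_by_customer[customer] += event["amount"]
        # React immediately, while the customer is still "in the moment".
        if spend_by_customer[customer] > 500:
            print(f"offer triggered for {customer}: "
                  f"running spend {spend_by_customer[customer]:.2f}")

    # Simulated event stream standing in for the live transaction system.
    stream = [
        {"customer": "C1", "amount": 320.0},
        {"customer": "C2", "amount": 40.0},
        {"customer": "C1", "amount": 250.0},  # pushes C1 over the threshold
    ]
    for event in stream:
        on_transaction(event)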

His vision is to enable corporations to evolve from being transaction-driven businesses to being event-driven ones, where events within the organisation can trigger actions in real time based on existing data, a process he describes as the ‘two-second advantage’.

“The two-second advantage is about having a little bit of the right information in the right context just a little beforehand — whether it is two seconds, two hours or even two days,” says Ranadivé. “By sensing what is happening around them, businesses can constantly adjust and react a little ahead of the competition or in anticipation of a customer’s wants and needs.”

Underlying this is in-memory architecture, which he says offers fast, shared memory to accelerate the exchange and processing of real-time data and events. Hence, in-memory technology is generally spoken of in the same breath as the broader movement towards real-time information processing systems.
