Hadoop: How open source can whittle Big Data down to size

Techworld Australia caught up with Doug Cutting to talk about Apache Hadoop, a software framework he created for processing massive amounts of data

In 2011 ‘Big Data’ was, next to ‘Cloud’, the most dropped buzzword of the year. In 2012 Big Data is set to become a serious issue that many IT organisations across the public and private sectors will need to come to grips with.

The challenge essentially comes down to this: How do you store the massive amounts of often-unstructured data generated by end users and then transform it into meaningful, useful information?

One tool that enterprises have turned to for help with this is Hadoop, an open source framework for the distributed processing of large amounts of data.

Hadoop lets organisations "analyse much greater amounts of information than they could previously," says its creator, Doug Cutting. "Hadoop was developed out of the technologies that search engines use to analyse the entire Web. Now it’s being used in lots of other places."
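
A rough sketch of that programming model is below: the canonical MapReduce word count, written against Hadoop's Java API. The class names and file paths are illustrative only, and a real job would be packaged into a jar and submitted to a running cluster.

    // Minimal sketch of a Hadoop MapReduce job: count how often each word appears
    // in a set of input files stored in the distributed filesystem.
    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // The map step runs in parallel across the cluster, one task per input split.
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);  // emit (word, 1) for every token
                }
            }
        }

        // The reduce step receives every count for one word and sums them.
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation on each node
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory in HDFS
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }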


In January this year Hadoop finally hit version 1.0. The software is now developed under the aegis of the Apache Software Foundation.

"The releases coming this year will effectively become Hadoop 2.0," Cutting says. "We're going to see enhanced performance, high-availability and an increased variety of distributed computing metaphors to better support more applications. Hadoop's becoming the kernel of a distributed operating system for Big Data."

Hadoop grew out of Nutch, a project to build an open source search engine Cutting was involved in. Development of Nutch is also conducted under the patronage of the Apache Software Foundation.

"The Hadoop ecosystem now has more than a dozen projects around it," says Cutting. "This is a testament to the utility of the technology and its open source development model. Folks find it useful from the start. Then they want to enhance it, building new systems on top.

"Apache's community-based approach to software development lets users productively collaborate with other companies to build technologies from which they can all profitably share."

Hadoop setups are available from big names in the Cloud computing space, including Amazon (through Amazon Elastic MapReduce) and IBM; in December Microsoft announced a "limited preview" of Hadoop on its Windows Azure Cloud service. Hortonworks, a company set up by Yahoo (which runs a 42,000-node Hadoop environment and is a key driver of the project), and Cloudera, which employs Cutting as chief architect, also offer Hadoop-related services.

Cloudera offers a distribution of Big Data software called CDH (Cloudera's Distribution Including Apache Hadoop). "This is open source, Apache-licensed software," Cutting says. "Folks can develop their applications against these APIs without fear of ever being locked into paying any one vendor."
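
As a brief illustration of what coding against those open APIs looks like, the sketch below writes and reads a file through Hadoop's FileSystem interface. The path and file contents are hypothetical, and the cluster settings are assumed to come from a core-site.xml on the classpath.

    // Sketch: use the vendor-neutral Apache Hadoop filesystem API to write and read a file.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsHello {
        public static void main(String[] args) throws Exception {
            // Picks up cluster settings (e.g. the default filesystem URI) from core-site.xml.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/tmp/hello.txt");  // hypothetical HDFS path

            // Write a small file into the distributed filesystem.
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("hello, hadoop".getBytes(StandardCharsets.UTF_8));
            }

            // Read it back through the same open API.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());
            }
        }
    }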

The company sells support and licences for its proprietary software, Cloudera Manager, which helps deploy and monitor CDH. The Oracle Big Data Appliance, released in January, runs CDH.

"Appliances are a great way to get a customer in the door, but most folks end up buying a customised cluster," Cutting says. "Some folks may find the appliance itself to be the right solution, but more frequently people want something that's more suited to their particular uses.

"Folks tend to start with a small proof-of-concept system, perhaps 10 or 20 nodes. Once they've gained some experience with this then they have an idea of both how big their production system needs to be and what its bottlenecks are. This informs the balance of storage, compute, memory and networking that will serve them best.

"Over time, as workloads evolve and grow, folks may gravitate towards common configurations, but we're not yet seeing a lot of one-size-fits-all solutions."

Cutting says when he started Hadoop, which was named after his son's toy elephant, he didn't realise just how significant the project would end up being. "I thought it would probably be useful to lots of folks, but I didn't think much about how many or how they might use it," Cutting says. "I certainly didn't think that it would become the central component of a new paradigm for enterprise data computing."

However, the software is "ultimately the product of a community," he adds. "I contributed the name and parts of the software and am proud of these contributions. The Apache Software Foundation has been a wonderful home for my work over the past decade, and I am pleased to be able to help sustain it."

Cutting uses the example of a hypothetical large retailer to explain what Hadoop can do with an enterprise's data: "Instead of just being able to analyse national sales over the past month, with Hadoop it can analyse sales trends over many years. This lets them better manage pricing, inventory and other core aspects of their business: They get a higher resolution picture of their business.

"Similarly, credit card companies can better guess whether a transaction is fraudulent, banks can better guess whether someone is credit worthy, oil companies can better guess where to drill, and so on. In nearly every case they can use data they were formerly discarding to improve the quality and profitability of their products."

Cutting predicts continued exponential growth in Big Data analytics. "We're still in the steep part of the adoption curve and will be for at least a few more years," he says.

"It will be a while before growth merely tracks that of the larger economy. Developing economies like China and India will fuel continued growth in this space."

In the government sphere, adoption of Big Data technologies has been mixed, Cutting says: Intelligence communities have been early adopters, but other parts of government may not have even begun grappling with it.

"Even folks who are already using these technologies will continue to expand their use for years, incorporating data from new sources and finding new applications," he adds. "We're still at an early stage of the adoption curve.

"Most industries are currently dipping their toes into Big Data. The ones to watch are the industries we expect to grow the most. For example, healthcare and telecom create huge amounts of data that's not yet used as effectively as it could be."

Follow Rohan Pearce on Twitter: @rohan_p

Follow Techworld Australia on Twitter: @techworld_au

