Amazon Elastic MapReduce
- 11 May, 2009 09:00
Have you got a few hundred gigabytes of data that need processing? Perhaps a dump of radio telescope data that could use some combing through by a squad of processors running Fourier transforms? Or maybe you're convinced some statistical analysis will reveal a pattern hidden in several years of stock market information? Unfortunately, you don't happen to have a grid of distributed processors to run your application, much less the time to construct a parallel processing infrastructure.
Well, cheer up: Amazon has added Elastic MapReduce to its growing list of cloud-based Web services. Currently in beta, Elastic MapReduce uses Amazon's Elastic Compute Cloud (EC2) and Simple Storage Service (S3) to implement a virtualized distributed processing system based on Apache Hadoop.
Hadoop is built around the MapReduce framework. The mechanics of MapReduce are well documented in a paper by J. Dean and S. Ghemawat [PDF], and a full treatment is beyond the scope of this article; instead, I'll illustrate by example.
Suppose you have a set of 10 words and you want to count the number of times those words appear in a collection of e-books. Your input data is a set of key/value pairs, the value being a line of text from one of the books and the key being the concatenation of the book's name and the line's number. This set might be a few megabytes -- or many gigabytes; MapReduce doesn't much care about size.
You write a routine that reads this input, a pair at a time, and produces another key/value pair as output. The output key is a word (from the original set of 10) and the associated value is the number of times that word appears in the line. (Zero values are not emitted.) This routine is the map part of MapReduce. Its output is referred to as the intermediate key/value pairs.
The intermediate key/value pairs are fed to another function (another "step" in the parlance of MapReduce). For this step, you write a routine that iterates through the intermediate data, sums up the values, and returns a single pair whose key is the word and whose value is the grand total. You don't have to worry about grouping the results of like keys (i.e., gathering all the intermediate key/values for a given word), because Hadoop does that grouping for you in the background.
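To make the two steps concrete, here is a minimal Python sketch of such a pair of functions. The word list and the function names are my own illustrations, not part of any Hadoop API:

```python
from collections import defaultdict

TARGET_WORDS = {"whale", "ship", "sea"}  # hypothetical subset of the 10 words

def map_fn(key, line):
    """Map step: emit (word, count) pairs for each target word on this line.

    key is "<book name>:<line number>"; it isn't used here, but the
    framework passes it along with every input pair.
    """
    counts = defaultdict(int)
    for word in line.lower().split():
        if word in TARGET_WORDS:
            counts[word] += 1
    # Zero counts are simply never emitted.
    return [(word, n) for word, n in counts.items()]

def reduce_fn(word, counts):
    """Reduce step: sum all intermediate counts for one word."""
    return (word, sum(counts))
```

Hadoop calls your map function once per input pair and, after grouping, your reduce function once per distinct intermediate key.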
These two steps, the map function and the reduce function, comprise what Amazon Elastic MapReduce refers to as a "job flow." Admittedly, this is an oversimplification, because job flows involve other configuration parameters (such as where you get the input data and where you put the output), and you can define additional steps in the process, but that's the basic idea.
As a result, a programmer building a Hadoop-powered MapReduce system can focus on the comparatively simple job of crafting the individual functions that process one key/value pair at a time. Hadoop does the legwork of carving the input data into initial key/value pairs; starting multiple map function instances; feeding them input data; gathering, sorting, and ordering the intermediate key/value pairs; launching reduce instances; feeding them the properly arranged intermediate data; and -- finally -- delivering the output. And all the while, Hadoop monitors the progress of map and reduce tasks and automatically restarts "dead" ones. Whuf.
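That legwork can be mimicked, in miniature, with a toy driver. This is only a sketch of the grouping behavior, not how Hadoop is actually implemented -- the real framework distributes these phases across machines:

```python
from itertools import groupby
from operator import itemgetter

def run_job(map_fn, reduce_fn, inputs):
    """Toy single-process imitation of a MapReduce job: run the map step
    over every input pair, sort the intermediate pairs by key (the
    stand-in for Hadoop's shuffle), then hand each group of values for a
    key to the reduce step."""
    intermediate = []
    for key, value in inputs:
        intermediate.extend(map_fn(key, value))
    intermediate.sort(key=itemgetter(0))
    return [reduce_fn(k, [v for _, v in group])
            for k, group in groupby(intermediate, key=itemgetter(0))]
```

Plug in any map and reduce functions with the signatures described above, and the driver produces one output pair per distinct intermediate key.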
Hadoop in the cloud
To access Amazon's Elastic MapReduce, your first stop is your Amazon Web Services account page (assuming you have an account with AWS), where you must sign up for the Elastic MapReduce service. Then, head on over to the AWS Management Console and log in. You'll find that the AWS Console -- which had been a control panel for Amazon's EC2 only -- displays a new Amazon Elastic MapReduce tab. Click the tab, and you are transferred to the Job Flows page, from which you can monitor the status of current job flows, as well as examine details of previous (terminated) job flows.
To define a new job flow, click the Create New Job Flow button. This sends you through a series of windows in step-by-step fashion. You fill in textboxes to define the location of your input data, where you want your output data, and paths to your map and reduce functions. All of these locations must exist in Amazon S3 buckets; the output location is created when the job flow concludes. Consequently, it's a good idea to have a utility for transferring data to and from S3 on hand. I recommend the excellent S3Fox Organizer.
Amazon Elastic MapReduce allows for two kinds of job flows: custom JAR and streaming. A custom JAR-style job flow expects your map and reduce functions to be compiled Java classes stored in JAR files. Because the Hadoop framework is itself Java-based, a custom JAR job flow generally provides better performance. A streaming-type job flow, on the other hand, lets you write your map and reduce functions in non-Java languages such as Python, Ruby, and Perl. The functions of a streaming job flow read input data from stdin and send output to stdout. So, data flows in and out of the functions as strings, and -- by convention -- a tab separates the key and value of each line.
Once you've specified the whereabouts of your job flow's components, you identify the quantity and processing power of the EC2 instances on which the job will execute. You can select up to 20 EC2 instances; any more than that, and you have to fill out a special request form. Your choice of compute instances ranges from Small to High-CPU Extra Large. Check the Amazon documentation for a complete description of each instance type.
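The streaming convention is easy to sketch in Python. The generator functions below are my own names; in practice you would upload the mapper and reducer as two separate scripts, each reading stdin and writing stdout:

```python
import sys

def stream_map(lines):
    """Streaming mapper: read raw text lines, emit 'word<TAB>1' per word."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def stream_reduce(lines):
    """Streaming reducer: Hadoop delivers input sorted by key, so all the
    counts for one word arrive as an adjacent run; sum each run and emit
    'word<TAB>total'."""
    current, total = None, 0
    for line in lines:
        word, value = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = word, 0
        total += int(value)
    if current is not None:
        yield f"{current}\t{total}"

if __name__ == "__main__":
    # Hypothetical role switch so one file can serve as either script.
    step = stream_reduce if "--reduce" in sys.argv else stream_map
    for out in step(sys.stdin):
        print(out)
```

Because everything moves as tab-delimited text, the same scripts can be tested locally with an ordinary shell pipeline and `sort` standing in for Hadoop's shuffle.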
A review step follows. Once you approve your configuration, your job is launched, and you return to the Job Flows page where the job's progress is monitored. When the job completes, your output data will be stored in the S3 bucket you specified.
Users repelled by Web-based graphical management consoles (such as the AWS Management Console) will be happy to discover that Elastic MapReduce can also be driven from a command-line interface. The interface is written in Ruby (a free download) and provides a single command that sports a battalion of parameters. You can create job flows, define inputs, specify map and reduce functions, and generally do anything covered in the AWS Management Console.
Personal distributed computing
Setting up an Amazon Elastic MapReduce job flow is remarkably easy. New users should run one of the supplied example applications to familiarize themselves with the complete process. I would also recommend setting the optional parameter for generating log files. The resulting logs are comprehensive and can be confusing if you're new to Hadoop, but they helped me track down repeated failures in my first attempts.
Amazon claims to have tweaked the behavior of its implementation of Hadoop to work optimally with S3. Amazon was guarded about the details of this tweaking, so we'll have to take the company at its word as to the benefits of the optimizations. Nevertheless, if you have a large-scale distributed processing problem but a small-scale budget, you should familiarize yourself with Hadoop, then take Amazon's Elastic MapReduce for a spin.