Sunday, November 21, 2010

Making Hadoop more "functional" with Scala

Introduction


After spending more and more time working with Scala I found myself looking for new domains to apply the language to. One candidate was a long-standing issue with the mechanism Hadoop uses to express map, reduce and/or combine functions. These functions are expressed within the framework via a pair of base classes, Mapper and Reducer; jobs are then configured with class references to concrete implementations of these types.

The problem


So what's the problem? These classes largely exist to define a single function that performs either a map or a reduce operation. Both Mapper and Reducer include setup() and cleanup() methods (as well as a run() method for changing runtime behaviour if needed), but the heavy lifting for each is performed within a single method (map() and reduce() respectively). Strip away the overhead and it's clear that in a fair number of cases we really only need an anonymous block of code to implement our map or reduce function. At least some of the setup work could also go away if that anonymous block of code could somehow "contain" the environment that was present when the function was defined.

It should be obvious that what's being described here is a closure. Scala supports closures, but so do Groovy, JRuby and several other JVM languages. Why prefer Scala to these options? It's largely a matter of typing; my belief was that Scala's statically typed closures would make for easier integration with Hadoop's strongly typed APIs. This assumption may prove to be incorrect in the long run (Clojure's strong Java integration might make it a better fit, for example) but it provides a starting point.
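As a quick illustration of the idea (the snippet below is purely illustrative and has nothing to do with the Hadoop API), a Scala closure can simply capture a compiled pattern and a group number from the scope in which it is defined; the equivalent of the setup() work travels with the function:

import java.util.regex.Pattern

// Illustrative only: the closure "contains" pattern and group from its
// defining scope, so there is no separate setup() step needed to rebuild
// that state before the function can be applied.
val pattern = Pattern.compile("dfs[a-z.]+")
val group = 0

val grep: String => List[String] = { line =>
  val matcher = pattern.matcher(line)
  var hits = List.empty[String]
  while (matcher.find()) hits ::= matcher.group(group)
  hits.reverse
}

println(grep("dfs.block.size and dfs.replication are dfs settings"))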

Note that defining these functions inline would sacrifice the kind of re-use that distinct classes provide. This only matters if the map and/or reduce functions are sufficiently complex; if they boil down to something in the neighbourhood of a one-liner then re-use isn't a large concern.

Finally, it's worth noting that the problem here does not stem from Hadoop itself. Implementing this functionality in discrete classes is a reasonable approach when the language doesn't provide a clear mechanism for expressing anonymous functions.

A sample


To make this more concrete let's analyze the "grep" sample job referenced in the document describing a single-node Hadoop setup. The main work is performed by a single job using a regex-based map implementation and a reducer which does little more than sum up long values. The meat of the mapping class is as follows:

org.apache.hadoop.mapreduce.lib.map.RegexMapper

public class RegexMapper<K> extends Mapper<K, Text, Text, LongWritable> {

  public static String PATTERN = "mapreduce.mapper.regex";
  public static String GROUP = "mapreduce.mapper.regexmapper..group";
  private Pattern pattern;
  private int group;

  public void setup(Context context) {
    Configuration conf = context.getConfiguration();
    pattern = Pattern.compile(conf.get(PATTERN));
    group = conf.getInt(GROUP, 0);
  }

  public void map(K key, Text value, Context context)
      throws IOException, InterruptedException {
    String text = value.toString();
    Matcher matcher = pattern.matcher(text);
    while (matcher.find()) {
      context.write(new Text(matcher.group(group)), new LongWritable(1));
    }
  }
}


The reducer implementation doesn't fare much better:


org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer

public class LongSumReducer<KEY> extends Reducer<KEY, LongWritable, KEY, LongWritable> {

  private LongWritable result = new LongWritable();

  public void reduce(KEY key, Iterable<LongWritable> values,
      Context context) throws IOException, InterruptedException {
    long sum = 0;
    for (LongWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}


That's a lot of boilerplate to implement a pair of very simple functions. Surely we can do better with a language that offers native support for many elements of functional programming.
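For comparison, here's a rough sketch of the same two operations expressed as plain Scala functions. This is purely illustrative (it is not the code from the port described in the next section), and the wiring back into Hadoop's Mapper and Reducer classes is deliberately omitted; that wiring is exactly what the rest of this post is concerned with:

import java.util.regex.Pattern
import org.apache.hadoop.io.{LongWritable, Text}

// Illustrative sketch only: the grep map and reduce operations as plain
// Scala functions, minus the Mapper/Reducer plumbing.
object GrepFunctions {

  // Emit one Text per regex match found in the input line.
  def grepLine(pattern: Pattern, group: Int)(value: Text): Seq[Text] = {
    val matcher = pattern.matcher(value.toString)
    val found = scala.collection.mutable.ListBuffer[Text]()
    while (matcher.find()) found += new Text(matcher.group(group))
    found.toList
  }

  // Sum the counts associated with a single key.
  def sumCounts(values: Iterable[LongWritable]): LongWritable =
    new LongWritable(values.map(_.get).sum)
}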

An initial implementation


As a first step we converted the "grep" example into Scala and ran it using Hadoop 0.21. The conversion itself was fairly straightforward and the results can be found on github. At present the port isn't much more than a translation of the existing Java code into somewhat more idiomatic Scala; the hope is that it will provide a basis for future work. The code has been verified to run using the following steps:


  1. Add the Scala runtime to the Hadoop classpath by editing hadoop-env.sh:
    export HADOOP_CLASSPATH=$HOME/local/scala/lib/scala-library.jar

  2. Replace org.apache.hadoop.examples.Grep in mapred-examples with our Scala implementation:
    jar uvf hadoop-mapred-examples-0.21.0.jar org/apache/hadoop/examples/Grep*
  3. bin/hdfs namenode -format

  4. bin/start-dfs.sh

  5. bin/start-mapred.sh

  6. bin/hadoop fs -put conf input

  7. bin/hadoop jar hadoop-mapred-examples-0.21.0.jar grep input output-scala 'dfs[a-z.]+'

  8. bin/hadoop fs -cat output-scala/*



Future work


In order to get the most out of writing Hadoop jobs in Scala we need to be able to define these functions directly within the job class. This is complicated somewhat by the current Hadoop architecture; jobs are configured to reference mapper and/or reducer classes rather than instances of those classes. Something akin to Groovy's support for using a closure as an implementation of a class with a Single Abstract Method (SAM) might be useful here, but as near as I can tell Scala does not offer such support.
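To make the goal concrete, here's a purely hypothetical sketch of what an instance-based API might allow. Nothing below exists in Hadoop: JobSketch and EmitterSketch are stand-ins invented for illustration, and today a captured closure would not survive the framework's reflective, class-based instantiation of mappers and reducers.

import java.util.regex.Pattern
import org.apache.hadoop.io.{LongWritable, Text}

// Hypothetical only: stand-in traits for an instance-based job API that
// Hadoop does not currently provide.
trait EmitterSketch[K, V] { def write(key: K, value: V): Unit }

trait JobSketch {
  def setMapper[KI, VI, KO, VO](f: (KI, VI, EmitterSketch[KO, VO]) => Unit): Unit
  def setReducer[KI, VI, KO, VO](f: (KI, Iterable[VI], EmitterSketch[KO, VO]) => Unit): Unit
}

object GrepJobSketch {
  // With an API like this the whole grep job collapses to two closures that
  // capture the compiled pattern and group number from the enclosing scope.
  def configure(job: JobSketch, regex: String, group: Int) {
    val pattern = Pattern.compile(regex)

    job.setMapper { (key: LongWritable, line: Text, out: EmitterSketch[Text, LongWritable]) =>
      val matcher = pattern.matcher(line.toString)
      while (matcher.find()) out.write(new Text(matcher.group(group)), new LongWritable(1))
    }

    job.setReducer { (key: Text, counts: Iterable[LongWritable], out: EmitterSketch[Text, LongWritable]) =>
      out.write(key, new LongWritable(counts.map(_.get).sum))
    }
  }
}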

Comments:

  1. Take a look at the work I'm doing in MapReduce serialization. In particular, look at MapReduce-1183 where we want to replace the mapper, reducer, etc. classes with instances when the jobs are defined. That will help a lot.

    I already have a patch that has a more general serialization model that supports protocol buffers, thrift, writables, and avro objects in sequence files.

  2. That patch is on Hadoop-6685. https://issues.apache.org/jira/browse/HADOOP-6685

  3. Owen,

    Thanks for the pointers! It does look like MAPREDUCE-1183 would go a long way towards addressing this issue. Your serialization work does raise some interesting ideas for serialized closures in Scala and other JVM languages. Hmm, might have to look into that a bit more...
