Sunday, June 19, 2011

First steps from node.js to concurrent JavaScript

I've always been somewhat ambivalent about node.js.

The project has received praise for advancing the idea of event-based server development. This argument never seemed terribly persuasive; Twisted and EventMachine (among others) have been plowing that soil for some time now and it's not clear that node.js adds anything new in this space. This does not mean that the project has made no contribution. The most impressive accomplishment of node.js has been to show what JavaScript can do when let out of its browser-shaped cage. Crockford and others have put in a considerable amount of work to demonstrate that there's a fully-formed programming language hiding behind the event handlers and DOM manipulation. node.js provides further evidence that they were right all along.

The functional side of JavaScript (anonymous functions and/or closures specifically) lends itself to asynchronous network programming. But what about other paradigms? Suppose we're writing a concurrent application with multiple synchronous "threads" [1] of control. Let's further suppose that these "threads" implement a share-nothing architecture and communicate via messaging. Does JavaScript have anything to offer our application? We would expect the answer to be "yes"; in addition to anonymous functions/closures serving as message handlers we might also lean on the easy creation of arbitrary objects to achieve something resembling Erlang's tuples. But that's just speculation. In order to know for sure we should try our model on an actual concurrent problem or two.
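As a rough sketch of that model (in Python rather than JavaScript, purely for illustration; the worker function, queue names and message tags here are all invented), a share-nothing "thread" communicating only via messages might look like:

```python
import threading
import queue

def worker(inbox, outbox):
    # A share-nothing "thread": all state stays local to this function;
    # the only communication is through the two queues.
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut down
            break
        tag, value = msg         # messages as plain tuples, loosely
        if tag == "double":      # analogous to Erlang's tuples
            outbox.put(("result", value * 2))

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
inbox.put(("double", 21))
result = outbox.get(timeout=5)
inbox.put(None)
t.join()
print(result)  # ('result', 42)
```

Each worker owns its state and sees the outside world only through its inbox and outbox, which is the shape we'd like our JavaScript goroutines to take.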

But let's not get too far ahead of ourselves. Before we dig into any samples we should probably define our framework. We'll start with the Go language. Go's support for lightweight processes (i.e. goroutines) maps naturally onto the "threads" we refer to above so it's a natural match for our problem space. We can also easily define our goal in terms of goroutines: we wish to create the ability to implement a goroutine in JavaScript. Finally, Go's built-in support for native code should facilitate interaction with our JavaScript engine. While we're on that subject, we'll follow the lead of node.js and use v8 as our JavaScript implementation.

The ability to implement goroutines in JavaScript requires at least three components:


  • Obviously we require the ability to execute arbitrary JavaScript from within a Go program

  • We'll also need to add bindings to the v8 environment from within Go. This allows us to pass channels and other objects into the JavaScript code

  • JavaScript code running in v8 should be able to interact with Go code. Our JavaScript goroutines must be able to send and receive messages from Go channels



We begin with the execution of JavaScript from within Go. The language supports good integration with C code, but v8 is written in C++. A few previous efforts work around this mismatch by way of a C shim into the C++ code. This works but seems somewhat artificial. Go now supports the use of SWIG to interact with C++; in fact it's the recommended method for interacting with C++ code. We'll start there and see where it takes us.

Initially we try to expose the various C++ structures referenced in the v8 sample documentation to Go via SWIG, but this approach rapidly runs aground for a couple of reasons. First, the v8 API includes a few stack-allocated data structures which won't persist across SWIG calls. Second, the API also makes some use of nested C++ classes, a feature not yet supported by SWIG [2]. So we shift gears a bit; we should be able to define a set of classes and/or static functions which can do the heavy lifting for us. This approach enables us to execute JavaScript, but unfortunately we can't easily bind Go objects or expose Go code in the JS environment. v8 does allow us to define C++ functions that can be made available to that environment, but SWIG can only expose C/C++ code to higher level languages; it can't do the inverse. As a consequence there's no way to make any C++ function we inject aware of the Go code that's interacting with it. JavaScript code thus cannot interact with channels, so at the moment the best we can do is to return a value from our JavaScript function which can then be accessed from within the Go code.


[@varese v8-golang]$ make buildshell
swig -c++ -go v8wrapper.i
g++ -c -I/home/fencepost/svn/v8/include -fpic v8wrapper.cc
g++ -c -I/home/fencepost/svn/v8/include -fpic v8wrapper_wrap.cxx
g++ -shared -L/home/fencepost/svn/v8 v8wrapper_wrap.o v8wrapper.o -o v8.so -lv8
8g v8.go
8c -I/home/fencepost/hg/go/src/pkg/runtime v8_gc.c
gopack grc v8.a v8.8 v8_gc.8
8g -I. v8shell.go
8l -L . -r . -o v8shell v8shell.8
[@varese v8-golang]$ more fib.js
// Simple accumulator-based Fibonacci implementation
function fib(a,b,count) {
  function _fib(_a,_b,_count,_accum) {
    if (_count <= 0) {
      return _accum;
    }
    var n = _a + _b;
    return _fib(_b,n,(_count - 1),_accum + "," + n);
  }
  return _fib(a,b,(count - 2),a + "," + b);
}
fib(0,1,10);

[@varese v8-golang]$ ./v8shell -script=fib.js
Using JavaScript file: fib.js
Result: 0,1,1,2,3,5,8,13,21,34


The above code was built with SWIG 2.0.4. Full code can be found on github.

This technique works but isn't very satisfying. We don't have communication via channels, which in turn prevents us from having long-running JavaScript as our goroutine. We're also constrained to returning strings from our JavaScript functions, hardly a general purpose solution. We'll try to move past these limitations in future work.


[1] Think "thread of execution" rather than something your operating system might recognize
[2] There are some workarounds but nothing that seemed viable for our purposes

Tuesday, May 24, 2011

Cassandra and Clojure: The Beginning of a Beautiful Friendship

Soon after I began working with Cassandra it became clear to me that if you were in the market for a platform for creating applications that interact with this database you could do a lot worse than Clojure. The lack of a query language [1] suggests that filtering and slicing lists of keys and columns might be a fairly common activity for apps powered by Cassandra. And while many languages support the map/filter/reduce paradigm Clojure's use of sequences throughout the core suggests a natural means to integrate this data into the rest of your application.

Cassandra itself provides an API that uses the Thrift protocol for manipulating data. We'll use this interface to implement a simple proof-of-concept application that might serve as a testbed for manipulating data managed by Cassandra in idiomatic Clojure. Note that the Clojure ecosystem already includes several open-source projects that connect Clojure to Cassandra: these include clj-cassandra and clj-hector, the latter leveraging the Hector Java client to do its work. In order to keep things simple we choose to avoid any of these third-party libraries; it's not as if the Thrift interface imposes a heavy burden on us. Let's see how far we can get with what's already in the package.

That sounds great... so what exactly are we trying to do? Beginning with the database generated during our previous work with Cassandra we should be able to access sets of keys within a keyspace and a set of columns for any specific key. These structures should be available for manipulation in idiomatic Clojure as sequences. Ideally these sequences would be at least somewhat lazy and transparently support multiple datatypes. [2]

Using the Thrift interface requires working with a fair number of Java objects representing return types and/or arguments to the various exposed functions. My Clojure isn't yet solid enough to hash out Java interop code without flailing a bit so we kick things off with a Scala implementation. This approach allows us to simplify the interop problem without sacrificing the functional approach, all within a language that is by now fairly familiar.

The Scala code includes a fair number of intermediate objects but is otherwise fairly clean:



Translating this code into Clojure is more straightforward than expected:



These results seem fairly promising, although we're nowhere near done. This code assumes that all column names and values are strings, a perfectly ridiculous assumption. We also don't offer any support for nested data types, although in fairness this was a failing of our earlier work as well. Finally we haven't built in much support for lazy evaluation; we lazily convert column names to Clojure keywords but that's about it. But fear not, gentle reader; we'll revisit some or all of these points in future work.

[1] At least until CQL arrives in Cassandra 0.8
[2] We won't be able to meet these last two goals in our initial implementation, but with any luck we'll be able to revisit them in future work

Sunday, May 1, 2011

Scala, Bloom Filters and Shakespeare: Introduction

The Problem


Maybe it was a recent New York Times review of a new production of "Macbeth" starring John Douglas Thompson. Maybe it was a reflection on Kurosawa's use of Shakespearean plots and devices within some of his best work. Maybe it was just one of those random thoughts that pop into your head while walking out the door in the morning. Its origin may be hidden from me now, but recently a question entered my mind and refused to leave:

What percentage of the words in an arbitrary play by Shakespeare cannot be found in a common modern dictionary?

There's no doubt that some of Shakespeare's language would now be considered archaic. Quite a bit of it is still in common use today, however, and some usages have made their way into the lexicon via the modest success enjoyed by the Bard through the years. So how big is that set of archaic words? Is it a majority, a small minority or something else entirely?

After pondering a few approaches to this question it became clear that there was room for a fair amount of further work. A standalone program could help us answer this question but there was no reason to stop there. The solution to this problem lends itself naturally to decomposition, and that fact in turn suggests that this problem might be an interesting vehicle for exploring more complex solutions. With any luck we'll get to this exploration in later posts, but for now our goal is to describe the problem space and provide the general framework of a solution.

A Few Solutions


Broadly speaking we will answer this question by constructing a filter containing all the terms in our dictionary. We will then extract all individual words from the script of an arbitrary play and apply our filter to each of these words. Simple, right?

We might consider using a Bloom filter to implement our filter. We would expect a Bloom filter to be a very compact yet very efficient representation of the contents of our input dictionary. The issue is that any Bloom filter produces some number of false positives: values for which our filter should return false (i.e. "not present") but returns true instead. This problem is offset somewhat by the fact that a negative result from a Bloom filter will always be accurate, but in the end this approach will give us at best a close approximation to the actual value. For our purposes this should be adequate.
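To make the trade-off concrete, here is a toy Bloom filter in Python (not the implementation discussed in this post; the hash scheme, table size and hash count are arbitrary choices for the sketch):

```python
import hashlib

class BloomFilter:
    # Minimal sketch: k hash functions derived from salted SHA-1 digests,
    # with the m-bit table modeled as a Python set of bit indexes.
    def __init__(self, m_bits=1 << 20, k=3):
        self.m, self.k, self.bits = m_bits, k, set()

    def _indexes(self, word):
        for i in range(self.k):
            digest = hashlib.sha1(f"{i}:{word}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, word):
        self.bits.update(self._indexes(word))

    def __contains__(self, word):
        # May return a false positive; never a false negative.
        return all(ix in self.bits for ix in self._indexes(word))

bf = BloomFilter()
for w in ("thou", "art", "hurlyburly"):
    bf.add(w)
assert "hurlyburly" in bf     # added words are always reported present
print("laptop" in bf)         # usually False, but could be a false positive
```

The membership test can lie in only one direction, which is exactly the behavior described above.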

Note: this post is not intended to be a primer on Bloom filters or their implementation. If you're looking for such a thing there are already excellent sources available.

An alternate approach might be to implement a filter using a search index such as Lucene. The result would not be as compact as a Bloom filter but it would certainly offer a more accurate result.

To make things interesting we choose to create both versions. We can live with a good approximation so we focus on the Bloom filter implementation. The Lucene implementation will be used primarily as a baseline for comparing performance and accuracy of results. In order to justify this approach we impose the following additional constraints on the Bloom filter implementation:


  • It must outperform the Lucene version

  • It should use a minimal amount of memory



The resulting code can be found on github.

Observations Along the Way



Extracting words from plays


Shakespeare's plays are available in XML format from ibiblio. We are still tasked with extracting individual words from these documents, however, and our concern for resource usage prevents us from simply reading the entire document into memory and querying it via XPath or XQuery. In the end we resolved this issue by implementing what is at heart a finite state machine (FSM) driven by inputs obtained from the XML pull parser provided by the Scala library.
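The same streaming idea can be sketched with Python's pull-style parser (the element names below are invented stand-ins for the ibiblio markup):

```python
import re
import xml.etree.ElementTree as ET
from io import StringIO

# Toy document standing in for a play; "LINE" is a hypothetical element name.
play = StringIO("<PLAY><LINE>Out, damned spot!</LINE>"
                "<LINE>To bed, to bed, to bed.</LINE></PLAY>")

words = []
# iterparse streams parse events instead of building the whole tree in
# memory, roughly analogous to driving a state machine from a pull parser.
for event, elem in ET.iterparse(play, events=("end",)):
    if elem.tag == "LINE" and elem.text:
        words.extend(re.findall(r"[A-Za-z']+", elem.text.lower()))
    elem.clear()  # release parsed content to keep memory usage flat

print(words[:3])  # ['out', 'damned', 'spot']
```

Only one element's worth of text is held at a time, which is the property we need for large documents.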

Minimizing false positives


There are known approaches to constraining the number of false positives produced by a Bloom filter to some reasonable maximum. Unfortunately the size of our data set and our resource constraints prevent us from leveraging these techniques. The number of false positives is also inflated by some of our implementation choices.

The dictionary used in our implementation was provided by the "words" package in Fedora Core 13. The dictionary in this package contains approximately 480,000 words. Assuming distinct internal hash values for every word our Bloom filter should support all 19-bit input values (since the base-2 log of 480,000 is 18.87). To avoid congestion we strive for a filter that is ~50% full, which in turn requires support of all 20-bit inputs. Note that our space constraints make the use of a second hash function unwise; we'd need a much larger capacity for a population size as large as ours. Also note that Bloom filters use a bit set for a backing store and the memory usage of a bit set is driven by the largest value contained in that set. It follows that our filter will use no more than (2 ^ 20)/8 = 131KB of memory.
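The sizing arithmetic above can be checked directly:

```python
import math

words = 480_000
bits_needed = math.ceil(math.log2(words))   # log2(480000) ~ 18.87 -> 19 bits
print(bits_needed)                          # 19

# Aiming for a table that is ~50% full doubles the address space,
# i.e. one more bit: 20-bit indexes.
table_bits = bits_needed + 1
table_size_bytes = (2 ** table_bits) // 8   # bit set backed by 2^20 bits
print(table_size_bytes)                     # 131072 bytes, i.e. ~131 KB
```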

Unfortunately a few of our implementation choices help to artificially inflate our false positive rate. These include:


  • Scala's bit set data structure accepts only non-negative values while the MurmurHash function we use internally can generate arbitrary 32-bit output (including the full range of negative integers on the JVM). We resolve this issue by using the absolute value of the MurmurHash output, with the result that two distinct hash outputs (x and -x) now map to the same output from our internal function(s).

  • Bit set size is constrained by masking some number of bits from the MurmurHash output. This technique also causes multiple MurmurHash outputs to result in the same output from our internal function(s).



The results below suggest a false positive rate on the order of 4%, an exceedingly high value.

Immutable and mutable data structures


The end of a previous post described my struggles with deciding when algorithms based on mutable data were acceptable within Scala's ethos. The language has an obvious bias against mutable data, but it's a bias that seems quite reasonable; the arguments in favor of immutability are quite convincing. Yet mutability is explicitly supported by the language and is understood to be useful in some circumstances.

An example presented itself while completing the Bloom filter implementation. The original construction used a foldLeft operation to reduce the set of internal hash values into a single bit set:


private val bloom = (BitSet() /: (vals map evalAllFns))(_ ++ _)


Unfortunately this approach required the creation of a new BitSet instance for each input value, a potentially costly process. The performance was quite poor:


[@varese scala_bloom]$ time scala -classpath target/scala_2.8.1/classes:lib/lucene-core-3.1.0.jar org.fencepost.bloom.FilterApplication macbeth.xml /usr/share/dict/words lucene
Filter constructed in 12295 milliseconds
Total words in script: 16347
Matches: 14328
Searching completed in 51 milliseconds

real 0m17.116s
user 0m9.666s
sys 0m4.980s
[@varese scala_bloom]$ time scala -classpath target/scala_2.8.1/classes:lib/commons-lang-2.6.jar org.fencepost.bloom.FilterApplication macbeth.xml /usr/share/dict/words bloom
Filter constructed in 133928 milliseconds
Total words in script: 16347
Matches: 15046
Searching completed in 10 milliseconds

real 2m19.174s
user 0m27.765s
sys 1m40.000s


Clearly searching wasn't the problem. The performance was vastly improved by shifting to something based on mutable data structures:


import scala.collection.mutable.BitSet

private val bloom = BitSet()
for (aval <- (vals flatMap evalAllFns)) { bloom += aval }



[mersault@varese scala_bloom]$ time scala -classpath target/scala_2.8.1/classes:lib/commons-lang-2.6.jar org.fencepost.bloom.FilterApplication macbeth.xml /usr/share/dict/words bloom
Filter constructed in 7525 milliseconds
Total words in script: 16347
Matches: 14992
Searching completed in 9 milliseconds

real 0m11.872s
user 0m6.779s
sys 0m4.476s


In this case the mutability is contained within a single class: the bit set is itself a val and is never modified after being generated at construction. That said, the process for determining when mutable data should be used is still somewhat opaque.

Tuesday, March 29, 2011

Data Models and Cassandra

In a pair of previous posts we considered how we might use CouchDB to store and query a collection of tweets and tweet authors. The goal was to explore the document-oriented data model by constructing queries across multiple types of documents in a loose approximation to a join statement in SQL. Cassandra implements a data model built on ColumnFamilies which differs from both SQL and document-oriented databases such as CouchDB. Since we already have a problem space defined we begin our exploration of Cassandra by attempting to answer the same questions using a new data model.

The examples below are implemented in Python with a little help from a few excellent libraries:


  • Pycassa for interacting with Cassandra.

  • Twitter tools when we have to interact with Twitter via their REST API



We'll begin by looking at how we might query a Cassandra instance populated with data in order to find the answers to the questions in our problem space. We'll close by briefly discussing how to get the data into Cassandra.

We should be all set; bring on the questions!

For each author: what language do they write their tweets in and how many followers do they have?



The organizing structure of the Cassandra data model is the column family defined within a keyspace. The keyspace is exactly what it sounds like: a collection of keys, each identifying a distinct entity in your data model. Each column family represents a logical grouping of data about these keys. This data is represented by one or more columns, each of which is really little more than a tuple containing a name, a value and a timestamp. A column family can contain one or more columns for a key in the keyspace, and as it turns out you have a great deal of flexibility here; columns in a column family don't have to be defined in advance and do not have to be the same for all keys.
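Loosely speaking, a column family can be pictured as a map of maps. A hypothetical Python sketch (illustrative only; this says nothing about how Cassandra actually stores data on disk):

```python
import time

now = time.time()
# One column family ("authors"): row key -> column name -> (value, timestamp).
# The data below is purely illustrative.
authors = {
    "Alanfbrum":    {"lang": ("en", now), "followers_count": ("73", now)},
    "AloisBelaska": {"lang": ("en", now), "followers_count": ("8", now)},
    "DASHmiami3":   {"lang": ("en", now)},   # rows need not share columns
}

for key in sorted(authors):
    cols = authors[key]
    followers = cols.get("followers_count", ("?", None))[0]
    print(f"Key: {key}, language: {cols['lang'][0]}, followers: {followers}")
```

Note that the third row simply lacks a column the other rows have, with no schema change required.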

We begin by imagining a column family named "authors" in a keyspace whose keys are users' Twitter handles or screen names. Assume that for each of these keys the "authors" column family contains a set of columns, one for each property returned by the "user/show" resource found in the Twitter REST API. Let's further assume that included in this data are fields named "lang" and "followers_count" and that these fields correspond to exactly the data we're looking for. We can satisfy our requirement by using a range query to identify all keys that fall within a specified range. In our case we want to include all alphanumeric screen names so we query across the range of printable ASCII characters. The Pycassa API makes this very easy [1]:



The result is exactly what we wanted:

[@varese src]$ python Query1.py
Key: Alanfbrum, language: en, followers: 73
Key: AloisBelaska, language: en, followers: 8
Key: DASHmiami3, language: en, followers: 11
Key: DFW_BigData, language: en, followers: 21


How many tweets did each author write?



Okay, so we've got the idea of a column family; we can use them to define a set of information for keys in our keyspace. Clearly this is a useful organizing principle, but in some cases we need a hierarchy that goes a bit deeper. The set of tweets written by an author illustrates just such a case: tweets are written by a specific author, but each tweet has data of its own (the actual content of the tweet, a timestamp indicating when it was written, perhaps some geo-location info, etc.). How can we represent this additional level of hierarchy?

We could define a new row key consisting of some combination of the author's screen name and the tweet ID, but this doesn't seem terribly efficient; identifying all tweets written by an author is now unnecessarily complicated. Fortunately Cassandra provides the super column family, which meets our needs exactly. The value of each column in a super column family is itself a collection of regular columns.
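Extending the map-of-maps picture by one level gives a reasonable mental model of a super column family (again a hypothetical sketch with invented data):

```python
# Super column family "tweets": row key -> super column (tweet id)
#   -> regular columns for each tweet field. Illustrative data only.
tweets = {
    "Mesoziinha": {
        "101": {"text": "first tweet",  "created_at": "2011-03-28"},
        "102": {"text": "second tweet", "created_at": "2011-03-29"},
    },
    "MeqqSmile": {
        "103": {"text": "another tweet", "created_at": "2011-03-29"},
    },
}

for author in sorted(tweets):
    print(f"Authors: {author}, tweets written: {len(tweets[author])}")
```

Counting an author's tweets is then just counting the super columns under that author's key.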

Let's apply this structure to the problem at hand. Assume that we also have a super column family named "tweets" within our keyspace. For each key we define one or more super columns, one for each tweet written by the author identified by the key. The value of any given super column is a collection of columns, one for each field contained in the results we get when we search for tweets using Twitter's search API. Once again we utilize a range query to access the relevant keys:



Running this script gives us the following:

[@varese src]$ python Query2.py
Authors: Alanfbrum, tweets written: 1
Authors: AloisBelaska, tweets written: 1
Authors: DASHmiami3, tweets written: 1
Authors: DFW_BigData, tweets written: 1
...
Authors: LaariPimenteel, tweets written: 2
Authors: MeqqSmile, tweets written: 1
Authors: Mesoziinha, tweets written: 2


How many tweet authors are also followers of @spyced?



This problem presented the largest challenge when trying to model this problem space using CouchDB. Somehow we have to relate one type of resource (the set of followers for a Twitter user) to the set of authors defined in our "authors" column family. The Twitter REST API exposes the set of user IDs that follow a given user, so one approach to this problem might be to obtain these IDs and, for each of them, query the "authors" table to see if we have an author with a matching ID. As for the user to search for... well, when we were working with CouchDB we used @damienkatz so it only seems logical that we use Jonathan Ellis (@spyced) in this case.

Newer versions of Cassandra provide support for indexed slices on a column family. The database maintains an index for a named column within a column family, enabling very efficient lookups for rows in which the column matches a simple query. Query terms can test for equality or whether a column value is greater than or less than an input value [2]. Multiple search clauses can be combined within a single query, but in our case we're interested in strict equality only. Our solution to this problem looks something like the following:
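The script itself isn't reproduced here, but the core of the lookup is a simple membership test. With both data sources stubbed out as plain Python data (all names and IDs below are invented), the logic amounts to:

```python
# Hypothetical stand-ins for the two data sources.
follower_ids = {"1001", "1002", "1003", "1004"}   # follower IDs of @spyced
authors_by_id = {                                 # from the indexed "authors" lookups
    "1001": "tlberglund",
    "1003": "evidentsoftware",
    "9999": "someone_else",
}

# An author counts if their ID appears in the follower set.
matches = [authors_by_id[i] for i in sorted(follower_ids) if i in authors_by_id]
for name in matches:
    print(name)
print(len(matches))
```

In the real script each `authors_by_id` probe is an indexed-slice query against the "authors" column family rather than a dictionary lookup.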



The result looks pretty promising:

[@varese src]$ python Query3.py
tlberglund
evidentsoftware
sreeix
...
joealex
cassandra
21


Some spot checking using the Twitter Web interface seems to confirm these results. [3]

Populating the data



So far we've talked about accessing data from a Cassandra instance that has already been populated with the data we need. But how do we get it in there in the first place? The answer to this question is a two-step process; first we create the relevant structures within Cassandra and then we use our Python tools to gather and add the actual data.

My preferred tool for managing my Cassandra instance is the pycassaShell that comes with Pycassa. This tool makes it easy to create the column families and indexes we've been working with:

SYSTEM_MANAGER.create_keyspace('twitter',replication_factor=1)
SYSTEM_MANAGER.create_column_family('twitter','authors',comparator_type=UTF8_TYPE)
SYSTEM_MANAGER.create_column_family('twitter','tweets',super=True,comparator_type=UTF8_TYPE)
SYSTEM_MANAGER.create_index('twitter','authors','id_str',UTF8_TYPE)


There are plenty of similar tools around; your mileage may vary.

When it comes to the heavy lifting, we combine Pycassa and Twitter tools into a simple script that does everything we need:



[1] For sake of brevity this discussion omits a few details, including the configuration of a partitioner and tokens in order to use range queries effectively and how keys are ordered. Consult the project documentation and wiki for additional details.

[2] Here "greater than" and "less than" are defined in terms of the type provided at index creation time.

[3] You could complain that we're cheating a bit here. When we were working with CouchDB we were tasked with joining two distinct "types" of resources using a map-reduce operation applied to documents within the database; that was the whole point of the exercise. Here we're just querying the DB in response to some external data source. This is true to a point, but in my defense it's worth noting that we could easily create a "followers" column family containing the followers @spyced and then execute the same logic against this column family rather than the Twitter REST API directly. This isn't a very satisfying answer, however; this issue could very well be taken up in a future post.

Monday, March 7, 2011

A Few More Words on Laziness

A few updates to the recent post on lazy evaluation. While most of the discussion below is directed at the Ruby implementation the same principles could also be applied to the Scala and/or Clojure versions. The implementations in these two languages are already fairly speedy, however, so for now we'll focus on bringing the Ruby version into line.

So, without further ado...

Don't blame JRuby


Of the three implementations discussed in the previous post the Ruby version was by far the worst in terms of performance. This fact should in no way be interpreted as a criticism of JRuby itself; the fault rests entirely on the code being executed (and by extension the abilities of the programmer who wrote it), not the platform it runs on. To prove the point, consider the following:


[@varese ruby]$ more Euler10Primes.rb
require 'prime.rb'

puts Prime.each(2000000).reduce(0) { |i,total| total + i }
[@varese ruby]$ time jruby --1.9 Euler10Primes.rb
142913828922

real 0m27.737s
user 0m20.807s
sys 0m6.181s


Clearly we have work to do.

Remember that part about testing fewer candidates?


Our initial algorithm tested a candidate set of all odd numbers beginning with three. The underlying logic is two-fold:


  • Any even number is by definition divisible by two (a prime) and therefore not prime

  • We can exclude these numbers by generating candidates in a clever way (starting from an odd value and incrementing by two) rather than testing every candidate to see if it's even or odd



As a first optimization we consider this process for generating candidates. Is there any room for improvement here? As it turns out there is; approximately 20% of the time we're testing a candidate that could be excluded with zero computation. Permit me to explain.

Multiples of five (a prime) always end in zero or five. Anything ending in zero must be even (and therefore already excluded) but so far we haven't excluded anything ending in five. And since one out of every five of our current candidate pool ends in five we're doing exactly 20% more work than we need to. Okay, so how do we keep these values out of our set of candidates? We know that odd numbers must end with a fixed range of values (1,3,5,7 or 9). If we exclude five from that list and then add the remaining values to successive multiples of ten we'll generate the full set of potential candidate values.
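A generator for this candidate scheme is easy to sketch in Python. Note that the prime 5 itself is excluded along with 2, so both would need to be seeded separately, just as the original algorithm seeds 2:

```python
import itertools

def candidates():
    # Generate odd candidates whose last digit is never 5: the endings
    # 1, 3, 7 and 9 added to successive multiples of ten. The primes 2
    # and 5 fall outside this sequence and must be handled separately.
    base = 0
    while True:
        for ending in (1, 3, 7, 9):
            if base + ending >= 3:   # skip 1
                yield base + ending
        base += 10

print(list(itertools.islice(candidates(), 10)))
# [3, 7, 9, 11, 13, 17, 19, 21, 23, 27]
```

Nothing ending in 0, 2, 4, 5, 6 or 8 is ever produced, so no divisibility test is spent on those values.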

This optimization alone yields some benefit:


[@varese ruby]$ time jruby Euler10.rb
142913828922

real 2m42.012s
user 0m14.394s
sys 2m23.462s


There's still plenty of room for improvement.

Laziness can be fleeting


Languages that fully support lazy evaluation try not to return values until absolutely necessary. Suppose you want to return the first three items from a lazily defined sequence in Clojure. You might use "(take 3 coll)", but note that this doesn't return an array containing three values; it returns another lazy sequence. You could thus chain operations together without completely evaluating (or "realizing" as the Clojure folks are fond of saying) anything. Given a predicate "pred", for example, you could identify which of these values satisfy pred using "(filter pred (take 3 coll))". This expression also returns a lazy sequence; if you want to return the values you need something like "(doall (filter pred (take 3 coll)))".
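Python generators give a close analog of this chaining; islice and filter both return lazy iterators, and nothing is realized until the final list call:

```python
import itertools

def naturals():
    n = 0
    while True:
        yield n
        n += 1

pred = lambda x: x % 2 == 0

# Neither islice nor filter computes anything here; both return lazy iterators.
pipeline = filter(pred, itertools.islice(naturals(), 3))

# Only the list() call, playing the role of Clojure's doall, realizes values.
result = list(pipeline)
print(result)  # [0, 2]
```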

Ruby does not support lazy evaluation. We've been using "yield" to approximate lazy evaluation but it's only an approximation. In Ruby "coll.take 5" will return an array containing the first five items in the collection rather than another Enumerable that will generate these values as needed. As a result we're once again doing more work than we need to. And unfortunately this is exactly what we find in our initial Ruby implementation:


if @primearr.take_while {|i| i <= root }.all? {|i| @nat % i != 0}
  @primearr << @nat
  yield @nat
end


On the plus side we're only returning all primes up to the square root of our candidate. On the negative side we completely negate the fail-fast functionality of the "all?" method. Wouldn't it be better if we could exit this process as soon as we find a prime that divides evenly into the candidate?

This problem is actually a bit trickier than it looks. We can exit our test by finding a prime that divides evenly into our candidate (indicating that the candidate isn't prime) or by finding a prime greater than the square root of the candidate (indicating that the candidate is prime). How can we distinguish between these two cases? One way to implement this test is to combine both conditions into a single predicate and find the first prime that satisfies it. If that prime is greater than the square root of the candidate we have a prime, otherwise we don't. We know this find operation will always be successful since eventually we'll find a prime greater than the square root of the candidate.
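The combined predicate translates naturally; here is a Python sketch of the idea (not the Ruby code from the post):

```python
import math

primes = [2, 3, 5, 7, 11, 13]   # primes found so far, in ascending order

def is_prime(candidate, primes):
    root = math.isqrt(candidate)
    # Find the first prime that either divides the candidate or exceeds
    # its square root; the search always succeeds as long as the list
    # extends past sqrt(candidate).
    first = next(p for p in primes if candidate % p == 0 or p > root)
    return first > root

print(is_prime(121, primes))  # False: 11 divides 121
print(is_prime(97, primes))   # True
```

The search stops at the first prime satisfying either condition, restoring the fail-fast behavior lost above.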

The resulting code (including the candidate generation fix mentioned above) looks like this:



So, how'd we do?


[@varese ruby]$ time jruby Euler10.rb
142913828922

real 0m43.730s
user 0m33.408s
sys 0m9.040s


That's more like it! This code overtakes the original Clojure implementation to move into second place. We're still slower than the implementation using the prime number generator that comes with JRuby but we've made up quite a bit of ground.

You don't need a JVM to be lazy


If you're interested in lazy evaluation you could do a lot worse than spending some time with Haskell. The language fully embraces lazy evaluation which, according to some people, matters a great deal to the utility of functional programming.

An initial implementation of our algorithm in Haskell looks something like this:



Unfortunately the performance is dreadful:


[@varese haskell]$ ghc -o euler10 euler10.hs
[@varese haskell]$ time ./euler10
142913828922

real 4m50.253s
user 0m8.976s
sys 4m35.131s


Once again it's important to realize that these numbers don't reflect on Haskell or the GHC in any way. They do reflect on my utter lack of experience and/or skill with this language. There's almost certainly a better (and probably more idiomatic) way to implement this algorithm in Haskell, although in my defense even this performance is orders of magnitude better than what we get using the Sieve of Eratosthenes implementation from Hutton's text:


[@varese haskell]$ more euler10sieve.hs
module Main where

primes = sieve [2..]
sieve(p:xs) = p:sieve[x|x <- xs, x `mod` p /= 0]

main = print (foldr (+) 0 (takeWhile (<2000000) primes))
[@varese haskell]$ ghc -o euler10sieve euler10sieve.hs
[@varese haskell]$ time ./euler10sieve
142913828922

real 401m25.043s
user 0m0.069s
sys 394m13.061s


In the end, of course, this probably indicates only that the Sieve of Eratosthenes, at least in this naive form, is not an ideal solution to this particular problem.

Thursday, March 3, 2011

In Defense of Laziness and Being Self-Centered

On a whim I decided to dig into a few more Project Euler problems. Problem 10 was up next, and on the face of things the issue seemed fairly straightforward: compute the sum of all prime numbers less than two million. Unfortunately the simple implementation of the Sieve of Eratosthenes referenced in the Scala documentation (and used in a solution to a previous problem) promptly blows the stack when trying to perform the necessary computations. Clearly we need a more efficient way to compute large collections of primes.

It may seem counter-intuitive, but the brute force method of trial division works very well in this situation. In addition to being relatively easy to implement, it offers several avenues for improving performance. We might try to select fewer candidates for testing, or we might minimize the number of tests for any specific candidate. Ideally we could do both.

As an initial optimization we observe that 2 is the only even prime; if we fix our candidate sequence to return 2 as an initial value and then generate only odd values, we cut our work roughly in half. We keep the number of comparisons per candidate low by using the principle that for an integer N we need only test primes less than or equal to the square root of N in order to determine primality. The square root of two million is a little more than 1414, so we only need to compare a relatively small set of primes for each candidate.

An implementation of this algorithm in Scala looks something like this:
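(The embedded code has been lost; the sketch below reconstructs the described approach against the Scala of the era — `Stream`, today's `LazyList` — with invented names. The `map(_.toLong)` guards against `Int` overflow in the final sum.)

```scala
object Euler10 {

  // Candidates: 2 as a fixed initial value, then only odd numbers
  def candidates: Stream[Int] = 2 #:: Stream.from(3, 2)

  // A candidate n is prime when the first prime p satisfying the combined
  // predicate (p * p > n, or p divides n) also satisfies p * p > n
  def pred(n: Int): Boolean = {
    val p = primes.find(q => q * q > n || n % q == 0).get
    p * p > n
  }

  // Self-referential stream: 2 seeds it; later values are the odd
  // candidates that survive the primality test, which consults primes itself
  lazy val primes: Stream[Int] = 2 #:: candidates.tail.filter(pred)

  def main(args: Array[String]): Unit =
    println("Result: " + primes.takeWhile(_ < 2000000).map(_.toLong).sum)
}
```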



This code is quite performant on my commodity laptop:


[@varese src]$ time scala org.fencepost.Euler10
Result: 142913828922

real 0m18.876s
user 0m15.149s
sys 0m3.042s


Note that in this code the stream "primes" is a self-referential data type. An initial value of 2 is specified, but all subsequent values are determined by filtering the candidate pool using the primes defined (and returned) so far. Don't let the indirection of the "pred" function fool you; it's there only to make the code a bit easier to read. We could just as easily inline the logic of "pred" as an anonymous function in the filter call within "primes".

The definition of self-referential data types is entirely dependent upon the idea of lazy evaluation. If a self-referential type were evaluated immediately we'd quickly find ourselves in a recursive loop with no termination condition. Using lazy evaluation, however, values are only computed as they are needed. We provide a function to generate new values from previous values and enough initial values to provide the necessary input(s) to the first invocation of this function; lazy evaluation handles the rest.

Clojure also includes extensive support for lazy evaluation within sequences; we should be able to use them to implement something similar. A bit of work is required to get the scoping right but eventually we come up with the following:
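(Again the embedded code is missing; this is a reconstruction of the described approach. The "scoping" work mentioned above is handled here with a forward `declare`, since the predicate and the sequence refer to each other; the original may have solved it differently.)

```clojure
(declare primes)

;; A candidate n is prime when the first prime p with p*p > n, or with
;; p dividing n, satisfies p*p > n
(defn prime? [n]
  (let [p (first (filter #(or (> (* % %) n) (zero? (rem n %))) primes))]
    (> (* p p) n)))

;; Self-referential lazy sequence: 2 seeds it, the rest are the odd
;; candidates that survive the test above
(def primes
  (lazy-cat [2] (filter prime? (iterate #(+ % 2) 3))))

(println (reduce + (take-while #(< % 2000000) primes)))
```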



The Clojure implementation isn't as fast as our Scala version but it is still respectable; understand that we are computing just under 150,000 primes here:


[@varese clojure]$ time clojure euler10.clj
142913828922

real 1m2.860s
user 0m51.556s
sys 0m6.327s


As it happens I've been working through the Ruby chapter of Bruce Tate's Seven Languages in Seven Weeks. Bruce strongly encourages experimentation with the various languages described in this text (and I could use the practice) so let's make an attempt to implement the same algorithm in Ruby. The language doesn't offer support for self-referential data types or lazy evaluation but we can get close using an implementation of Enumerable that uses this algorithm to create the values our "each" method will yield. The code I came up with is the following:
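(The original gist no longer renders; the following is a sketch of an Enumerable built around this algorithm — the structure the post describes, with names of my own choosing. Note that it still tests every integer candidate; the odd-only fix comes in a later post.)

```ruby
# An Enumerable whose each method yields primes found by trial division:
# a candidate is prime if no known prime up to its square root divides it.
class Primes
  include Enumerable

  def each
    primes = []
    candidate = 2
    loop do
      if primes.take_while { |q| q * q <= candidate }
               .all? { |q| candidate % q != 0 }
        primes << candidate
        yield candidate
      end
      candidate += 1
    end
  end
end

# Full run, as in the post (takes a few minutes):
#   puts Primes.new.take_while { |p| p < 2_000_000 }.inject(:+)   # => 142913828922
puts Primes.new.take_while { |p| p < 100 }.inject(:+)   # => 1060
```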



I would ask the experienced Ruby hackers in the back to please keep the snickering down.

We test this implementation using JRuby, specifically RC2 of the upcoming 1.6 release. Why JRuby over the alternatives? The other languages we've considered here are JVM languages so it seemed sensible to remain on the JVM unless there was a compelling reason not to. And since JRuby is generally regarded as the fastest Ruby implementation going there was little incentive to leave.


[@varese ruby]$ time jruby Euler10.rb
142913828922

real 3m4.725s
user 0m17.733s
sys 2m41.426s


We've definitely violated Project Euler's "one minute rule" but we haven't strayed into the realm of completely unacceptable performance. That said, there's no doubt that we've taken a step backwards.

Self-referential data types can be a very concise way to represent a stream of data, even a stream of infinite length. They're just one of the ways lazy evaluation can help you think about problems in different ways. If your language of choice supports this feature it's worth your time to experiment with it.

Friday, February 4, 2011

Forks and joins in Scala

A short while ago Alex Miller (aka puredanger) wrote up a blog entry detailing how to use Clojure to interact with the new fork/join concurrency framework to be included with Java 7 (and already available from Doug Lea's site). The meat of Alex's solution is a Java wrapper for Clojure's IFn type which integrates nicely with the fork/join framework. At the time there was some discussion about whether a Scala implementation would require an equivalent amount of scaffolding. It turns out that supporting this functionality in Scala requires a bit more effort than one might expect, although the problems one faces are quite different from those Alex worked through in Clojure. We'll review the steps to a working Scala implementation and see what we can learn from them.

Before we move on, a few notes about fork/join itself. A full overview is outside the scope of this post, however details can be found in the jsr166y section of Doug Lea's site. A solid (although slightly out-of-date) review can be found in a developerWorks article by Brian Goetz.

We begin as simply as possible; a straight port of Alex's code (in this case an implementation of Fibonacci using fork/join) to Scala. We create a named function fib that will be recursively called by our RecursiveTasks. We also use an implicit conversion to map no-arg anonymous functions onto RecursiveTask instances using the technique for supporting SAM interfaces discussed in an earlier post. The resulting code looks pretty good but an attempt to build with sbt gives an unexpected error:


[info] == test-compile ==
[info] Source analysis: 2 new/modified, 0 indirectly invalidated, 0 removed.
[info] Compiling test sources...
[error] .../src/test/scala/org/fencepost/forkjoin/ForkJoinTestFailOne.scala:23: method compute cannot be accessed in jsr166y.RecursiveTask[Int]
[error] (f2 compute) + (f1 join)
[error] ^
[error] one error found


A bit of investigation reveals the problem; RecursiveTask.compute() is a protected abstract method in the fork/join framework. The compute method in the class returned by our implicit conversion consists of nothing more than a call to fib.apply() but there's no way to know this when fib is defined. It follows that an attempt to access RecursiveTask.compute() from within the body of fib is (correctly) understood as an attempt to access a protected method.

How does Alex get around this problem in Clojure? He doesn't; the problem is actually resolved via the Java wrapper. Note that in his Java code compute() has been "elevated" to be a public method. He's thus able to call this method without issue from his "fib-task" function. Without this modification his code fails with similar errors [1].

Strangely, there's no obvious way to resolve this issue within Scala itself. The language provides protected and private keywords to restrict access to certain methods but there is no public keyword; methods are assumed to be public if not annotated otherwise. As a consequence there's no obvious way to "raise" the access level of a method in a subclass. We work around this constraint by way of an egregious hack; we define a new subclass of RecursiveTask containing a surrogate method that provides public access to compute. We then return this subclass from our implicit conversion.
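(A sketch of that hack, reconstructed with invented names; I've used `java.util.concurrent.RecursiveTask`, where jsr166y eventually landed, so the snippet compiles standalone.)

```scala
import java.util.concurrent.RecursiveTask  // jsr166y.RecursiveTask pre-JDK 7
import scala.language.implicitConversions

object TaskConversions {
  // The egregious hack: a RecursiveTask subclass whose public surrogate
  // method forwards to the protected compute()
  abstract class ComputableTask[T] extends RecursiveTask[T] {
    def publicCompute(): T = compute()
  }

  // The implicit conversion now returns the subclass, so converted
  // functions expose a compute we're actually allowed to call
  implicit def fn2task(fn: () => Int): ComputableTask[Int] =
    new ComputableTask[Int] {
      def compute(): Int = fn()
    }
}
```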

Our new implementation now compiles, but when we try to actually execute the test with sbt we see a curious failure: the test starts up and then simply hangs:


[info] == test-compile ==
[info]
[info] == copy-test-resources ==
[info] == copy-test-resources ==
[info]
[info] == test-start ==
[info] == test-start ==
[info]
[info] == org.fencepost.forkjoin.ForkJoinTestFailTwo ==
[info] Test Starting: testFibonacci


Well, this isn't good. What's going on here?

The key point to understand is that the implicit conversion is invoked every time an existing value needs to be converted to a different type. As written, the calls to f1.fork() and f1.join() both require conversion, and the implicit conversion function in play here returns a new instance on each invocation. This means that even though the input to the conversion function is identical in both cases (f1), the object that invokes fork() is a different object from the one that invokes join(). For fork/join tasks this matters: join() is called on a task that was never forked. Voilà, an unterminated wait.

We can resolve this issue in one of two ways:


  • Caching previous values returned by the implicit conversion function and re-using them when appropriate

  • Explicitly specifying the type of f1 and f2 as our wrapper subclass. This has the effect of forcing implicit conversion at the time of assignment rather than when we call fork() and join()



I chose the first approach, mainly because it seemed a bit cooler. Oh, and it's a bit cleaner logically as well. The final result runs without issue.
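(The final listing is missing; the sketch below pulls the pieces together — the surrogate subclass, a caching implicit conversion, and the recursive fib — as a reconstruction with invented names, again using `java.util.concurrent` in place of jsr166y. A concurrent map stands in for whatever cache the original used, and for simplicity it never evicts.)

```scala
import java.util.concurrent.{ForkJoinPool, RecursiveTask}
import scala.collection.concurrent.TrieMap
import scala.language.implicitConversions

object ForkJoinFib {
  // Subclass exposing the protected compute() through a public surrogate
  abstract class ComputableTask[T] extends RecursiveTask[T] {
    def publicCompute(): T = compute()
  }

  // Cache conversions so f1.fork() and f1.join() resolve to the SAME task;
  // each function value is only ever converted by the frame that created it,
  // but the map itself must tolerate concurrent inserts from worker threads
  private val cache = TrieMap.empty[() => Int, ComputableTask[Int]]
  implicit def fn2task(fn: () => Int): ComputableTask[Int] =
    cache.getOrElseUpdate(fn, new ComputableTask[Int] {
      def compute(): Int = fn()
    })

  def fib(n: Int): Int =
    if (n <= 1) n
    else {
      val f1: () => Int = () => fib(n - 1)
      val f2: () => Int = () => fib(n - 2)
      f1.fork()                          // converted once, cached
      f2.publicCompute() + f1.join()     // join() hits the cached task
    }

  def main(args: Array[String]): Unit = {
    val pool = new ForkJoinPool()
    println(pool.invoke(fn2task(() => fib(15))))   // prints 610
  }
}
```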





[1] Using Alex's code (as published on his blog):


bartok ~/git/puredanger_forkjoin $ clojure process.clj
(1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181)


After changing IFnTask.compute() to protected (with no other changes):


bartok ~/git/puredanger_forkjoin $ clojure process.clj
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at jline.ConsoleRunner.main(Unknown Source)
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: No matching field found: compute for class revelytix.federator.process.IFnTask (process.clj:0)
at clojure.lang.Compiler.eval(Compiler.java:5440)
at clojure.lang.Compiler.load(Compiler.java:5857)
at clojure.lang.Compiler.loadFile(Compiler.java:5820)
at clojure.main$load_script.invoke(main.clj:221)
at clojure.main$script_opt.invoke(main.clj:273)