It may seem counter-intuitive, but the brute-force method of trial division works very well in this situation. In addition to being relatively easy to implement, it offers several places where we can try to improve performance. We might select fewer candidates for testing, or we might minimize the number of tests performed on any specific candidate. Ideally we could do both.
As an initial optimization we observe that 2 is the only even prime; if we fix our candidate sequence to return 2 as an initial value and then generate only odd values, we cut the work roughly in half. We keep the number of divisibility tests per candidate low by using the fact that an integer N is prime if no prime less than or equal to the square root of N divides it evenly. The square root of two million is a little more than 1414, so each candidate only needs to be checked against a relatively small set of primes.
An implementation of this algorithm in Scala looks something like this:
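Something along the following lines captures the idea; the stream "primes" and the "pred" helper match the discussion below the timings, but treat this as a sketch rather than a verbatim listing.

package org.fencepost

object Euler10 {

  // Sketch of the approach described in this post; the original listing
  // may have differed in detail.

  // Candidate values: 2 up front, then only the odd numbers 3, 5, 7, ...
  def odds(n: Int): Stream[Int] = Stream.cons(n, odds(n + 2))
  val candidates = Stream.cons(2, odds(3))

  // A candidate n is prime if no prime <= sqrt(n) divides it evenly;
  // "primes" below supplies exactly those divisors.
  def pred(n: Int): Boolean =
    primes.takeWhile(p => p * p <= n).forall(n % _ != 0)

  // Self-referential stream: 2 is given, every later value comes from
  // filtering the odd candidates against the primes generated so far.
  lazy val primes: Stream[Int] =
    Stream.cons(2, candidates.tail.filter(pred))

  def main(args: Array[String]): Unit =
    println("Result: " + primes.takeWhile(_ < 2000000).map(_.toLong).sum)
}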
This code is quite performant on my commodity laptop:
[@varese src]$ time scala org.fencepost.Euler10
Result: 142913828922
real 0m18.876s
user 0m15.149s
sys 0m3.042s
Note that in this code the stream "primes" is a self-referential data type. An initial value of 2 is specified, but all subsequent values are determined by filtering the candidate pool using the primes defined (and returned) so far. Don't let the indirection of the "pred" function fool you; it's there only to make the code a bit easier to read. We could just as easily pass the logic defined in "pred" as an anonymous function to the filter call within "primes".
The definition of self-referential data types is entirely dependent upon the idea of lazy evaluation. If a self-referential type were evaluated immediately we'd quickly find ourselves in a recursive loop with no termination condition. Using lazy evaluation, however, values are only computed as they are needed. We provide a function to generate new values from previous values and enough initial values to provide the necessary input(s) to the first invocation of this function; lazy evaluation handles the rest.
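To see the pattern in isolation from the prime example, here's a minimal illustration (not part of the Euler solution itself): the Fibonacci numbers as a self-referential Stream in Scala. The two seed values are given explicitly, and zipping the stream with its own tail supplies each subsequent value on demand.

// Two seed values plus a rule for deriving new values from old ones;
// nothing past the seeds is computed until it is actually requested.
lazy val fibs: Stream[BigInt] =
  BigInt(0) #:: BigInt(1) #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }

fibs.take(10).toList  // List(0, 1, 1, 2, 3, 5, 8, 13, 21, 34)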
Clojure also includes extensive support for lazy evaluation in its sequence library; we should be able to use lazy seqs to implement something similar. A bit of work is required to get the scoping right, but eventually we come up with the following:
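The sketch below shows the shape of that solution, assuming a forward declaration of primes and a prime? predicate to resolve the scoping issue mentioned above; the original euler10.clj may have been organized differently.

;; euler10.clj -- sum of all primes below two million.
;; Sketch of the approach described above; the original may have differed.
(declare primes)

(defn prime? [n]
  ;; n is prime if no prime <= (sqrt n) divides it evenly;
  ;; the primes sequence itself supplies those divisors
  (every? #(pos? (rem n %))
          (take-while #(<= (* % %) n) primes)))

;; 2 is given; later values come from lazily filtering the odd
;; candidates against the primes generated so far
(def primes
  (cons 2 (filter prime? (iterate #(+ % 2) 3))))

(println (reduce + (take-while #(< % 2000000) primes)))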
The Clojure implementation isn't as fast as our Scala version, but it's still respectable; keep in mind that we're computing just under 150,000 primes here:
[@varese clojure]$ time clojure euler10.clj
142913828922
real 1m2.860s
user 0m51.556s
sys 0m6.327s
As it happens I've been working through the Ruby chapter of Bruce Tate's Seven Languages in Seven Weeks. Bruce strongly encourages experimentation with the various languages described in the text (and I could use the practice), so let's attempt to implement the same algorithm in Ruby. The language doesn't offer built-in support for self-referential data types or lazy evaluation, but we can get close with an implementation of Enumerable whose "each" method yields values generated by this algorithm. The code I came up with is the following:
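A sketch of that Enumerable-based approach is shown below, assuming a class name like Primes; the original Euler10.rb may have been structured differently.

# Euler10.rb -- sum of all primes below two million.
# Sketch of the approach described above; the original may have differed.
class Primes
  include Enumerable

  def initialize(limit)
    @limit = limit
  end

  # Yield 2 first, then every odd candidate that no previously
  # generated prime <= sqrt(candidate) divides evenly.
  def each
    primes = [2]
    yield 2
    n = 3
    while n < @limit
      root = Math.sqrt(n)
      if primes.take_while { |p| p <= root }.none? { |p| n % p == 0 }
        primes << n
        yield n
      end
      n += 2
    end
  end
end

puts Primes.new(2_000_000).inject(:+)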
I would ask the experienced Ruby hackers in the back to please keep the snickering down.
We test this implementation using JRuby, specifically RC2 of the upcoming 1.6 release. Why JRuby over the alternatives? The other languages we've considered here are JVM languages so it seemed sensible to remain on the JVM unless there was a compelling reason not to. And since JRuby is generally regarded as the fastest Ruby implementation going there was little incentive to leave.
[@varese ruby]$ time jruby Euler10.rb
142913828922
real 3m4.725s
user 0m17.733s
sys 2m41.426s
We've definitely violated Project Euler's "one minute rule" but we haven't strayed into the realm of completely unacceptable performance. That said, there's no doubt that we've taken a step backwards.
Self-referential data types can be a very concise way to represent a stream of data, even a stream of infinite length. They're just one example of how lazy evaluation can help you think about problems differently. If your language of choice supports this feature, it's worth your time to experiment with it.