Clojure optimizes recursive calls when the loop/recur construct is used, so we deploy it here. In addition, we make significant use of destructuring. Each input chunk of data is routed to the appropriate function via a sequence of cond expressions. These functions are implemented as let expressions which break the chunk into bytes (via destructuring) and then create bindings for the base64 characters representing those bytes by applying the relevant bit manipulations. The body of the let doesn't need to do much beyond combining these character bindings into an appropriate return value.
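A minimal sketch of that shape might look like the following. The names here (encode-chunk, b64-chars, encode) are invented for illustration; the actual code on GitHub differs in its details:

```clojure
;; The standard base64 alphabet, indexable by six-bit value.
(def ^:private b64-chars
  (vec "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"))

;; Encode one full three-byte chunk into four base64 characters.
;; Destructuring pulls the chunk apart; the let binds each six-bit
;; group's character via the relevant bit manipulations.
(defn- encode-chunk [[b1 b2 b3]]
  (let [c1 (b64-chars (bit-shift-right (bit-and b1 0xFC) 2))
        c2 (b64-chars (bit-or (bit-shift-left (bit-and b1 0x03) 4)
                              (bit-shift-right (bit-and b2 0xF0) 4)))
        c3 (b64-chars (bit-or (bit-shift-left (bit-and b2 0x0F) 2)
                              (bit-shift-right (bit-and b3 0xC0) 6)))
        c4 (b64-chars (bit-and b3 0x3F))]
    [c1 c2 c3 c4]))

;; loop/recur walks the byte seq; a cond routes each chunk to the
;; handler for its size (3, 2, or 1 remaining bytes).
(defn encode [bytes]
  (loop [bs bytes, acc []]
    (let [n (count (take 3 bs))]
      (cond
        (zero? n) (apply str acc)
        (= n 3)   (recur (drop 3 bs) (into acc (encode-chunk (take 3 bs))))
        (= n 2)   (let [[b1 b2] bs
                        c1 (b64-chars (bit-shift-right (bit-and b1 0xFC) 2))
                        c2 (b64-chars (bit-or (bit-shift-left (bit-and b1 0x03) 4)
                                              (bit-shift-right (bit-and b2 0xF0) 4)))
                        c3 (b64-chars (bit-shift-left (bit-and b2 0x0F) 2))]
                    (apply str (conj acc c1 c2 c3 \=)))
        :else     (let [[b1] bs
                        c1 (b64-chars (bit-shift-right (bit-and b1 0xFC) 2))
                        c2 (b64-chars (bit-shift-left (bit-and b1 0x03) 4))]
                    (apply str (conj acc c1 c2 \= \=)))))))
```

Because recur appears in tail position, the loop compiles to an iterative jump rather than a stack-consuming call, which is where the performance win comes from.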
This refactoring offers the following benefits:
- The code is a lot cleaner and much easier to read and understand
- Using the same techniques we should be able to easily implement a decoding function
- We should see some increase in performance
So, how'd we do? The following is fairly representative:
bartok ~/git/base64-clojure $ clojure fencepost/test/compare_base64.clj
"Elapsed time: 85.219348 msecs"
"Elapsed time: 896.538841 msecs"
"Elapsed time: 315.582591 msecs"
"Elapsed time: 215.516695 msecs"
The recursive encoding operation is just north of a third faster. We still take about twice as long as commons-codec (an improvement on our previous performance) but we're now over four times faster than clojure-contrib.
As mentioned above, a decoding function was also implemented using the same techniques described for the recursive encoding function. The performance improvement for that operation was just as stark:
"Elapsed time: 33.148648 msecs"
"Elapsed time: 1494.2707 msecs"
"Elapsed time: 268.112751 msecs"
Still nowhere near as fast as commons-codec, but we're a bit better than five times faster than clojure-contrib.
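The decoder follows the same pattern in reverse: destructure four characters, bind the recovered bytes in a let, and loop/recur over the input. A minimal sketch under the same caveats (decode-chunk and b64-vals are names invented here, and the real code differs):

```clojure
;; Inverse lookup: base64 character -> six-bit value.
(def ^:private b64-vals
  (into {} (map-indexed (fn [i c] [c i])
                        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/")))

;; Decode four base64 characters into up to three bytes, branching
;; on padding via cond just as the encoder branches on chunk size.
(defn- decode-chunk [[c1 c2 c3 c4]]
  (let [v1 (b64-vals c1)
        v2 (b64-vals c2)
        b1 (bit-or (bit-shift-left v1 2) (bit-shift-right v2 4))]
    (cond
      (= c3 \=) [b1]
      (= c4 \=) (let [v3 (b64-vals c3)
                      b2 (bit-or (bit-shift-left (bit-and v2 0x0F) 4)
                                 (bit-shift-right v3 2))]
                  [b1 b2])
      :else     (let [v3 (b64-vals c3)
                      v4 (b64-vals c4)
                      b2 (bit-or (bit-shift-left (bit-and v2 0x0F) 4)
                                 (bit-shift-right v3 2))
                      b3 (bit-or (bit-shift-left (bit-and v3 0x03) 6) v4)]
                  [b1 b2 b3]))))

;; loop/recur over the encoded string, four characters at a time.
(defn decode [s]
  (loop [cs (seq s), acc []]
    (if cs
      (recur (nthnext cs 4) (into acc (decode-chunk (take 4 cs))))
      (apply str (map char acc)))))
```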
As always, the code can be found on GitHub.