Distributed Ruby with the MagLev VM

The GemStone team made a splash with MagLev at RailsConf '08, where it attracted a fair dose of attention from attendees. Based on an existing GemStone/Smalltalk VM, it promised a number of inherent advantages: a 64-bit runtime, a JIT, years of VM optimizations, and built-in persistence and distribution layers. Since then the team has been making steady progress, which recently culminated in the announcement of the first public alpha. In fact, the project appears to be on track for 1.0 status later this year, alongside IronRuby, MacRuby, and Rubinius.

However, while the initial focus centered on the potential speed improvements offered by the VM, it is the persistence and distribution aspects of the runtime that make it stand out - if it happens to be faster, so much the better. Thanks to its Smalltalk heritage, MagLev offers integrated persistence (with ACID semantics) and distribution. In other words, you can treat MagLev as a distributed database that is capable of running Ruby code and of storing native Ruby bytecode internally. That's a mouthful, so let's see what it actually means.

MagLev VM: Features & Limitations

The goal of the GemStone team is to write as much of MagLev as possible in Ruby (the standard libraries, the parser, etc.), which has already resulted in some good collaboration and synergies with the Rubinius project. As of the first public alpha release, the project passes over 27,900 RubySpecs, features a pure Ruby parser (a slightly modified fork of ruby_parser), and runs RubyGems 1.3.5 out of the box. Popular gems such as rack, sinatra, and minitest all run unmodified, and there is even work underway on FFI support for C and Smalltalk extensions.
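For instance, installing and running one of those gems looks just like it would under MRI - a quick sketch, where maglev-gem is MagLev's bundled RubyGems wrapper and hello_app.rb stands in for any Sinatra application you'd like to run:

 $ maglev-gem install sinatra
 $ maglev-ruby hello_app.rb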

The end goal is full RubySpec compatibility, support for Ruby 1.9, and, of course, running Rails - a stripped-down version was demoed at RailsConf '09, but more work still needs to be done to make it fully compatible. The VM also ships with a MySQL driver, which means that you can use MagLev like any other Ruby runtime to power your applications, or you can leverage the built-in persistence APIs.

MagLev has a distinctly different VM architecture, which allows it to persist and share both code and data between multiple runtimes and execution cycles, all through a straightforward Ruby API! Incidentally, this is also the reason several ObjectSpace methods (garbage_collect, each_object) are not supported: enumerating the object space could mean retrieving gigabytes of persistent objects.

To get started, install MagLev via RVM, or follow the simple instructions on the wiki.
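If you take the RVM route, a working setup is only a couple of commands away (a sketch assuming RVM's maglev target; maglev start, covered below, boots the stone process the interpreter connects to):

 $ rvm install maglev
 $ rvm use maglev
 $ maglev start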

MagLev VM Architecture

The first thing you will notice about working with MagLev is that before you can run the interpreter, you have to launch the MagLev service itself (maglev start). It turns out that, unlike in other Ruby VMs, all of the core Ruby classes, as well as all other persisted code and data, actually live in a separate "stone" process. The VMs ("gems") connect to the stone and retrieve all of their data from this service: Ruby classes are stored as bytecode on the stone server, transported to the local interpreter via shared memory for local connections or via an optimized binary protocol for remote connections, and then compiled down to native machine code.

This is how object persistence is made possible in MagLev: the stone server is a standalone process that acts as a database for your Ruby bytecode! The added advantage is that the stone server supports full ACID semantics, which means that multiple processes can interact with the same repository and share state, objects, and code. A simple example of sharing data between multiple runs:

 # persist a string in the stone server
 Maglev::PERSISTENT_ROOT[:hello] = "world"
 Maglev.commit_transaction

 # $ maglev-ruby -e 'p Maglev::PERSISTENT_ROOT[:hello]'
 # > "world"

That covers a simple key-value example, but MagLev is also capable of transparently persisting entire object graphs, without any data-modeling impedance mismatch:

graph_node = <<-EOS
  class Graph
    def initialize; @nodes = []; end
    def push(node); @nodes.push node; end
  end
  class Node; end
EOS

# commit Graph class's bytecode into stone server
# - can also load external file: load 'class.rb'
Maglev.persistent { eval graph_node }

# build a simple in memory graph
g = Graph.new
g.push Node.new
g.push Node.new

# commit in-memory graph to stone server
Maglev::PERSISTENT_ROOT[:data] = g
Maglev.commit_transaction

############################
# in different process / VM:
graph = Maglev::PERSISTENT_ROOT[:data]
puts graph.inspect
# > #<Graph:0xa205f01 @nodes=[#<Node:0xa202d01>, #<Node:0xa202c01>]>

Instead of using an ORM to map Ruby classes to rows or documents in a database, you can simply store the objects directly in the stone server and interact with them from multiple processes, all without any extra conversions or additional infrastructure. The only caveat is that you have to build your own indexing structures to power search and lookups beyond the key-value semantics. The KD-Tree example is a great showcase of the power and flexibility this can enable.
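To make that concrete, here is a minimal, hypothetical sketch of such an index - a persisted hash keyed by an attribute, built on the same PERSISTENT_ROOT API shown above (the User class and its fields are invented for this example):

# define a class whose instances we want to look up by email
Maglev.persistent do
  eval <<-EOS
    class User
      attr_reader :email, :name
      def initialize(email, name); @email, @name = email, name; end
    end
  EOS
end

# a plain persisted Hash acts as the index: email => User
index = Maglev::PERSISTENT_ROOT[:users_by_email] ||= {}
index["alice@example.com"] = User.new("alice@example.com", "Alice")
Maglev.commit_transaction

# in a different process / VM:
# $ maglev-ruby -e 'p Maglev::PERSISTENT_ROOT[:users_by_email]["alice@example.com"]'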

MagLev at Scale & in Production

While the "stone" server persists all the core Ruby classes and any additional data, the VM's ("gems") are not free. According to the documentation, each VM takes ~30Mb of memory at boot time and starts growing from there. On the other hand, the shared memory communication is extremely efficient, which means that hundreds of VM's can be run in parallel on a single box. GemStone claims production deployments of their Smalltalk VM on 64-128 core machines with up to 512GB RAM, running hundreds of concurrent VM's, and achieving over 10K transactions per second (TPS) on their "stone" servers - impressive numbers!

With the new Smalltalk VM (3.0) on the horizon and years of production optimization and research behind it, MagLev is definitely a project to watch. The GemStone team has recently started a blog, opened up a Google group, and is producing some great content to help Rubyists leverage the platform. What is missing now are the deployments, case studies, and new frameworks that can leverage all of these features - though, I'm sure, those will come.
