Ruby EventMachine - The Speed Demon

Ruby EventMachine is a framework which, depending on who you talk to, either generates a lot of excitement (Evented Mongrel, Analogger, Evented Starling, etc.) or its fair share of criticism. In part, the FUD stems from the mismatch between the language being used and the underlying implementation - namely, the Reactor pattern. The Reactor is a concurrent programming design pattern for handling service requests delivered concurrently to a service handler by one or more inputs. The service handler demultiplexes the incoming requests and dispatches them synchronously to the associated request handlers.
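
To make the pattern concrete, here is a minimal sketch of a reactor loop in plain Ruby, with IO.select acting as the demultiplexer. It is only an illustration of the idea under simplifying assumptions (a toy echo server, minimal error handling), not how EventMachine is implemented internally:

require 'socket'

# Toy reactor: one loop, one demultiplexer (IO.select), synchronous dispatch
server  = TCPServer.new("0.0.0.0", 9000)
clients = []

loop do
  # Demultiplex: block until one or more registered sockets are readable
  readable, = IO.select([server] + clients)

  readable.each do |io|
    if io == server
      clients << server.accept       # new connection: register it
    else
      begin
        data = io.readpartial(4096)  # dispatch synchronously to the handler
        io.write("echo: #{data}")
      rescue EOFError
        clients.delete(io)           # client went away: unregister and close
        io.close
      end
    end
  end
end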

Why Reactor to begin with?

Steeped in the tradition of forking / threaded web-servers, I found myself rather surprised when I joined one of the research projects at the University of Waterloo a couple of years back: we were benchmarking different web-server architectures, and the top performers were all event-driven servers. As I pestered everyone with questions, I quickly realized why: in an environment with hundreds of thousands of requests a second, forking and the context switching associated with thread management become prohibitively expensive (fork is the worst performer, as it copies the parent process's memory every time). By comparison, a tight and highly optimized event loop really shines when it comes to performance under heavy load.

EventMachine and Reactor pattern

Listening to a couple of recent presentations on different Ruby application server alternatives, I've come across a consistent comment: "Evented servers are really good for very light requests, but if you have a long-running request, it falls down on its face." Technically valid, but in practice not necessarily true. Let's start with the simplest example:

require 'rubygems'
require 'eventmachine'
require 'evma_httpserver'

class Handler  < EventMachine::Connection
  include EventMachine::HttpServer

  def process_http_request
    resp = EventMachine::DelegatedHttpResponse.new( self )

    sleep 2 # Simulate a long running request

    resp.status = 200
    resp.content = "Hello World!"
    resp.send_response
  end
end

EventMachine::run {
  EventMachine::start_server("0.0.0.0", 8080, Handler)
  puts "Listening..."
}

# Benchmarking results:
#
# > ab -c 5 -n 10 "http://127.0.0.1:8080/"
# > Concurrency Level:      5
# > Time taken for tests:   20.6246 seconds
# > Complete requests:      10

Here we've built the simplest possible HTTP web-server using EventMachine. To test it, we ran ab (Apache Bench) with the concurrency set to 5 (-c 5) and the number of requests set to 10 (-n 10). Total time is ~20 seconds, which makes sense: as expected, the Reactor processes each request synchronously, effectively overriding our concurrency setting down to 1. Hence, 10 requests at 2 seconds each, and the math works out!
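
As a quick sanity check on that arithmetic:

# Single-threaded reactor, work done inline with sleep:
requests     = 10
time_per_req = 2   # seconds each request blocks the reactor
concurrency  = 1   # sleep blocks the event loop, so ab's -c 5 is moot
puts requests * time_per_req / concurrency   # => 20 seconds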

EventMachine: Reactor with lightweight concurrency?

The synchronous nature of the Reactor pattern is the bottleneck in the previous example, and this is where EventMachine deviates from the purist pattern. Specifically, it also provides a mechanism to dispatch a request to a pool of green Ruby threads (20 by default):

require 'rubygems'
require 'eventmachine'
require 'evma_httpserver'

class Handler  < EventMachine::Connection
  include EventMachine::HttpServer

  def process_http_request
    resp = EventMachine::DelegatedHttpResponse.new( self )

    # Block which fulfills the request
    operation = proc do
      sleep 2 # simulate a long running request

      resp.status = 200
      resp.content = "Hello World!"
    end

    # Callback block to execute once the request is fulfilled
    callback = proc do |res|
      resp.send_response
    end

    # Let the thread pool (20 Ruby threads) handle request
    EM.defer(operation, callback)
  end
end

EventMachine::run {
  EventMachine::start_server("0.0.0.0", 8081, Handler)
  puts "Listening..."
}

# Benchmarking results:
#
# > ab -c 5 -n 10 "http://127.0.0.1:8081/"
# > Concurrency Level:      5
# > Time taken for tests:   4.21405 seconds
# > Complete requests:      10

Once again, 10 requests in total, concurrency set to 5, but this time we are done in ~4 seconds! This means our new server is processing requests in parallel, much like your typical, best-of-breed Mongrel. Not quite the Reactor pattern EventMachine advertises, but a very powerful feature nonetheless; now you can see where all the FUD is coming from.
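
The pool size itself is tunable. Assuming the stock EM.threadpool_size accessor (and a workload sketched here with a plain sleep), you can resize the pool before the first call to EM.defer creates it:

require 'rubygems'
require 'eventmachine'

EM.threadpool_size = 50   # default is 20; set this before the first EM.defer

EM.run do
  EM.defer(
    proc { sleep 2; "Hello World!" },       # operation runs on a pool thread
    proc { |result| puts result; EM.stop }  # callback runs back on the reactor thread
  )
end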

Deferrable: Concurrency with no threads

Borrowing heavily from Twisted, a Python event-driven network programming framework, EventMachine also includes a Deferrable module, which in some cases allows us to get all the benefits of concurrent processing without any threading overhead! The Deferrable pattern allows you to specify any number of Ruby code blocks (callbacks or errbacks) that will be executed at some future time when the status of the Deferrable object changes. How might that be useful? Well, imagine that you're implementing an HTTP server, but you need to make a call to some other server in order to fulfill a client request.

require 'rubygems'
require 'eventmachine'
require 'evma_httpserver'

class Handler  < EventMachine::Connection
  include EventMachine::HttpServer

  def process_http_request
    resp = EventMachine::DelegatedHttpResponse.new( self )

    # query our threaded server (max concurrency: 20)
    http = EM::Protocols::HttpClient.request(
      :host    => "localhost",
      :port    => 8081,
      :request => "/"
    )

    # once download is complete, send it to client
    http.callback do |r|
      resp.status = 200
      resp.content = r[:content]
      resp.send_response
    end
  end
end

EventMachine::run {
  EventMachine::start_server("0.0.0.0", 8082, Handler)
  puts "Listening..."
}

# Benchmarking results:
#
# > ab -c 20 -n 40 "http://127.0.0.1:8082/"
# > Concurrency Level:      20
# > Time taken for tests:   4.41321 seconds
# > Complete requests:      40

No threading, and yet we still finish 40 requests in ~4 seconds; the only limitation is the threaded server we built in the previous example, which maxes out at 20 concurrent requests. Magic, almost! This is the beauty of EventMachine: if you can structure your worker to defer, or to block on a socket, the Reactor loop will keep processing other incoming requests. When the deferred worker is done, it signals success and the reactor sends back the response. The sky is the limit here: no Ruby threads, no synchronous processing. For a great example of this pattern, take a close look at Dnsruby.
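
If you want to build your own deferrable workers instead of relying on the bundled protocol clients, the recipe is short: mix in EM::Deferrable, register callback / errback blocks, and call succeed (or fail) when the work completes. A minimal sketch, where the LookupJob class and its timer-driven "work" are made up purely for illustration:

require 'rubygems'
require 'eventmachine'

# A made-up worker whose result arrives asynchronously
class LookupJob
  include EM::Deferrable

  def start
    # Pretend the answer arrives later (e.g. off a socket); a timer stands in here
    EM.add_timer(2) { succeed("Hello World!") }  # fires all registered callbacks
    self
  end
end

EM.run do
  job = LookupJob.new.start

  job.callback do |result|   # invoked when the job calls succeed(result)
    puts "Got: #{result}"
    EM.stop
  end

  job.errback do |error|     # invoked if the job calls fail(error)
    puts "Failed: #{error}"
    EM.stop
  end
end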

eventmachine.zip - Server implementations

EventMachine is a speed demon!

For more information on EventMachine make sure to check out the documentation and a few examples. Likewise, Bruce Eckel's article on Twisted is well worth a read if you're interested in learning more about the Deferred pattern. Last but not least, kudos to Francis Cianfrocca for the great framework, and for taking the time to help me wrap my head around it! Next time you hear someone shoot down an event-driven server, please (politely) correct them!

Ilya Grigorik is a web ecosystem engineer, author of High Performance Browser Networking (O'Reilly), and Principal Engineer at Shopify.