Tuesday, May 29, 2012

Faster RSpec on JRuby with guard-jruby-rspec

The biggest complaint I hear about JRuby is how long it takes to run tests or specs.  I feel your pain.  That's why I started hacking on guard-jruby-rspec.


This Guard extension allows you to run all of your specs on JRuby without the initial start-up cost.  It does not run a subset of your specs like guard-rspec does (yet), and it does not trigger a run when a file changes (yet).  Instead, this extension loads all of your application files in advance and reloads them individually when they change.  That way, when you run RSpec, the JVM is already running and your files have already been required.
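
Getting it wired up follows the usual Guard conventions.  Here's a minimal sketch; the watch patterns are illustrative, so point them at wherever your application code and specs actually live, and check the gem's README for the exact plugin name and options:

    # Gemfile
    group :development do
      gem 'guard-jruby-rspec'
    end

    # Guardfile
    guard 'jruby-rspec' do
      watch(%r{^spec/.+_spec\.rb$})    # reload specs when they change
      watch(%r{^(app|lib)/.+\.rb$})    # reload application files when they change
      watch('spec/spec_helper.rb')
    end

With guard running in one terminal, the JVM stays warm and your application files stay loaded between spec runs.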

Here's a short video of me using it:



There's still a lot to do.  For instance:
  • Autorun specs like guard-rspec does (I want to integrate with guard-rspec so as not to duplicate all of its logic).
  • Allow for extra RSpec options.
  • Fix the way Guard uses stdin so it's not flaky on JRuby.
  • Work out the kinks in the gj-rspec script so that specs can be run in the main terminal.
More to come...

Friday, May 4, 2012

Zero-Downtime Deploys with JRuby


One of the most common questions I get from readers of my book is about zero-downtime deployment. That is, how do you deploy new versions of a JRuby web application without missing users' requests?

To answer this question, let's first look at how MRI-based deployments achieve zero downtime.  When a process running an MRI web server needs to load a new version, we shut it down, push the new code, and start it up again.


This leaves a gap where no requests can be handled.  But most MRI deployments use a pool of application processes, which provides a nice way around this problem.  While one process is reloading, we can rely on the other processes to service requests.  The result is a "rolling restart" in which the re-deployment of each process is staggered.


In practice this is a difficult dance to coordinate.  Technologies like Passenger make it a lot easier, but under the covers it's still complicated.

JRuby deployments are different, though.  Instead of having a pool of processes, we deploy our applications to a single JRuby server process, which (ideally) never gets shut down.  The result is that our deployment has just two steps: undeploy and deploy.


However, this still leaves a gap where requests can be dropped, and we don't have other server processes that can take over while we're updating.  To fix this, we simply need to reverse the order of the steps!

A zero-downtime JRuby deployment requires that we fully deploy the new version of the application before we undeploy the old version.  Thus, we will have two versions of the app running at the same time, but only one will handle requests.

The good news is that Trinidad essentially does this for us.  All we have to do is redeploy our application. It works because deep within the bowels of Trinidad is a method that looks like this:

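Roughly speaking (this is a simplified paraphrase, not Trinidad's actual source, and the names are illustrative), the shape of it is:

    # Simplified paraphrase of the idea: the new web context is fully created
    # and started before the old one is stopped, so there is never a moment
    # with nothing able to serve requests.
    def takeover(new_context)
      old_context = @context
      @context = new_context     # start routing requests to the new version
      old_context.stop           # only then shut the old version down
      old_context.destroy
    end
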


In the takeover method, Trinidad is creating a new context for the next version of the application while the old version continues to run.  Then it swaps those contexts in one step.  The result is effectively zero-downtime deployment.

Unfortunately, not all JRuby web servers do this for us, so we may have to script the process ourselves.  Let's take TorqueBox for example.  When we deploy a new version of a TorqueBox application to a running TorqueBox server, it completely undeploys the app before loading the new version.

Getting around this is pretty easy when TorqueBox is running in a cluster (i.e. multiple TorqueBox instances across multiple physical or virtual servers).  We simply need to deploy the new version of the application to one node at a time.  When the old version is undeployed on a node, the Apache mod_cluster proxy will stop sending it requests.
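
In script form, that rolling deploy amounts to something like the following.  The host names, paths, and the crude sleep are all illustrative, so substitute whatever mechanism you already use to push code to your nodes:

    # Illustrative only: adjust hosts, paths, and the wait for your setup.
    for node in app1.example.com app2.example.com; do
      scp myapp.knob deploy@$node:/opt/torquebox/jboss/standalone/deployments/
      ssh deploy@$node "touch /opt/torquebox/jboss/standalone/deployments/myapp.knob.dodeploy"
      sleep 60   # give the node time to finish deploying before moving on
    done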

If you're really paranoid, you can manually disable a node prior to deploying the new version of your application by invoking the disable() operation on the server's jboss.as modcluster MBean.  The screenshot below shows me doing this from the JMX console.



In my book, I show how to invoke an MBean operation programmatically from a Rake task. That way, you can easily work this step into your deployment scripts.
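
A rough sketch of what that looks like with JRuby's Java integration is below.  The JMX service URL and the MBean name are assumptions (and connecting over remoting-jmx requires the JBoss client jar on the classpath), so check the values your own server exposes, for example in the JMX console shown above:

    # Rakefile (sketch): disable this node in mod_cluster before a deploy.
    # The URL and ObjectName below are placeholders; verify them against
    # your server before relying on this.
    require 'java'

    java_import 'javax.management.remote.JMXConnectorFactory'
    java_import 'javax.management.remote.JMXServiceURL'
    java_import 'javax.management.ObjectName'

    desc "Disable this node in the mod_cluster proxy"
    task :disable_node do
      url = JMXServiceURL.new('service:jmx:remoting-jmx://localhost:9999')
      connector = JMXConnectorFactory.connect(url)
      begin
        server = connector.getMBeanServerConnection
        mbean  = ObjectName.new('jboss.as:subsystem=modcluster')  # assumed name
        server.invoke(mbean, 'disable', nil, nil)                 # no-arg operation
      ensure
        connector.close
      end
    end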

If you're not running TorqueBox in a cluster, the process is a little more complicated.  Rather than just dropping your Knob file into the deployment directory or relying on Capistrano to create a deployment descriptor, you'll need to create a custom deployment descriptor for each new version of your application.  An example might look like this:

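Something along these lines (the file name, paths, and context are illustrative; the keys follow TorqueBox's external deployment descriptor format):

    # myapp-v2-knob.yml
    application:
      root: /opt/apps/myapp-v2        # the new version's code
    web:
      context: /myapp-v2              # deploy under a versioned context
    environment:
      RACK_ENV: production
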


When the YAML file is dropped into the $JBOSS_HOME/standalone/deployments directory, it will deploy the new version of the application under the myapp-v2/ context without undeploying the old version of the application (assuming it is not also using the myapp-v1/ context).  Then you need to configure your proxy to point to myapp-v2/ instead of myapp-v1/.  The resulting process looks like this:



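For the proxy switch itself, assuming a plain Apache mod_proxy front end (rather than mod_cluster), it can be as simple as repointing the pass-through and reloading Apache.  The host, port, and contexts here are illustrative:

    # Before: ProxyPass / http://localhost:8080/myapp-v1/
    ProxyPass        /  http://localhost:8080/myapp-v2/
    ProxyPassReverse /  http://localhost:8080/myapp-v2/
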
In my experience, if you really care about zero-downtime deployment, then you are probably running a redundant cluster anyway, so the need to orchestrate the context switching on a single node is unusual.

In any case, it's certainly possible to achieve zero-downtime deployment with JRuby.  And in most cases, it's a lot easier than with MRI.

Thursday, March 29, 2012

Clustering TorqueBox

I've created a new screencast to go with the second beta release of my book.

In this video, I demonstrate how TorqueBox scheduled jobs can be run in a cluster without duplicating the job across nodes.



To run the examples I've shown in the video, you'll need to install the torquebox-server gem to your JRuby runtime:

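It's a one-liner with RubyGems; pin the version to match your TorqueBox release if you need a specific one:

    $ jruby -S gem install torquebox-server
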

Here are the commands and code I run in the video:
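
If you just want the gist without watching the whole video, the core of it is a scheduled job marked as a cluster-wide singleton.  A minimal sketch (the class name, schedule, and file layout are illustrative, not necessarily what appears in the video):

    # app/jobs/newsletter_job.rb
    class NewsletterJob
      def run
        puts "sending newsletters"   # fires on only one node, thanks to singleton: true
      end
    end

    # config/torquebox.yml
    jobs:
      newsletter:
        job: NewsletterJob
        cron: '0 */5 * * * ?'   # every five minutes
        singleton: true         # run on only one node in the cluster

Start each node with the clustered configuration (e.g. torquebox run --clustered) and the job runs on a single member of the cluster rather than on every node.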






Thursday, March 15, 2012

Talkin' about JRuby

I gave a talk today at hsv.rb on the subject of my book. The video, slides and code samples are below. Enjoy!



Deploying with JRuby


Saturday, March 10, 2012

Deploying with Trinidad on Heroku

Heroku is an excellent platform for JRuby deployment, but it only provides indirect support for JRuby through its Java stack.  As a result, deployment on Heroku requires some XML, a little knowledge of Maven, and a bunch of hacky-feeling tricks.  That's why I chose not to include a detailed discussion of it in my book.  Instead, I thought it would make a nice blog post.

I've created the simplest possible Rack application that can be deployed on Heroku with Trinidad and JRuby <https://github.com/jkutner/jruby-trinidad-heroku>. The key elements are these:
  • pom.xml - this is a Maven config file that can be copied as is (in most cases).
  • Jemfile - a renamed Gemfile so that Heroku doesn't think this is an MRI app.
  • Procfile - a Heroku config file with instructions for running the app (see the sketch after this list).
  • script/jruby - a modified jruby executable for the Heroku platform.
  • config.ru - the application itself (can be replaced with any Rack app).
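
For reference, the Procfile is just one line telling Heroku how to boot the server.  Something in this spirit (the exact flags depend on the repo, so treat this as a guess at the shape rather than a copy-paste):

    web: script/jruby -S trinidad -p $PORT
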
Once your application has these components, you'll need to create the Heroku application by running this command from the application root:

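At the time of writing, that command looks roughly like this; the buildpack URL in particular changes over time, so check Heroku's current documentation:

    $ heroku create --stack cedar --buildpack https://github.com/heroku/heroku-buildpack-java.git
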

The first thing you'll notice is that we are creating an app on the Heroku Cedar stack.  This provides the Java platform support.  We're also using a buildpack that takes care of setting up the essentials for any Java application on Heroku (most importantly, making sure the Maven dependencies are available at runtime).

Then you can deploy with this command:

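That's just an ordinary push to the Heroku git remote:

    $ git push heroku master
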

There is a slightly outdated post by Chris Howie on deploying Sinatra on a Trinidad, JRuby, Heroku stack.  There is also an example application called Punchy that has been ported to Heroku by Mathew Rodley.  Heroku has changed its configuration since these were created, but the basics hold true.  They are worth reading.

In addition to lacking native JRuby support, Heroku also lacks native support for Trinidad, so it's unclear how many of the Trinidad extensions will function.  This is unfortunate because the Trinidad extensions are a big part of why this platform is superior to almost every MRI-based server.

In the future, I will investigate Trinidad extensions on Heroku and follow up with another post.


Welcome!

This is the official blog for the book Deploying with JRuby by Joe Kutner. I'll be making regular posts here that address topics associated with the book, but not necessarily covered in the book.

The most common questions I've received about the book are: "does it cover Heroku?" and "does it cover Google AppEngine?"  I've chosen not to provide detailed coverage of these platforms in the book, but they are still excellent choices for JRuby deployment.  Instead, I hope to post some tutorials on how to use them on this blog.

I'll also be posting updates on new platforms, and some less mainstream server technologies that can be used for deploying JRuby.

Stay tuned for more!