Back in April I was anxious to jump into the Google Summer of Code. I was eagerly scanning all Ruby-related proposals and waiting for Google to announce which ones were approved. As a Ruby guy, I was very disappointed by the fact that not a single Ruby-related proposal was accepted.
Despite this initial setback, some awesomely clever people had a great idea: follow in Google's footsteps but focus on our growing Ruby world. Thus, the Ruby Summer of Code program was born.
Quoting from RubySOC’s page:
To continue Google’s great tradition of sponsoring open source development via summer student interns, Ruby companies, organizations and community members are getting together to fund Ruby Summer of Code. The project will work much the same way Google Summer of Code does—mentors and student interns, with mentors voting on which student projects get slots.
If you’ve been following my blog, you know what my MSc was about (I’ll blog about that later) and probably guessed what I wanted to work on during RubySOC—Rails’ performance. I promptly reached out to Yehuda Katz to debate some ideas. Carl Lerche joined us and soon the IRC channel was flooded with ideas. It would be great to improve form helpers. It would be even better to improve Active Record. Refactoring link helpers and the InstanceTag system would also have a significant impact. Oh, and did I mention parallel partial rendering? That would be insanely cool as well.
All these ideas lost their shininess when another came up. Who are we to define what should be improved? Instead, we should have a way of determining what is fast and what is slow on real world applications.
“Benchmarking CI” was promptly born. I felt eager to start and both Yehuda and Carl liked and helped form this idea. The project was outlined, proposed and… accepted. Its summary says it all (for a lengthy version, check this out):
This project consists of building an official full-stack benchmarking suite for Ruby on Rails. Each commit will automatically trigger a process where a remote machine starts a new server, runs the tests and reports back the results. As time goes by, it will be possible to watch the evolution of the framework’s performance and developers will be able to keep track of the impact their changes have. In its most basic form, it will bring a kind of performance-oriented continuous integration to the Ruby on Rails framework.
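The commit-triggered flow described in that summary could be sketched roughly like this. Everything here (class names, the 10% regression threshold, the stand-in workload) is illustrative, not the project's actual code:

```ruby
require "benchmark"

# Illustrative sketch of a benchmarking-CI loop: each new commit triggers
# a benchmark run whose result is stored and compared against the previous
# commit's result to spot regressions.
class BenchmarkCI
  Result = Struct.new(:sha, :seconds)

  def initialize
    @results = []
  end

  # In the real system the workload would be a full request cycle against
  # a test application; here it is whatever block the caller passes in.
  def run_for(sha, &workload)
    seconds = Benchmark.realtime(&workload)
    @results << Result.new(sha, seconds)
    report_regression(sha) if regression?
    @results.last
  end

  def regression?
    return false if @results.size < 2
    previous, current = @results[-2], @results[-1]
    current.seconds > previous.seconds * 1.10 # 10% slower than last commit
  end

  def report_regression(sha)
    warn "Possible performance regression introduced by #{sha}"
  end
end
```

The interesting part in practice is not the loop itself but keeping the measurement environment stable enough that a 10% difference actually means something.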
Rails 3 was nearing a stable release and it would be great to have this kind of CI available as soon as possible.
I was targeting Rails 3, and while it’s good to work with the cutting edge, it can also be a curse: the number of applications that supported this version was minimal. Despite this difficulty, I needed something to work with.
At first, I thought about using an artificial application. Yehuda and Jeremy Kemper soon stepped in and helped me realize how nonsensical that was: I was building a reliable platform for real-world applications, and measuring Rails’ performance with an artificial application would defeat that purpose. I needed something real.
If I needed to port something from Rails 2 to Rails 3, it had to be big. Something a lot of people used. One of Rails’ most popular applications. Already guessed? You’re right—Redmine. The time to code had arrived, as I was picking a somewhat big application and making it compatible with Rails 3.
A few weeks later, it was done. Redmine was mostly compatible with Rails 3 RC 1.
Remember railsbench? It was awesome back in Ruby 1.8. I needed something similar for Ruby 1.9. Thus, the gcdata patches for YARV were created, and Rails was changed to accommodate them. After applying those patches, one could reliably benchmark and profile Rails applications on Ruby 1.9.
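The gcdata patches expose fine-grained GC statistics, but the flavor of measurement they enable can be approximated on a stock Ruby 1.9+ with the standard `Benchmark` module and `GC.stat`. A rough sketch, not the patched API:

```ruby
require "benchmark"

# Rough approximation of a single benchmark run: wall-clock time plus the
# number of GC cycles triggered by the measured block. Stock Ruby API only;
# the gcdata patches provide much more detailed GC data than this.
def measure
  GC.start                        # start from a relatively clean slate
  gc_before = GC.stat[:count]
  seconds = Benchmark.realtime { yield }
  { seconds: seconds, gc_runs: GC.stat[:count] - gc_before }
end
```

Separating time spent in GC from time spent in application code is exactly the kind of breakdown a performance CI needs in order to attribute a regression correctly.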
Ruby-prof was also enhanced—support for Ruby 1.9 was improved and two additions were made: an awesome HTML hierarchical printer created by Stefan Kaes, as well as a YAML-based printer for automated processing.
Ad-hoc test applications and dummy
One of the main purposes of the benchmarking CI was the ability to add and remove test applications on the fly, completely effortlessly. For this to happen, a couple of things were needed:
- Test data must be automatically generated (preferably in a smart way)
- Performance tests based on the applications’ available routes should be automatically generated
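The second requirement amounts to code generation: given an application's routes, emit one performance test per route. A simplified, hypothetical sketch of the kind of Rails 3 performance test such a generator might emit (the real dummy_* generators are more involved):

```ruby
# Hypothetical sketch: turn a list of GET routes into the source of a
# Rails 3 performance test (ActionDispatch::PerformanceTest), with one
# test method per route.
def generate_performance_test(class_name, routes)
  methods = routes.map do |route|
    name = route.gsub(/[^a-z0-9]+/i, "_").gsub(/^_|_$/, "")
    name = "root" if name.empty?
    "  def test_#{name}\n    get '#{route}'\n  end\n"
  end.join("\n")

  <<-RUBY
require 'test_helper'
require 'rails/performance_test_help'

class #{class_name} < ActionDispatch::PerformanceTest
#{methods}end
  RUBY
end
```

Writing the returned string to `test/performance/` would give Rails' performance test harness something to run for every route, with no hand-written tests.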
Dummy was born. The whole package includes a Ruby library and three Rails generators.
Inspired by faker, dummy is a dummy data generator. Quoting its description:
Dummy can generate a lot of dummy data from company names to postal codes. While it allows you to specifically request a type of information, it can also try to determine what you’re looking for given a couple of parameters.
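As a rough illustration of the faker-style approach described above—small pools of plausible values, plus a guess-from-context mode—here is a toy sketch. This is not dummy's actual API; every name below is made up:

```ruby
# Toy sketch of a faker-style data generator: pick plausible values from
# small pools, and guess an appropriate generator from a column name.
module TinyDummy
  COMPANY_SUFFIXES = %w[Inc. LLC Ltd. GmbH].freeze
  WORDS = %w[acme globex initech umbrella hooli].freeze

  def self.company_name
    "#{WORDS.sample.capitalize} #{COMPANY_SUFFIXES.sample}"
  end

  def self.postal_code
    format("%05d", rand(100_000))
  end

  # Guess what kind of data a column wants from its name alone.
  def self.for_column(name)
    case name.to_s
    when /company|org/ then company_name
    when /zip|postal/  then postal_code
    else WORDS.sample
    end
  end
end
```

The guess-from-context mode is what makes fully automatic test data possible: a generator can walk a model's columns and fill each one without any per-application configuration.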
The rest of the package (dummy_*) consists of Rails generators which use dummy to automatically generate test data, routes and performance tests for Rails applications. Have a look at their GitHub pages for in-depth information on how to use them.
Automation and visualization
Everything was ready: a renowned application running on Rails 3, YARV and Rails’ profiling tools enhanced and communicating, ruby-prof improved with suitable printers, and test data, routes and performance tests being generated automatically.
What’s next? Building an application that manages test applications, triggers performance benchmarks, stores and analyzes the results, presents them in a meaningful way, and notifies the responsible developer(s) of any performance regressions. All of this by harnessing the previously improved and developed tools.
That was also done. While mostly complete, it is not ready for prime time just yet. This leads me to the next section.
For this project to be fully complete, a few things still need to be done:
- Improve the application’s performance when analyzing the results
- Notify the developer(s) responsible for the regression(s)
- Improve application’s design (and add charts)
- Make Redmine compatible with Rails 3 again—it lost its compatibility around Rails 3 RC 3.
Other tweaks that could be handy but aren’t critical for a final release:
- Increase ruby-prof’s precision (which sits at 2 decimal places, though I’d like to have 4 or 5 to work with)
- Add support for non-RESTful routes in dummy_routes
What happens now
The Ruby Summer of Code was amazing. I was given the opportunity to work on a subject I love alongside very clever folks from the Ruby land. I enjoyed every tiny bit of it and only regret having to finish my MSc thesis during the program, which consumed precious time I had to find elsewhere.
Sadly, RubySOC is over. My next goal is to bridge the gaps I enumerated above. After that, I’m aiming at tweaking this even further to be suitable for everyone, not just Rails itself, so that you can have a benchmarking CI in your own Rails application. I also want you to be able to benchmark it on Rubinius and JRuby, not just MRI and YARV. It’d be a nice Christmas gift, wouldn’t it? Well, I’m hoping to release it sooner than that.
Before ending this long post, I’d like to make some brief acknowledgements. First of all, to RubySOC’s sponsors—you all made this possible. To the entire Engine Yard team behind this, especially Leah Silber, for coordinating all of this in your free time. To the Ruby community for being so helpful and motivating. Last but not least, to my awesome mentor, Yehuda Katz, one of the craziest guys I know (at least through IM), for guiding me and making it fun throughout the whole project.
My “giving back” to the Ruby community does not end here. What can I say? I loved this experience and I want more of it. Keep an eye on my GitHub page for up-to-date news on this subject!
EDIT: As cleverly pointed out by Myron Marston in the comments, “dummy_routes” is not the best name to describe the gem. It was changed to dummy_urls to improve the name’s meaning and generate less confusion about what it does.