Wednesday February 25, 2009
No, it’s not speed. That’s just icing on the cake. The real reason is more obvious than that: library compatibility.
Despite the fact that the Ruby 1.9 preview releases have been out for months, there are quite a few gems that aren’t compatible. Rails works fine, though. Can you believe that? Rails is better than most of your gems.
Targeting Ruby 1.9 for your next major deployment (or your next application) is a perfect excuse to do some janitorial duty and make the broken gems 1.9-compatible. Don’t just report it on a site, fix it! Unless the gem does some hardcore thread scheduling in a C extension, chances are it’ll take you less than half an hour to make the tests pass on 1.9.
This post pretty much covers how you’ll fix 85% of the gems that don’t work out of the box on 1.9; you’ll even manage to do it before your coffee is ready.
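To give you a taste of how shallow most of these fixes are, here’s a sketch of two breakages you’ll run into over and over when running a 1.8-era gem’s test suite on 1.9 (these are my illustrations, not taken from the post linked above); both fixes run unchanged on 1.8.7 and 1.9:

```ruby
text = "one\ntwo\n"

# 1) String#each was removed in 1.9. Gems that iterate lines with
#    text.each { ... } raise NoMethodError there. String#each_line
#    does the same job on both versions:
lines = []
text.each_line { |line| lines << line.chomp }
# lines is now ["one", "two"]

# 2) String#[] with an integer index returned a character code
#    (a Fixnum) in 1.8, but returns a one-character String in 1.9.
#    If the gem actually wants the byte value, ask for it explicitly:
byte = text.bytes.first
# 111 (the byte value of "o") on both 1.8.7 and 1.9
```

That’s the flavor of it: mechanical, five-minute substitutions, which is exactly why a failing test suite on 1.9 usually isn’t worth panicking over.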
I know I’ll be launching my next application on 1.9. It’s the only way to move things forward.
But I can’t shake this feeling that something is heading down the wrong path in the Ruby community at large. Never mind the fact that most of us went oh shi- (that sound you’d make just before the universe blows up) when Ruby 1.9.1-p0 dropped and acted all surprised when nothing worked, despite the fact that it had been a long time coming, with several preview releases along the way. What’s more important is that the art of release management is slowly diminishing in certain parts of our little community. I happen to be of the persuasion that “just download whatever the current HEAD is” isn’t a proper deployment strategy. My clients’ sysadmins tend to agree, shuffling the responsibility for security updates back onto me if it isn’t an “official” package. And that’s cool, I can wear both hats: the developer wanting the most whizz-bang for the buck and the sysadmin wanting the most stability-bang for the buck; there’s a balance in there somewhere. But there’s still this bearded guy sitting on my shoulder telling me something is wrong.
You see, I have little interest in spending time maintaining my own personal patches, because that’s not going to scale over time and that’s not why I use open source to begin with. I have no problem trusting that to someone who does it properly: spends a little bit of time writing up a release announcement and packages it all up on a pseudo-official community site. The problem is when that person goes away. Not so much the “why” — we all move on eventually — but more the “how”. Today, there’s a certain amount of digital paperwork involved: giving away admin permissions on rubyforge/sourceforge/whateverforge, communicating that change, and so on. Of course, the post-modern developer might say “just find a git clone that works”. Except that means I have to do actual work (!): instead of putting that trust into the hands of the package maintainers, I have to go hunting for updated repositories, lightly audit the code to check whether they’re high on crack or not, and generally waste everyone’s time. There’s a reason Debian (and many other projects) has done this successfully over the years. Yes, you may complain that they split the Ruby packages into many. But at least you have somewhere to direct that frustration, to everyone’s benefit, instead of simply acting nonchalantly and maintaining it yourself — but only for as long as you can keep the steam up, thus degrading the system at large.
The rubyforge gems model may not be perfect, but damnit people, when there’s a gem update I know that it has actually been tested somewhat, and that it’s not just whatever random point HEAD happened to be at that moment, pushed by some random Joe who just bought TextMate.