julik live

Why I am still using Jeweler

Rolling Ruby gems is an artistic affair. Fashions change. Some time ago it was considered fine to edit your gemspec.rb by hand. Somewhat later a trend of using Hoe emerged. A little while later Bundler came into the picture, and it too had a command to generate gems. Wind forward a few more years - and the fashion is now to roll with gem new.

The funny thing about it is that people tend to jump on each new tool and migrate their projects from one gem setup/maintenance system to another. It is just like with haute couture or testing frameworks - it is cool as long as you are using the one in fashion at the moment. Get distracted for just a couple of months - and you are no longer in the fashion club, but a retrograde old bureaucrat stuck in his old ways of doing things.

However, not many people focus on what is way more important in the story of making Ruby gems and making them actually shine - stability. Let's take an example. With my modest portfolio I clock over 20 gems in my decade of doing Ruby for fun and profit. Repeatability is way more important to me in this process than any fashions currently being flung around. In my view, as a maintainer of a whole salvo of gems, I need a few very simple conditions to be met by whatever tool I use for rolling gems:

  • There should be a command I can easily memorise to initialise a blank slate gem, and it should have the right setup in place.
  • All tests/specs for the gem should run with $ bundle exec rake, no exceptions.
  • I should be able to do a release, including tagging and pushing it to the gem server, with $ bundle exec rake release.
  • I should not have to edit any files except the changelog/version to roll such a release. A simple git history / gitignore is sufficient.
  • A gem set up with the tool of choice has to run on Travis, and for most of my gems I still support Ruby 1.8.7.

Having this process reduces the friction of releasing gems. When I want to release a library, absolutely the last thing I want to worry about is how to streamline the workflow of doing those things. Simply because if I do, each new gem that I release or update is going to obtain a release process of its own - it's just like building plugins for expensive post-production applications all over again. Every new version you roll, for any new version of its dependencies, becomes a unique snowflake - one has a Gemfile, another one is managed by a manual gemspec.rb, yet another one assembles itself from git... and it goes on and on, until you have to actually check what kind of release pipeline du jour was in effect when you were last twiddling with a certain gem.

The longevity of many of my projects also owes much to a stable release pipeline - tracksperanto is no exception, with a running history of regular updates over its 5-year existence.

I've only gone through changes in the gem release pipeline twice. I started out with manual gemspec.rb editing and a hodgepodge of Rake tasks releasing to the Rubyforge of yore. The first switch was from that to Hoe. The second was from Hoe to Jeweler, because I was unable to make Hoe function on Travis with the vast array of Ruby versions I wanted, and because I got fed up with the manual MANIFEST file, where I always forgot to add a file or two - whereby a release often went out without an essential part being included.

So far Jeweler, with all of its possible problems and limitations and an extra dependency, has given me the most precious thing in the gem release/maintenance process - I don't have to think about that process. For any single gem that I maintain, I know that brake followed by brake release is going to do the update I am after - and I can concentrate on the code that my library is offering instead of the fashions of the build pipeline.

That is way more important, and way more precious to me than knowing that I am following the latest trend in a volatile universe of open source. I am ready to pay the price of being called old-fashioned and having an extra 10 lines in my Rakefile, along with a dependency practically none of my users will even download. As a corollary, it means that your pull request to my projects proposing to remove Jeweler and go about some more bikeshedding is likely to be rejected. Not because I am a jerk, but because Jeweler supports a repeatable process I have developed muscle memory for, and changing that muscle memory is the last item on my priority list.
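
For the record, those extra Rakefile lines amount to roughly the following - a minimal sketch based on Jeweler's standard setup, with the gem metadata being purely illustrative:

require 'jeweler'

Jeweler::Tasks.new do |gem|
  # gem is a Gem::Specification - fill in whatever your gem needs
  gem.name = "my_library"
  gem.summary = "One-line summary of what the gem does"
  gem.homepage = "http://github.com/youruser/my_library"
  gem.authors = ["Your Name"]
end

# Provides the rake release task that tags and pushes to rubygems.org
Jeweler::RubygemsDotOrgTasks.new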

And I suggest you, dear reader, do the same - pick a rubygem release/bootstrapping process that works for you, verify it, trust it and stick to it, instead of joining the bikeshedding fest - whatever that process might be. What your actual gem does is way more important than what you are using to roll it.

Suspects: Веб-стройка

On the benefit of laptop stands

When you look at pictures from trendy startup offices you often see laptop riser stands.

One might think that you would do that to make your desk look neater. Or that it is for better posture. Or just to have yet another neat Apple-y looking apparatus in your vicinity for a more hipster look.

However, there's more to it than meets the eye.

More on this...

Suspects: Mac

Checking the real HTTP headers with curl

curl, the seminal Swiss Army knife of HTTP requests, is quite good at many things. Practically everybody knows that you can show headers using the -I flag:

$ curl -I http://example.com/req

However, this is plain wrong in a subtle way. See, -I sends a HEAD request, which is sometimes used to probe the server for the last modification date and such. Most of the time, though, you want to check a real GET as opposed to a HEAD. Also, not all web frameworks will automatically implement a HEAD responder for you in addition to a GET responder. It's also downright misleading, because with quite a few proxies the headers you are going to get from the server will be different for a GET as opposed to a HEAD.

To perform a "real" GET, hit curl with a lowercase i as opposed to uppercase:

$ curl -i http://logik-matchbook.org/shader/Colourmatrix.png

However, this will pollute your terminal with horrible binary-encoded strings (which is normal for a PNG after all)... There are ways to do a full GET and only show the headers, the easiest being this:

$ curl -s -D - http://logik-matchbook.org/shader/Colourmatrix.png -o /dev/null

Here -s silences the progress meter, -D - dumps the received headers to standard output, and -o /dev/null discards the body. Works a treat, but is long and hard to memorize. I put it in my .profile as headercheck:

headercheck() { curl -s -D - "$1" -o /dev/null; } # a function, since aliases cannot use positional arguments

So anytime you want to check headers with a real GET request, hit:

$ headercheck http://url.that/you-want-to-check

and you are off to the races.

Suspects: Веб-стройка

OWC Data Doubler caveat with 10.9 Mavericks

Shoved a spanky Samsung SSD into my MBP using a Data Doubler from OWC - easy to confuse with the DiskDoubler of yore, by the way.

So, pried open the laptop, put the SSD into the doubler plate, screwed everything back in. Turns out that due to the somewhat inferior SATA port on my particular MacBook Pro model, the disk would never work properly in that position. The Mavericks installer just stopped with the "Installation failed" message, leaving the partition for Disk Utility to repair.

The fix is surprisingly head-on. Apparently the SATA port used by the optical drive is not so good after all - you've got to have your 6G SSD on the primary SATA port, where the old hard drive is. Just swap the old HD with the new SSD, so that you have the SATA 3G device in your doubler plate, on the not-so-good SATA port destined for the optical drive.

So off goes the lid again, the SSD and the old hard drive swap places, and the Mavericks install proceeds as it should, rewarding me with a pristine desktop.

SSD bliss.

Suspects: Mac

Tracksperanto is fully Ruby 2.0 ready

Just added Ruby 2.0.0 to Tracksperanto's Travis config and the results are in. Our compatibility matrix now looks like this:

(screenshot: the Travis build matrix, all green)

So now we also happily run 2.0.0 in addition to 1.8.7 and 1.9.x. Feels nice to stay compatible and current at the same time.

This also means that most of the Tracksperanto dependencies are likely compatible with 2.0.0 as well.

For modern development Javascript indeed is a s̶h̶i̶t̶ disappointing language

UPDATE: A much nicer helping of JS bile from Armin Ronacher, where I subscribe to every word except the Angular parts (since I skipped Angular entirely, because I am not smart enough to understand it).

I'm sorry, but the Crockford arguments do not cut it.

Javascript is so bad, on so many levels - it's not even funny. This is why I am so surprised everyone jumped on the Node bandwagon with such excitement - yes, Node is faster than Ruby, but it's unfathomable to me that someone in his right mind would want to rewrite his app in Node without being 100% focused on the evented model.

JS has inherent, deep issues. They are not solvable by making a new ECMA spec. They are not solvable by wrapping a better syntax around it like CoffeeScript does. They are not solvable by standardizing on a require implementation or by introducing classes. There is an ECMA language with classes - it's called ActionScript, and it's just as shitty as JS itself. These warts just are - and as long as the masses accept them as the status quo, it's going to be exactly like the PHP framework landscape to this day: everyone and their mother spending man-years trying to create an infrastructure of tools around a shit language that will resist those efforts every single second.

Let me explain why I say that JS is awful. Of course, there are nice things in it - but the problem is that their utility is disputable. Prototypal inheritance, for example, is severely limited in utility, because all it offers you are function overrides. The "everything is a function" approach, while gimmicky (look ma, I can also call this!), is not particularly useful either - because a function is not an object, not a data structure that can carry data.

And then the real warts begin. Let's simply enumerate:

JS has callable attributes

This is a shitty design decision which most languages made right at the start. In retrospect, it's difficult to blame the designers, because they might have had performance issues - and, to boot, if you are not used to a message-passing language, the whole idea of "some attributes are callable and some are not" seems absolutely legitimate.

Hobjects are unusable for stable keys

The mixup between objects and hashes is also a very bad idea, because it defies the premise that objects can have metadata on them - which allows you to establish a rudimentary type system, or at least some kind of introspection.

Fobjects are unusable for type systems

This one is a biggie: an object does not carry any type information. Even in the Ruby world, where everything is happily quacking like a duck, we often use Object#class to get information about an object. A fairly standard procedure of styling an HTML element after a certain model object, for example:

    <div class='<%= model.class %>' id='<%= [model.class, model.id].join %>' >…

is impossible in JS because the only types offered are 'Object', 'function' and primitives. It's awful in all the ways Java is awful, and then some.

Null everywhere

Trying to use a constant with a wrong name by mistake?

 MyApp.SYNC // should have been MyApp.SYNC_FETCH

Nothing will happen. Since objects are hashes, and the language provides zero facilities for constants, our constant with the wrong key will be undefined, and will happily bleed into the callee - and the error, if any, will surface far away from the actual mistake.
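
For contrast, here is what the same mistake does in Ruby, where constants are real - the module and constant names are of course made up:

module MyApp
  SYNC_FETCH = "sync-fetch"
end

MyApp::SYNC # raises NameError: uninitialized constant MyApp::SYNC - right at the reference site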

Callback hell

JS lacks a decent facility for deferreds. It's created for evented execution, which is fundamentally not multithreaded. Your calls are interspersed with event callbacks - when your code is idling, callbacks are executed. However, the language has no simple facility for doing this:

var res = await AjaxReq.fetch('/long-request')
// because you are waiting for a result, here the runtime would
// schedule event handling, DOM redraws and whatever else it can
// squeeze in while you await
res.name // this will only be executed once res is available

Of course the JS community is doing exactly what the PHP community has been doing all along - trying to fix a bad language with even worse tooling. How? By using more callbacks and, on a good day, callback chains:

when(<ERMAGHERD RIDICULOUSLY LONG CALLBACK>
 // 48 lines of code down
).then(<HOLYSHIT WHEN WILL THIS BE OVER>
// 23 lines down
).then(<GIVE ME SOME COFFEE ALREADY>)

In a normal situation this would have been fixed simply by adding a wait primitive to the language, which would schedule the events while the result is still being fetched.
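
To show that such a primitive is not rocket science, here is a toy sketch of one in Ruby, built on fibers - the ToyLoop class and its await helper are invented for this example:

require 'fiber'
require 'thread'

# A toy event loop with a blocking-looking await: the calling fiber is
# suspended while the work happens elsewhere, then resumed with the result.
class ToyLoop
  def initialize
    @results = Queue.new
  end

  # Run the block on a thread, suspend the caller until the result arrives
  def await(&work)
    caller_fiber = Fiber.current
    Thread.new { @results << [caller_fiber, work.call] }
    Fiber.yield # the loop is free to do other things in the meantime
  end

  def run(&program)
    main = Fiber.new { program.call(self) }
    main.resume
    while main.alive?
      fiber, result = @results.pop
      fiber.resume(result) # hand the result back, execution continues inline
    end
  end
end

ToyLoop.new.run do |io|
  res = io.await { sleep 0.2; "long-request result" }
  puts res # only executed once res is available - and no callbacks in sight
end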

The proliferation of callbacks leads to a programming style where everything is async, but as a matter of fact 80 percent of the code you write has to be synchronous. Simply because 80 percent of programming is about doing one motherfucking thing with another motherfucking thing, and you need both of them to complete the action.

Terrible exception handling

Exception handling in JS is terrible. It exists, but in a rudimentary form - you can see the call stack (which is going to consist of a dozen anonymous functions and one function that is named, and that on a good day), and you can see an error message. Since I am not bashing on the DOM, I will only mention the two most often encountered errors:

    undefined is not a function
    cannot call property 'xyz' of undefined

Both of these stem from the fact that fu(ck)bjects in JS have no defined methods - they only have properties. The JS runtime will never have a way to know whether your fubject is supposed to have a method that can be called, or a property of a certain name - it will just assume it is a missing hash key. It just doesn't know any better!

I remember people in the Ruby community complaining about Ruby's backtraces and error messages being not good enough - and Rubinius went to address this. You know where error messages are particularly fucked up? In fucking Javascript. Because the two absolutely basic, crucial exceptions that you want to get and see every single time - NameError (when you are addressing a class-ish or constant-ish something which is not defined) and NoMethodError - are simply impossible with the sloppy way the language is built.
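
For comparison, this is what those two exceptions look like in Ruby, where they come for free (the method and class names are of course made up):

"ohai".frobnicate # NoMethodError: undefined method `frobnicate' for "ohai":String
Frobnicator.new   # NameError: uninitialized constant Frobnicator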

And yes, functions are nice, and prototypes are nice and all that - but if you want to build a JS app of any reasonable complexity, you will be bound to write code like this:

var cv = Marionette.CollectionView.extend({
  itemView: MyApp.Views.WidgetView
});

What is the error that you will get if MyApp.Views.WidgetView is not defined yet? undefined is not a function, of course! Where will you get it? When the CollectionView tries to instantiate your item view. Not when you define the variable cv, no no! It will explode later, and rest assured - you will be tearing your hair out for a couple of minutes until you see where the error came from.

And why? Simply because everything is a hash and the language is incapable of doing any kind of introspection.

It absolutely perplexes me that people who have used Ruby are moving to Node and calling it a good tool. Node might be great. The language running in it is shit though, and until this gets at least marginally better I will do without Node just fine, thank you.

I can understand that some people wanted to escape the MRI infrastructure by going Node, because - you know - learning Japanese is hard. If you don't speak Japanese, your chances of making a noticeable improvement in MRI approach zero - you will not be treated nicely.

JS is shit, and if we care at least a tiny bit we should do everything in our power to either sunset it, or to move it into the 'assembler for the web' realm where it would be a vehicle for decent languages that can actually get the job done without driving you up the wall. Being nice will not help it, and CoffeeScript is not radical enough. Support your local transpiler initiative today.

Update: nice to know that some people are considering alternatives

Suspects: Веб-стройка

Checklist for custom form controls in your web app

Recently I had the privilege of reviewing a web app that is directly relevant to my work field. It is in fact an iteration on the system we are actively using at the company. After having a poke here and there, I was surprised to find that the custom controls epidemic was not over for some people.

All popup menus on the system (all select elements) were implemented as custom HTML controls with custom event handling. I hate this kind of thing not because it doesn't look like a native control - this is in fact secondary. Most of the apps I use daily (Flame, Smoke, Syntheyes, Nuke) do not use the native UI controls at all.

What I hate is a substantial reduction in useful behavior compared to a native control. There is a load of stuff humanity has put into menu implementations over the past 40 years. Every custom select implementation is bound to reinvent the wheel, in a bad way.

Imagine you are all giggly and want to make a custom select element - a menu. Or your boss does not get UX and lives in the art-directorial LSD fuckfest of the late nineties, and absolutely requires a custom control. Ok then, roll up your sleeves.

More on this...

Suspects: Веб-стройка

Running .command files on OSX in other interpreters than sh

We all know that on OSX you can create so-called .command files. In a nutshell, they are renamed shell scripts. When you make them executable and double-click them, they will run by themselves within Terminal.app.

A little less known fact about them is that you can actually script them in any language of your choosing. For distribution reasons it's better to stick to the versions of things available on OS X by default, obviously. You do this by modifying the shebang, just like you would for any other shell script.

For example, a .command file that runs via Ruby would look like this:

#!/usr/bin/ruby
puts "Hello from #{RUBY_VERSION}"

Note that I am not using /usr/bin/env here to get at the right version of Ruby, since that turned out to play up on 10.6 somehow. I stick to the system ruby instead.

Suspects: Mac

Gracefully handling NFS mounts on OSX laptops

For the last few years, all the work I do has been equally split between Flame systems running on Linux and a couple of MacBook Pros running OSX. At Hectic we make use of NFS to make the same servers available to all of our client workstations, which are an equal mix of Linux and Windows (Macs do not use our NFS facilities much). One of the problems I have encountered has been configuring the NFS mounts on my laptops for graceful timeout.

Now, in an ideal world, NFS is designed to handle unmounts gracefully, of course. That is, the client is supposed to suspend or fail on IO operations when the requisite mount is not found. Apple, however, in its infinite wisdom, designed a slightly different system for its OSX Server infrastructure. The mounts that you used to designate in the Directory Services application or, more recently, in Disk Utility (explained in tutorials like this one) are managed by automountd, Apple's daemon controlling the directory mounts. This is the daemon that was originally designed to provide automounted user home directories and other handy things in the /Network folder on the root drive. In Mountain Lion this feature has been removed, but people try to use automountd nevertheless, as explained in a post here.

However, automountd has been designed for limited applications - like computer labs at colleges and universities. It is not capable of detecting offline servers or stale mounts, and even with all the settings tweaked it will never time out on an NFS mount. In practice, this means that if I have some NFS mounts defined on my laptop and I take the laptop somewhere where the NFS servers cannot be reached, the following will happen:

  • All navigation services dialogs in all applications will beachball when trying to access the stale mounts
  • All applications having documents open off of these servers will beachball
  • Due to Lion and Mountain Lion's automatic reopening of last-used documents, apps that support this feature will beachball again on startup.

This is not pretty. What you actually want is a nice dialog like this:

(screenshot: the "connection interrupted" dialog)

when the mounts are gone and then to be able to proceed with your business.

And this turned out to be remarkably simple to achieve. The easy solution is to not use automountd at all, but to mount manually. I do it with a Ruby script that I run in the morning once my laptop is up and running on the company network. When I come home, the OS falls back to the natural behavior of simply unmounting the stale shares, instead of having automountd hammer on them indefinitely.
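
The script itself is nothing special. Here is a minimal sketch of the approach - the server name, the share paths and the port-probing check are all stand-ins for your own setup:

#!/usr/bin/ruby
require 'socket'
require 'timeout'

SERVER = "fileserver.local" # your NFS server
SHARES = {"/exports/projects" => "/Volumes/projects"} # remote share => local mountpoint

# A simplistic reachability probe: can we open a TCP connection to the NFS port?
def reachable?(host, port = 2049, seconds = 2)
  Timeout.timeout(seconds) { TCPSocket.new(host, port).close }
  true
rescue StandardError
  false
end

abort "#{SERVER} is not reachable - not mounting anything" unless reachable?(SERVER)

SHARES.each do |remote, mountpoint|
  system("mkdir", "-p", mountpoint)
  # -o resvport is usually needed when an OSX client talks to a Linux NFS server
  system("mount", "-t", "nfs", "-o", "resvport", "#{SERVER}:#{remote}", mountpoint)
end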

Suspects: Mac

Messages versus slots - two OOP paradigms

I've had this discussion with Oleg before, but it still keeps coming up again and again. See, a lot of people are pissed at Ruby because you cannot do this:

something = lambda do
  # execute things
end

something() # won't work - you need something.call

and also cannot do this:

meth = some_object.method(:length)
meth(with_argument) # won't work either - needs meth.call(with_argument)

However, this comes from the fact that Ruby is a message-passing language as opposed to a slot language. What I tend to call a slot language is something that assumes objects are nothing more than glorified hash tables. The keys then become either instance variables or, if they store a function, "callables" (as Python calls them).

So you might have an object Car that has:

 [ Car ] -> [weight, price, drive()]

all on the same level of the namespace. Languages that operate on the "slots" paradigm usually have the following properties:

  • It is very easy to rebind a method to another object - just copy the value of the slot
  • You can iterate over both ivars and methods in the same flow
  • Encapsulation is a decoration since everything in fact still is in the same table
  • You can call variables directly, since the local namespace is also a glorified hashmap with keys for variable names and content that is callable.

For various reasons I dislike this approach. First of all, I tend to look at objects as actors that receive messages. That is, the number of internal variables stored within an object should not be visible from the outside, at least not in a formal way. Imagine a way to figure out the length of a string in a slot language:

str.len # is it a magic variable?
str.len() # is it a method?
# or do we need a shitty workaround which will call some kind of __length__?
len(str) # WTF does that do??

There is ambiguity whether the value is callable or not, and this ambiguity cannot be resolved automatically because this expression is not specific in a slot language:

# will it be a function ref?
# or will it be the length itself?
m = str.len

and therefore languages like Python will require explicit calling all the time. This might mean entering parens, or doing

m.do_stuff.call()
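
In a message-passing language the ambiguity simply does not exist, because a bare reference is always a message send, and reifying a method into a value is the explicit, rare case. In Ruby:

str = "ohai"
str.length # always a method call, never a variable peek - returns 4
m = str.method(:length) # getting at the Method object has to be explicit
m.call # => 4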

Most OOP (or semi-OOP) systems known to us in the classic sense (even CLOS, as far as I know) are slot systems - Io, Python and Javascript are all slot-based. This gives one substantial advantage: not having to specify explicitly that you want to move a block of code around as a value. So you can do, for example, in JS:

 // boom! we transplanted a method
 someVar.smartMethod = another.otherMethod;

All the rest, however, amounts to inelegant kludges. First of all, since a method can be used like a value for moving code from place to place, you always need explicit calling. Second, your slots in the object do not distinguish between values and methods, so you have to additionally question every value in the slot table about whether it is a method or a variable. Also, in slot languages you need to specify whether an instance variable is private or not (since normally everything is in one big hashtable anyway).

On the other side we have message-passing languages, like Smalltalk and Ruby. There, anything you retrieve from an object passes through a method call whether you want it or not, because there are two namespaces - one for ivars and the other for methods.

You know that ivars are off-limits from the outside of the object, and you know that everything exposed to the outside world is callable, by definition. You also get the following benefits:

  • everything only ever goes through getters and setters, so you just don't have to think about them all the time
  • you tend to avoid using objects as glorified hashmaps
  • you get a lot of smart syntax shortcuts
  • refactoring a property into a getter/setter pair is a non-issue to the consumer code

Message-passing languages also adhere to the Uniform Access Principle (UAP): the caller cannot tell a stored attribute from a computed one.
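
A minimal Ruby sketch of the principle - the class is made up, but note that the call site never changes:

class Temperature
  # Version 1: celsius is a stored attribute
  attr_reader :celsius

  def initialize(celsius)
    @celsius = celsius
  end
end

# Version 2: celsius is now computed from another representation -
# every consumer of temperature.celsius keeps working unchanged
class Temperature
  def initialize(fahrenheit)
    @fahrenheit = fahrenheit
  end

  def celsius
    (@fahrenheit - 32) * 5.0 / 9
  end
end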

When implementing your next programming language, first make it a message-passing system, and if you need performance improvements, make internal opaque shortcuts to bypass the getter/setter infrastructure when direct properties are accessed. You will spare a lot of people a lot of useless guessing and typing.

Suspects: Веб-стройка

Aspirin not included.