julik live

The sad state of human arrogance today

Lately an article has been making the rounds on Hacker News in which eevee laments his tribulations while installing Discourse.

I've read it cover to cover, and I must admit it struck a few chords with me, especially because I have been writing Ruby web apps for a good decade now, with Rails and without. Most of the points made in the article are valid to some extent, but I profoundly disagree with the overall tone and the message.

First let's look at the narrative:

  • Ruby is not a primary language eevee is familiar with (he's primarily a Python/Rust guy from what I could see, since I had been reading his other articles just a few days prior).
  • He does not have a lot of experience deploying a modern Ruby stack in production for web apps.
  • Through his experience with Python he probably misses the fact that the Python deployment story is just as painful as Ruby's at this point. Moreover, some horrible relics of Ruby packaging (gemsets) still exist at large in the Python world (virtualenv).
  • He picked a project which explicitly pushes the envelope in its usage of fancy tools, and thus indeed wants its mother and the kitchen sink for dependencies. This is not because the developers like fancy tools for their own sake, but because for a modern web app you do need search, you do need a job queue, you do need push messaging.
  • Exactly because the developers of Discourse (whom I admire greatly) realise that the dependency story in Discourse is effin hard, they suggest, loudly and clearly, deploying it via images or containers. Eevee chose neither of these approaches, and facing the consequences of that decision proved to be a world of pain (exactly as predicted).
  • He has a complex Linux configuration, which to me (from my uneducated perspective at least) looks like a Linux build accrued over the years with all sorts of manual tweaks (90% of them probably having to do with X11 and sound, of course) and migrated over and over - as a result of which you indeed end up with a 32-bit runtime on top of a 64-bit kernel. This, for tools that assume a default configuration in most situations, is a recipe for butthurt.
  • He also had to use a PostgreSQL extension which does not ship as part of the default package.

Instead of raising my hands in violent agreement with him, or rebutting his post point by point, I would like to look at the actual reasons why this stuff happens.

More on this...

Quitting VFX, one year on

So this might come as a surprise to some people reading this – if anyone still reads this blog at all. Around Christmas 2013 I decided that I had had enough. Enough of being pleasant. Enough of the wonderful Aeron chairs, and enough of the Flame. I was just sick to my stomach.

Throughout my life I have been one of those blessed few who never had to work at a place they hated. An essential benchmark of the quality of life for me is that I wake up in the morning and feel like going to work. When that feeling is gone, and the fact of going to work becomes dreadful instead -- that is the clearest indicator for me that it is time to move on.

This in fact turns a major page in my life. I have dedicated 9 years of it to the Flame, just like I wanted to. I have seen the demise of the SGI Tezros (I will forever remember what $hinv stands for and what the Vulcan Death Grip is). I have done some great compositing work, from set extensions to beauty to morphing. I have worked with some of the greatest, most wonderful art directors in Amsterdam – it did turn out that most of them have grey hair, a beard, a belly and an impeccable sense of humor - and, first and foremost, artistic vision.

I worked at an amazing company which gave me so much I will be forever grateful. Of course there were ups and downs, but I was incredibly lucky when I stumbled into it by accident in my search for an internship.

I have enjoyed some very interesting projects, mastered Shake and Nuke, and became one of the best matchmovers in town. I have beta-tested new versions of Flame and had the pleasure of meeting the wonderful, predominantly Francophone team of its makers. I've met a mentor who transformed me from a rookie into a good all-around Flame op. I got pretty much everything I wanted.

For a month I even lived at the office, when the situation with my landlady became unbearable. I worked my year as the night shift Flame operator - so that chevron is earned too.

The logical next step would have been London. The problem with that: I was not prepared to dump my residence status in the EU just for the privilege of working on the next fab rendition of Transformers IX. You see, most people in Western countries are equal, but people with less fortunate passports are somewhat less equal than others. So moving to the UK would have meant a new alien status, a new residence permit, a new work permit, and all the other pleasures such moves entail.

Also, I got tired of client work and all of its facets that tend to drive you to exhaustion. It was extremely challenging and very rewarding for me - especially for the overall introvert I have always been - but at a certain point I started losing my grip. The guy that once was Julik started to become some other person. Some other person I didn't like seeing in the mirror when I brushed my teeth. Some person who had to develop reactions to outside stimuli that I did not condone. In short - the beautiful honeypot I was so eager to devour started to consume me.

So after having dedicated 9 years to visual effects and Flame, it was time to turn the page. Since I had been programming for pretty much all of those years (I started about a year before this blog first went online), becoming a Ruby developer seemed par for the course. I love programming, I love Ruby, and I've made some great stuff using it. Tracksperanto has become the de-facto go-to piece of kit for many, and even though I never got code contributions for it – most of the post-production bunch speaks Python only – I was able to maintain it with excellent longevity, and it saved many a matchmove project, both for myself and for many others.

Work/life balance

This has more to do with company culture, but in the past years I've learned that when you let yourself be pressed into a skewed work/life balance you have to understand the benefits and the costs. Essentially, when doing all-nighters, participating in sprints and crunch times, or keeping your mobile phone on at night - think very carefully about what you are giving up, who benefits from it and whether the compensation is adequate. Overtime is not free - you are paying for it with your health, with your family life, with your friendships and love and care and affection. Being on call is not free.

You absolutely should do it when doing the work you love - but consider whether what you are getting for it is worth it. When in doubt, re-read Loyalty and Layoffs.

The game here is detecting when the overages and pressures stop being of benefit to you and only stay beneficial to your employer or client. It is an extremely fine line, and most good managers are good at masking it from view. It took me a full 15 years of working in different roles to develop a feeling for when enough is enough, and even then I had to switch careers to restore the balance.

Don't think that development is foreign to crunch time (read some jwz if in doubt), but it is highly likely that you will be confronted with it sooner in a VFX career. Remember - how much of it you will take, and what for, is your decision. Nobody - not your employer, not your manager, not your parents - is responsible for the decisions you make about it.

More on this...

Matchmover tip: obtaining the actual field of view for any lens using a survey solve

"The best camera is the one that's with you" – Henri Cartier-Bresson

For a long time we have used a great technique at HecticElectric for computing lens field of view (FOV) values. When you go out to film something, you usually record the millimeter values of the focal length of the lenses you use (or "thereabouts" if using zooms instead of primes). This approach, however, is prone to error - because 3D software thinks in terms of an abstract field of view angle, not in terms of the combination of a particular focal length and a particular sensor/film size.
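
For reference, this is the standard pinhole relation the 3D packages work from - a minimal sketch, where the function name and the sensor widths are purely illustrative:

    // Horizontal FOV in degrees from focal length and film back (sensor)
    // width, both in millimeters - the standard pinhole camera relation.
    function horizontalFov(focalLengthMm, sensorWidthMm) {
      var radians = 2 * Math.atan(sensorWidthMm / (2 * focalLengthMm));
      return radians * (180 / Math.PI);
    }

    horizontalFov(35, 36.0); // the same 35mm lens on a full-frame back: ~54.4 degrees
    horizontalFov(35, 23.6); // ...and on an APS-C sized back: ~37.3 degrees

The same millimeter value yields wildly different angles depending on the film back - which is exactly why recording focal lengths alone is not enough.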

So we devised a scheme to reliably compute the field of view from shots of specifically chosen and prepped objects, which we call shoeboxes. This yields very accurate FOV values for any lens (including zooms at specific settings) and can be used with any camera/lens combination, including an iPhone.

More on this...

Building Nuke plugins on the Mac, Christmas 2014 edition

As 2014 is folding to a fanfare close, we here at julik live would like to take you, dear reader, back to 2009. To recap what happened in 2009:

  • The Icelandic government and banking system collapse.
  • Albania and Croatia are admitted to the North Atlantic Treaty Organization (NATO).
  • The Treaty of Lisbon comes into force.
  • The Large Hadron Collider is reactivated.
  • Apple releases Mac OS X 10.6 Snow Leopard.

That last fact is of the utmost importance to us. We are going to time-travel to 2009 to see what life is like in a world where everybody uses systems so old they cannot be obtained legally. Welcome to the world of high-end VFX software.

See, I made a Nuke plugin back in the day, because I really wanted to and I needed it. As fate would have it, I also really wanted to share it, because it could be useful for many other VFX artists using SynthEyes - which I still consider the greatest, fastest and most affordable 3D tracking package ever.

However, that plugin development thing puts me way out of my comfort zone. See, this is C++ we are talking about - but if only that were all. We are also smack in the middle of DLL hell because, as you might imagine, a plugin for someone else's software is a DLL that you have to build (or a .dylib, or a .so - a shared library for short). Now, 5 years have passed and I have somehow managed to keep a build pipeline for the plugins intact - for quite a long while, for example, I specifically delayed upgrading from 10.5 just to avoid everything related to this dependency hell. Dependency hell in the world of building shared libraries for closed-source hosts is defined as:

  • you have to build against system libraries that match the host's in both ABI and version
  • you have to build with exactly the same compiler
  • you have to build against system headers that match the libraries
  • you have to build with non-system libraries that match the system libraries. Fancy some Boost perhaps? Or some fancy OpenEXR library? Or some image compression libraries? If you do, we need to talk. Trust me, after going through this ordeal a few times you will reconsider their use and appreciate the zen of doing without.
  • you (mostly) have to build with non-system libraries that match the versions used by your host application

That leads to an interesting development cycle. At the beginning of the process, if you are lucky and starting early, you will have a machine that carries somewhat-matching dependency versions for all of the above (or the dependency versions are obtainable). At the start you will invariably struggle with obtaining and installing all that stuff, since by definition it is going to be obsolete already, but that is mostly manageable. I remember that I didn't have to install anything special back when I was building on 10.5, at the beginning of SyLens' lifetime.

More on this...

Suspects: Mac

Why I am still using Jeweler

Rolling Ruby gems is an artistic affair. Fashions change. Some time ago it was considered fine to edit your gemspec.rb by hand. Somewhat later a trend of using Hoe emerged. A little while later Bundler came into the picture, and it too had a command to generate gems. Wind forward a few more years - and the fashion is now to roll with gem new.

The funny thing about it is that multiple people tend to jump on each trend and migrate their projects from one gem setup/maintenance system to another. It is just like with haute couture or testing frameworks - it is cool as long as you are using the one in fashion at the moment. Get distracted for just a couple of months - and you are no longer in the fashion club, but a retrograde old bureaucrat stuck in his old ways of doing things.

However, not many people focus on what is way more important in the story of making Ruby gems and making them actually shine - stability. Let's take an example. With my modest portfolio I clock over 20 gems in my decade of doing Ruby for fun and profit. Repeatability is way more important to me in this process than any fashion currently being flung around. In my view, as a maintainer of a whole salvo of gems, I need a few very simple conditions to be met by whatever tool I use for rolling gems:

  • There should be a command I can easily memorise to initialise a blank-slate gem, and it should put the right setup in place.
  • All tests/specs for the gem should run with $bundle exec rake, no exceptions.
  • I should be able to do a release, including tagging and pushing it to the gem server, with $bundle exec rake release.
  • I should not have to edit any files except the changelog/version to roll such a release. A simple git history / gitignore is sufficient.
  • The gem set up with that tool of choice has to be runnable by Travis, and for most of my gems I still support Ruby 1.8.7.

Having this process helps me by reducing the friction of releasing gems. When I want to release a library, absolutely the last thing I want to worry about is how to streamline the workflow of doing so. Simply because if I do, each new gem that I release or update is going to acquire a release process of its own - just like building plugins for expensive post-production applications all over again. Every new version you roll, for any new version of its dependencies, becomes a unique snowflake - one has a Gemfile, another is managed by a manual gemspec.rb, yet another assembles itself from git... and it goes on and on, until you have to actually check what kind of release pipeline du jour was in effect when you last twiddled with a certain gem.

The longevity of many of my projects - tracksperanto being no exception, with a running history of regular updates over its 5-year existence - also owes much to a stable release pipeline.

I've only gone through changes in the gem release pipeline twice. The first switch was from manual gemspec.rb editing and a hodgepodge of Rake tasks to Hoe. The second was from Hoe to Jeweler, because I was unable to make Hoe function on Travis with the vast array of Ruby versions I wanted, and because I got fed up with the manual MANIFEST file, where I always forgot to add a file or two.

So far Jeweler, with all of its possible problems and limitations and an extra dependency, has given me the most precious thing in the gem release/maintenance process - the fact that I don't have to think about that process. For any single gem that I maintain, I know that brake followed by brake release is going to do the update I am after - and I can concentrate on the code that my library is offering instead of the fashions of the build pipeline.

That is way more important, and way more precious to me, than knowing that I am following the latest trend in the volatile universe of open source. I am ready to pay the price of being called old-fashioned and having an extra 10 lines in my Rakefile, along with a dependency practically none of my users will ever download. As a corollary, it means that your pull request to my projects proposing to remove Jeweler and go about some more bikeshedding is likely to be rejected. Not because I am a jerk, but because Jeweler supports a repeatable process I have developed muscle memory for, and changing that muscle memory is the last item on my priority list.

And I suggest that you, dear reader, do the same - pick a rubygem release/bootstrapping process that works for you, verify it, trust it and stick to it, instead of joining the bikeshedding fest - whatever that process might be. What your gem actually does is way more important than what you are using to roll it.

Suspects: Веб-стройка

On the benefit of laptop stands

When you look at pictures from trendy startup offices you often see laptop riser stands.

One might think it is to make your desk look neater. Or that it is for better posture. Or just to have yet another neat Apple-y looking apparatus in your vicinity for a more hipster look.

However, there's more to it than meets the eye.

More on this...

Suspects: Mac

Checking the real HTTP headers with curl

curl, the seminal Swiss Army knife of HTTP requests, is quite good at many things. Practically everybody knows that you can show response headers using the -I flag:

$curl -I http://example.com/req

However, this is wrong in a subtle way. See, -I sends a HEAD request, which is sometimes used to probe the server for the last modification date and such. Most of the time, though, you want to check a real GET as opposed to a HEAD. Also, not all web frameworks will automatically implement a HEAD responder for you in addition to a GET responder. It's also downright misleading, because with quite a few proxies the headers you get from the server will be different for a GET as opposed to a HEAD.

To perform a "real" GET, hit curl with a lowercase i as opposed to uppercase:

$curl -i http://logik-matchbook.org/shader/Colourmatrix.png

However, this will pollute your terminal with horrible binary-encoded strings (which is normal for a PNG, after all)... There are ways to do a full GET and only show the headers, the easiest being this:

$curl -s -D - http://logik-matchbook.org/shader/Colourmatrix.png -o /dev/null

Works a treat, but is long and hard to memorise. I put it in my .profile as headercheck:

# a function rather than an alias, since aliases cannot take positional arguments
headercheck() { curl -s -D - "$1" -o /dev/null; }

So any time you want to check headers with a real GET request, hit:

$headercheck http://url.that/you-want-to-check

and you are off to the races.

Suspects: Веб-стройка

OWC Data Doubler caveat with 10.9 Mavericks

Shoved a spanking new Samsung SSD into my MBP using a Data Doubler from OWC - easy to confuse with the DiskDoubler of yore, by the way.

So: pried open the laptop, put the SSD into the doubler plate, screwed everything back in. Turns out that, due to the somewhat inferior SATA port wired to the optical bay on my particular MacBook Pro model, the disk would never have worked properly there in the first place. The Mavericks installer just stopped with an "Installation failed" message, leaving the partition for Disk Utility to repair.

The fix is surprisingly head-on. Apparently the SATA port used by the optical drive is not so good after all - you have got to have your 6G SSD on the primary SATA port, where the old hard drive is. Just swap the old HD with the new SSD, so that you have a SATA 3G device in your doubler plate, on the not-so-good SATA port destined for the optical drive.

So off goes the lid again, the SSD and the old hard drive swap places, and the Mavericks install proceeds as it should, rewarding me with a pristine desktop.

SSD bliss.

Suspects: Mac

Tracksperanto is fully Ruby 2.0 ready

Just added Ruby 2.0.0 to Tracksperanto's Travis config, and the results are in. Our compatibility matrix now looks like this:

[Travis compatibility matrix: all green]

So now we also happily run on 2.0.0 in addition to 1.8.7 and 1.9.x. Feels nice to stay compatible and current at the same time.

This also means that most of Tracksperanto's dependencies are likely compatible with 2.0.0 as well.

For modern development Javascript indeed is a s̶h̶i̶t̶ disappointing language

UPDATE: A much nicer piece of JS bile from Armin Ronacher, where I subscribe to every word except the Angular parts (since I skipped Angular entirely, because I am not smart enough to understand it).

I'm sorry, but the Crockford arguments do not cut it.

Javascript is so bad, on so many levels - it's not even funny. This is why I am so surprised everyone jumped on the Node bandwagon with such excitement - yes, Node is faster than Ruby, but it's unfathomable to me that someone in his right mind would want to rewrite his app in Node without being 100% focused on the evented model.

JS has inherent, deep issues. They are not solvable by making a new ECMA spec. They are not solvable by wrapping a better syntax around it like CoffeeScript does. They are not solvable by standardizing on a require implementation or by introducing classes. There is an ECMA language with classes - it's called ActionScript, and it's just as shitty as JS itself. These warts just are - and as long as the masses accept them as the status quo, it's going to be exactly like the PHP framework landscape to this day: everyone and their mother spending man-years trying to create an infrastructure of tools around a shit language that will resist those efforts every single second.

Let me explain why I say that JS is awful. Of course, there are nice things in it - but the problem is that their utility is disputable. Prototypal inheritance, for example, is severely limited in utility - because all it offers you are function overrides. The "everything is a function" approach, while also gimmicky (look ma, I can also call this!), is not particularly useful either - because a function is not an object, not a data structure that can carry data.

And then the real warts begin. Let's simply enumerate:

JS has callable attributes

This is a shitty design decision which most languages made right at the start. In retrospect, it's difficult to blame the designers, because they might have had performance issues to contend with - and, to boot, if you are not used to a message-passing language, the whole idea of "some attributes are callable and some are not" seems absolutely legitimate.
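
To illustrate with a made-up user object (the names are purely hypothetical):

    var user = {
      name: function() { return 'Julik'; }
    };

    // Forget the parentheses and nothing warns you - you silently get
    // the function itself, coerced to its source text in a string context:
    var oops = 'Hello ' + user.name;   // "Hello function () { return 'Julik'; }"
    var fine = 'Hello ' + user.name(); // "Hello Julik"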

Hobjects are unusable for stable keys

The mixup between objects and hashes is also a very bad idea, because it defies the premise that objects can have metadata on them - metadata which allows you to establish a rudimentary type system, or at least some kind of introspection.
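
A minimal sketch of why the keys are not stable - keys you never set are nonetheless "there", courtesy of the prototype chain:

    var counts = {}; // intended as a plain dictionary
    function tally(word) {
      counts[word] = (counts[word] || 0) + 1;
    }
    tally('constructor');
    // counts['constructor'] was never undefined to begin with - it is
    // inherited from Object.prototype, so the "count" is now the Object
    // constructor function coerced to a string with a 1 glued onto it.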

Fobjects are unusable for type systems since an object does not carry any type information.

This one is a biggie. Even in the Ruby world, where everything is happily quacking like a duck, we often use Object#class to get information about an object. A fairly standard procedure of styling an HTML element after a certain model object, for example:

    <div class='<%= model.class %>' id='<%= [model.class, model.id].join %>' >…

is impossible in JS, because the only types on offer are "object", "function" and the primitives. It's awful in all the ways Java is awful, and then some.
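
This is easy to verify in any console - typeof collapses practically everything into "object":

    typeof {}            // "object"
    typeof []            // "object"
    typeof new Date()    // "object"
    typeof null          // "object", famously
    typeof function(){}  // "function"
    // nothing remotely resembling model.class to put into that template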

Null everywhere

Trying to use a constant with the wrong name by mistake?

 MyApp.SYNC // should have been MyApp.SYNC_FETCH

Nothing will happen. Since objects are hashes, and the language provides zero facilities for constants, our constant with the wrong key will be undefined and will happily bleed into the callee, only blowing up much later - far away from the actual mistake, with a huge stack trace to dig through.
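
A sketch of how that bleeding plays out (MyApp and startFetch are hypothetical names):

    var MyApp = { SYNC_FETCH: 'sync_fetch' };

    function startFetch(mode) {
      // imagine a dozen call layers here; the undefined rides through all of them
      return mode.toUpperCase(); // TypeError: Cannot read property 'toUpperCase' of undefined
    }

    startFetch(MyApp.SYNC); // the typo itself raises nothing at all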

Callback hell

JS lacks a decent facility for deferreds. It's built for evented execution, which is fundamentally not multithreaded. Your calls are interspersed with event callbacks - when your code is idling, callbacks are executed. However, JS lacks a simple facility for doing this:

 var res = await AjaxReq.fetch('/long-request');
 // because you are waiting for a result, here the runtime could
 // schedule event handling, DOM redraws and whatever else it can
 // squeeze in while you await
 res.name; // this will only be executed once res is available

Of course the JS community is doing exactly what the PHP community has been doing all along - trying to fix a bad language with even worse tooling. How? By using more callbacks and, on a good day, callback chains:

 AjaxReq.fetch('/long-request', function(res) {
   AjaxReq.fetch('/details', function(details) {
     // 48 lines of code down
   }); // 23 lines down
 });

In a normal situation this would be fixed simply by adding a wait primitive to the language, which would schedule the events while the result is still being fetched.

The proliferation of callbacks leads to a programming style where everything is async, when as a matter of fact 80 percent of the code you write has to be synchronous. Simply because 80 percent of programming is about doing one motherfucking thing with another motherfucking thing, and you need both of them to complete the action.

Terrible exception handling

Exception handling in JS is terrible. It exists, but in a rudimentary form - you can see the call stack (which is going to consist of a dozen anonymous functions and, on a good day, one function that is named), and you can see an error message. Since I am not bashing on the DOM, I will only mention the two most often encountered errors:

    undefined is not a function
    cannot call property 'xyz' of undefined

Both of these stem from the fact that fu(ck)bjects in JS have no defined methods - they only have properties. The JS runtime has no way to know whether your fubject is supposed to have a method that can be called, or a property of a certain name - it will just treat it as a missing hash key. It just doesn't know any better!

I remember people in the Ruby community complaining about Ruby's backtraces and error messages not being good enough - and Rubinius went and addressed this. You know where error messages are particularly fucked up? In fucking Javascript. Because the two absolutely basic, crucial exceptions that you want to see every single time - NameError (when you are addressing a class-ish or constant-ish something which is not defined) and NoMethodError - are simply impossible with the sloppy way the language is built.

And yes, functions are nice, and prototypes are nice and all that - but if you want to build a JS app of any reasonable complexity, you will be bound to write code like this:

var cv = Marionette.CollectionView.extend({
  itemView: MyApp.Views.WidgetView
});

What is the error you will get if MyApp.Views.WidgetView is not defined yet? undefined is not a function, of course! Where will you get it? When the CollectionView tries to instantiate your item view. Not when you define the variable cv, no no! It will explode later, and rest assured - you will be tearing your hair out for a couple of minutes until you see where the error came from.

And why? Simply because everything is a hash, and the language is incapable of any kind of introspection.

It absolutely perplexes me that people who have used Ruby are moving to Node and calling it a good tool. Node might be great, but the language running in it is shit - and until that gets at least marginally better, I will do without Node just fine, thank you.

I can understand that some people wanted to escape the MRI infrastructure by going Node, because - you know - learning Japanese is hard. If you don't speak Japanese, your chances of making a noticeable improvement to MRI approach zero - and you will not be treated nicely.

JS is shit, and if we care at least a tiny bit, we should do everything in our power either to sunset it, or to move it into the "assembler for the web" realm, where it would be a vehicle for decent languages that can actually get the job done without driving you up the wall. Being nice will not help it, and CoffeeScript is not radical enough. Support your local transpiler initiative today.

Update: nice to know that some people are considering alternatives.

Suspects: Веб-стройка

Aspirin not included.