julik live

Your minimum viable Rails service pattern

This post is very prescriptive - not because I want to be didactic, but because it offers a number of very small techniques that work for me, and work very well. Take it with a grain of salt, and of course modify at will - but please don't get distracted by the imperative tone. So...

A service fits best in a standalone Module

which is as close as you can get to a namespaced, yet freestanding, function. Services are doers by definition (or Commands, if you will). So don't make them objects. Don't do this:

payment_processing = PaymentProcessing.new(user, purchase)

Note how even the name feels wrong: a noun called "Processing" - an abstract processing of what, exactly? With this pattern you are entering the kingdom of the nouns, and moreover, you are opening the door to performing destructive actions in PaymentProcessing#initialize. For instance, I once had to deal with a Service object that was actually a Command, yet it managed to perform both an SQL UPDATE (destructively touching the database!) and an HTTP call to an external service, all of the above in the constructor. In addition to being unpredictable, a service like this is hard to test. Instead of forcing yourself to write a service whose constructor exists only to be filled with values, use a module that is a container of functions:

PaymentProcessing.process(user, purchase)

Creating these is very cheap, even if you are using more methods within the module. Module methods are very easy to attach using the extend self idiom:

module PaymentProcessing # "DoingThings" is an acceptable naming convention for a module, better than for an object
  extend self # now each method you define can be called on the module itself

  def process(user, purchase)
    # ...
  end

  def debit(user)
    # ...
  end

  def prepare_invoice(purchase)
    # ...
  end
end

If you really, really want inheritance in this scenario, you can of course always use a class instead. Do not use class variables - limit yourself to method-local variables only; most threading issues and pretty much all Rails reloading issues go out the window once you do.
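To make the "method-local state only" advice concrete, here is a hypothetical sketch of the class-based variant (the class name and the purchase shape are invented for illustration):

```ruby
# Hypothetical sketch: a class-based command that keeps all state in
# method-local variables - no @@class_variables, no memoized ivars on
# the class itself, so threads and Rails code reloading stay safe.
class ProcessPayment
  def self.process(user, purchase)
    amount = purchase.fetch(:amount) # method-local: created and discarded per call
    "debited #{amount} from #{user}"
  end
end

ProcessPayment.process("joe", amount: 100) # => "debited 100 from joe"
```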

When dealing with heavy models, flow control with Exceptions is totally fine

For controlling what happens to your command during execution, follow a very simple branching strategy:

Anything that does not raise an exception is a happy path, anything that does is a deviation.

Situations where there are multiple happy paths are exceptionally rare in my experience, and maybe you need to branch inside the service if you encounter them. But in the basic sense, here is how your controller action will look:

def create
  user = User.find(params.require(:user_id))
  payment = Payment.create!(params.require(:payment))
  PaymentProcessing.process(user, payment)
  head :created
rescue ActiveRecord::RecordInvalid # when Payment.create! fails...
  # edge case
rescue PaymentProcessing::InsufficientFunds
  # another edge case
rescue PaymentProcessing::UserBlocked
  # and another
rescue PaymentProcessing::UserDataMustBeStoredInFreedomostan
  # and another...
rescue PaymentProcessing::PaymentGatewayError
  # and another...
end

This way you get the benefit of pattern matching on return values that functional languages brag about. Ruby does have bona-fide case matching, but unfortunately it does not give us a viable failure on an unhandled result: there is no syntax-level way to write a case statement that refuses to fall through without a default branch. Exceptions, however, do give you an explicit failure scenario when you have not taken care of a particular (or the catch-all) outcome.
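To make the contrast concrete, here is a small pure-Ruby illustration (the method names and outcomes are invented):

```ruby
# A case statement without an else quietly evaluates to nil for an
# unhandled value - the miss is silent:
def message_for(outcome)
  case outcome
  when :paid then "thanks!"
  when :declined then "try another card"
  end # no else: an unknown outcome just returns nil, and nobody notices
end

message_for(:refunded) # => nil

# An exception you forgot to rescue, by contrast, refuses to be ignored
# and propagates all the way up to the caller:
def settle(outcome)
  raise KeyError, "unhandled outcome #{outcome}" unless %i[paid declined].include?(outcome)
  outcome
end
```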

Do not use rescue_from in this pattern, because it separates your branching on the result of the command from the place where the command gets executed. And the Ruby "rescue without begin" idiom is shorter, more readable, and less Rails-specific.
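The idiom itself is tiny - a method body acts as its own begin block, so no extra nesting is needed (the method and exception here are invented for illustration):

```ruby
# "rescue without begin": the def body doubles as the begin block.
def charge
  raise KeyError, "no such customer" # stands in for the actual command call
rescue KeyError
  :edge_case # the deviation is handled right where the command ran
end

charge # => :edge_case
```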

.() is a neat trick but you don't need it

Making services callable is possible without it.

Do not use the fancy .() method definition, like Trailblazer does. The reason is this: methods like this are showing off - "look what we can do in Ruby". This is all fine and nice, but when you are in a bind trying to figure out why something does not work correctly in your application, the last thing you want to be doing is peeling off homegrown syntax sugar like this. If you really want callables, Ruby has a whopping two idioms to help you: having an object respond to call, and having it respond to []. You can even give your service a to_proc and make it usable as an iterator:

module ResetPassword
  extend self

  def call(user)
    # ...
  end

  def to_proc
    method(:call).to_proc
  end
end

# batch_reset_controller.rb
def reset
  users = User.find(1, 2, 3, 4)
  users.map(&ResetPassword) # Seriously, this works!
rescue SpecialCase
  # with the caveat that you don't know on which User that happened
end

Having long method signatures in a Service is OK

Do not use “multi-parameter constructors” just to be able to do this:

service = PaymentProcessor.new(user, payment, gateway, mutex, logger, …)

If you need 5 arguments to perform a certain operation, just make them the arguments of your callable command. Here is why: to execute the operation, you need the caller to conform to a contract. This contract implies the availability of all of the given collaborators (the User, the Mutex, the Logger and whatnot). When executing the operation, you need to check for the presence of all of these parameters. If you move the parameter checks into the constructor of the service, and the execution into the perform method (or whatever method handles the actual operation), you divorce the contract for the operation from its execution - and it is very likely you will make a mistake, because they no longer share a unit of scope (and a unit of execution). As a bonus, use keyword arguments while you are at it, to exclude mistakes with positional arguments at the outset:

PaymentProcessor.process(user: current_user, gateway: PaymentGateway.default, …)

If you omit a keyword argument, Ruby will give you a clear ArgumentError on the contract violation. Additionally, if you introduce extra parameters you can plug them with default values in the operation itself:

module PaymentProcessor
  extend self

  def process(user:, gateway:, logger: Rails.logger)
    # gives us a logger, both injectable and with a default for when we don't care that much
  end
end

Do not be afraid of long keyword argument lists. See them as an extended contract of your Command. If you find them very unwieldy - replace the parameter list with an object that will contain all the keyword arguments. You can use anything that responds to to_hash as a package for keyword arguments:

connection_config = EnterpriseDatabaseConnectionConfig.new
# will call `connection_config.to_hash` to extract a Hash keyed by Symbols. AND you can test
# `MyCompany.....#to_hash` in a separate unit test, as a bonus...
pg_conn = PG.connect(**connection_config)
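As a self-contained illustration of the to_hash contract (the class and keys below are invented, not the real PG config), any plain object can serve as a keyword-argument package:

```ruby
# Any object responding to to_hash can be double-splatted into keyword
# arguments: Ruby performs the implicit to_hash conversion on **.
class ConnectionConfig
  def to_hash
    {host: "db.internal", port: 5432, dbname: "accounts"}
  end
end

def connect(host:, port:, dbname:)
  "#{host}:#{port}/#{dbname}"
end

connect(**ConnectionConfig.new) # => "db.internal:5432/accounts"
```

And since to_hash is just a method, you can unit-test the config object in complete isolation from the connection code.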

Interesting read on exceptions versus ActiveRecord here.

When exceptions are actually expensive

Bear in mind that exceptions are expensive due to stack unwinding. So using exceptions for flow control is, like many things, appropriate in some situations and not in others - for example, in tight loops. Where this really matters, you can branch on the return value like you would in "pedestrian" languages with "full-manual" error handling, like C and Go. For instance, recent implementations of non-blocking reads and writes in IO use that idiom:

case socket.write_nonblock(bytes, exception: false)
when :wait_writable # the socket is clogged, do a pass and retry later
when Integer # the number of bytes written
end
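Extended into a complete drain loop, the return-value idiom might look like the following sketch (it assumes `exception: false`, which makes write_nonblock report "would block" as a Symbol instead of raising IO::WaitWritable):

```ruby
require "socket"
require "io/wait"

# Sketch of a full non-blocking write loop: flow control is plain
# branching on the return value, no exceptions involved.
def drain(socket, bytes)
  until bytes.empty?
    case result = socket.write_nonblock(bytes, exception: false)
    when :wait_writable
      socket.wait_writable # block until the send buffer has room again
    when Integer
      bytes = bytes.byteslice(result, bytes.bytesize - result) # drop what was written
    end
  end
end
```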

Bear in mind, however, that this is useful for tight loops or IO-intensive scenarios only, since it degrades your control flow to explicit branching on return values. This is indeed (in my opinion at least) not any better than

f, err := os.Open(path)

that you might have in some other languages claiming "explicit error handling" as an extreme form of virtue.

Oh, Sprockets

There's been this whole conversation on Sprockets lately. I must admit I tried hard to stay away from the discussion for the most part, but I thought that maybe providing a slightly different perspective might be interesting (or at least entertaining) for some.

The reason for writing this piece is, among others, that I feel let down. I must admit that right now I find myself in a state best described as JavaScript paralysis. This is what comes after JavaScript fatigue. I hope to recover soon, since my ability to deliver is severely impaired because of it. Maybe Giles's book will help. I hope it will, otherwise I might go into therapy. And I feel that the Sprockets situation is at least partially to blame for this, if not directly then by collateral - so just let me vent for a little, mmkay?

For those who didn't follow - Giles says that Sprockets is not worth saving, and advocates integrating with the wider JS ecosystem and trying alternative approaches instead. Schneems took on the gargantuan task of dragging Sprockets, kicking and screaming, into the brave new world of the blossoming JS we all love so much. Please read both the articles before reading this on.

DISCLAIMER: this is a 100% opinion piece.

More on this...

Bad API design: a whirlwind tour of strong_parameters

Lately I have been doing time upgrading an application from Rails 3 to Rails 4. Obviously, one of the key parts of that operation is the move from attribute protection (attr_accessible/attr_protected) to strong parameters. Doing so, I struggled a great deal. And after a number of hours spent reflecting on the whole ordeal, I came to the conclusion that it was not entirely my fault, nor even a problem with the state the application was in when the migration had to be done.

The problem is strong_parameters itself. See, when you design an API, you usually assume a number of use cases, and then streamline the workflow for those specific use cases to the best of your ability. When situations arise where your API does not perform satisfactorily, you basically have two choices:

  • Press on and state that these situations are not covered by your API
  • Re-engineer the API to accommodate the new requirements

When designing strong_parameters, the Rails gang apparently went for the first approach. Except that since stating "you are on your own" is not often encouraged as a message to developers-customers, it has pretty much been swept under the rug. As a result, strong_parameters was released (and codified as The solution to input validation) without (in my opinion) due thought.

Since I was finally able to wrestle through it, behind all the frustrations I could actually see why strong_parameters did not work. It did not work because it is a badly designed API. And I would like to take some time to walk through it, in hopes that it can reveal what could have been done better, differently, or maybe even not at all.

So, let's run through it.

It is both a Builder and a Gatekeeper

By far the biggest issue with the API is this: it is both something we Rubyists tend to call a builder, and something I would call a gatekeeper - the latter is more of my personal moniker. Let's explain these two roles:

  • A Builder allows you to construct an arbitrarily-nested expression tree that is going to be used to perform some operation.
  • A Gatekeeper performs checks on values and creates early (and clear) failures when the input does not satisfy a condition.

For example, Arel is a good citizen of the Builders camp. A basic Arel operation is strictly separated into two phases - building the SQL expression out of chainable calls and converting the expression into raw SQL, or into a prepared query. Observe:

where(age: 19).where(gender: 'M').where(paying: true).to_sql

You can immediately see that the buildout part of the expression (the where() calls) and the execution (to_sql) are separated. The to_sql call is the last one in the chain, and we know that it won't get materialized before we have specified all the various conditions that the SQL statement has to contain. We can also let other methods collaborate on the creation of the SQL statement by chaining onto our last call and grabbing the return value of the chain.

XML Builder is another old friend from the Builders camp, probably the oldest of the pack. Here we can see the same pattern:

b.payload do
  b.age 18
  b.name 'John Doe'
  b.bio 'Born and raised in Tennessee'
end
b.target! # Returns the output

Obtaining the result (the output of the Builder) is a definitely separate operation from the calls to the Builder proper. Even though calls to the Builder might have side effects if it writes output at call time, we know it is not going to terminate early, because the output object it uses is orthogonal to the Builder itself.

Strong parameters violate this convention brutally. The guides specify that you have to do this:

parameters.permit(:operation, :user => [:age, :name])

If you have strict strong parameters enabled - and this is the recommended approach, since otherwise parameters you did not specify simply get silently ditched - then even one extra key besides :operation and :user at the top level gets you an exception. Supply a parameter that is not expected within :user - same thing. The raise occurs at the very first call to permit or require. This means that the "Gatekeeper" function of strong_parameters happens within the Builder function, and you do not really know where the Builder part will be aborted and the Gatekeeper part will take over.

Since we are so dead-set on validating the input outright, at the earliest possible opportunity, this mandates an API where you have to cram all of your permitted parameters into one method call. This produces monstrosities like this:

    paid_subscription_attributes: [
      {company_information_attributes: [:name, :street_and_number, :city, :zipcode, :country_code, :vat_number]},

Those monstrosities are necessary because applications designed with nested attributes, and especially applications designed along the Rails Way of pre-rendered forms, will have complicated forms with nested values. And those forms are very hard to change, because they are usually surrounded by a whole mountain of hard-won CSS full of hacks, and have a very complex HTML structure to ensure they lay out properly.

In practice, if we were to take the call above and "transform" it so that it becomes digestible, we would like to first specify all the various restrictions on the parameters, and then check for whether the input satisfies our constraint - divorce the "Builder" from the "Gatekeeper". For instance, like this:

user_params = params.require(:user)
user_params.permit(:email, :password, :password_confirm, :full_name, :remember_me, :profile_image_setting)

paid_subscription_params = user_params.within(:paid_subscription_attributes)
paid_subscription_params.permit(:terms_of_service, :coupon_code, :company_information_attributes)

company_params = paid_subscription_params.within(:company_information_attributes)
company_params.permit(:name, :street_and_number, :city, :zipcode, :country_code, :vat_number)

user_params.to_permitted_hash # The actual checks will run here

This way, if we do have a complex parameter structure, we can chain the calls to permit various attributes and do not have to cram all of them into one call.
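A rough pure-Ruby sketch of how such a lazy Builder/Gatekeeper split could work - the class and method names here are invented for illustration, and this is emphatically not the real strong_parameters API:

```ruby
# Hypothetical sketch: permit() and within() only record constraints
# (the Builder role); all checks run in one place, inside
# to_permitted_hash (the Gatekeeper role).
class LazyParams
  def initialize(raw)
    @raw = raw
    @permitted = []
    @children = {}
  end

  def permit(*keys)
    @permitted.concat(keys) # just record the constraint, check nothing yet
    self
  end

  def within(key)
    @children[key] ||= LazyParams.new(@raw.fetch(key, {}))
  end

  def to_permitted_hash
    unexpected = @raw.keys - (@permitted + @children.keys)
    raise KeyError, "unpermitted keys: #{unexpected.inspect}" if unexpected.any?
    @raw.slice(*@permitted).merge(@children.transform_values(&:to_permitted_hash))
  end
end
```

With this shape, a stray key only raises at the final to_permitted_hash call, after all constraints have been declared - the buildout and the check no longer interleave.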

More on this...

Suspects: Веб-стройка

On a small team, not being dicks sometimes trumps efficiency

There is an inherent difficulty to maintaining velocity. We always want things more streamlined, more efficient, more lean - the sky is the limit, really, if the technical realm of a product is not micromanaged by the higher echelons of company management, but is upgraded by grassroots effort. A good team of engineers worth their salt will, as we all know, improve the product and clean it of most of its technical debt - all a good manager has to do, really, is not impede that process.

There is an inherent concern however, which becomes especially important when the teams are small. Sometimes, velocity and efficiency need to be sacrificed a little if the team wants to preserve equilibrium in human relationships.

More on this...

Suspects: Веб-стройка

The sad state of human arrogance today

Lately an article has been making the rounds on HackerNews, in which eevee laments his tribulations when installing Discourse.

I've read it cover to cover, and I must admit it did strike a few notes with me, especially since I have been writing Ruby web apps for a good decade now, with Rails and without. Most of the points in the article are valid to some extent, but I profoundly disagree with the overall tone and message.

First let's look at the narrative:

  • Ruby is not a primary language eevee is familiar with (he's primarily a Python/Rust guy from what I could see, because I had been reading his other articles just a few days prior).
  • He does not have a lot of experience deploying a modern Ruby stack in production for web apps
  • Coming from Python, he probably misses the fact that the Python deployment story is just as painful as Ruby's at this point. Moreover, some horrible relics of Ruby packaging (gemsets) still exist at large in the Python world (virtualenv).
  • He picked a project which is explicitly pushing the envelope in its usage of fancy tools, and thus indeed wants its mother and the kitchen sink for dependencies. This is not because fancy tools are wanted for their own sake, but because for a modern web app you do need search, you do need a job queue, you do need push messaging.
  • Exactly because the developers of Discourse (whom I admire greatly) realise that the dependency story in Discourse is effin hard, they suggest, loudly and clearly, deploying it via images or containers. Eevee chose neither of these approaches, and facing the consequences of that decision proved to be a world of pain (exactly as predicted).
  • He has a complex Linux configuration, which to me (from my uneducated perspective, at least) looks like a Linux build that has accrued all sorts of manual tweaks over the years (90% of them probably having to do with X11 and sound, of course) and been migrated over and over - as a result of which you indeed end up with a 32-bit runtime on top of a 64-bit kernel. For tools that assume a default configuration in most situations, this is a recipe for butthurt.
  • He also had to use a PostgreSQL extension, which does not ship as a default part of the package.

Instead of raising my hands in violent agreement with him, or rebutting his post point by point, I would like to look at the actual reasons why this stuff happens.

More on this...

Quitting VFX, one year on

So this might be a surprise to some people reading this - if anyone still reads this blog at all. But around Christmas 2013 I decided that I'd had enough. Enough of being pleasant. Enough of the wonderful Aeron chairs, and enough of the Flame. I was just sick to my stomach.

Throughout my life I have been one of those blessed few who never had to work at a place they hated. An essential benchmark of the quality of life for me is that I wake up in the morning and feel like going to work. When that feeling is gone, and the fact of going to work becomes dreadful instead -- that is the clearest indicator for me that it is time to move on.

This in fact turns a major page in my life. I dedicated 9 years of it to the Flame, just like I wanted to. I have seen the demise of the SGI Tezro (I will forever remember what hinv stands for and what the Vulcan Death Grip is). I have done some great compositing work, from set extensions to beauty to morphing. I have worked with some of the greatest, most wonderful art directors in Amsterdam - it did turn out that most of them have grey hair, a beard, a belly, an impeccable sense of humor and, first and foremost, artistic vision.

I worked at an amazing company which gave me so much I will be forever grateful. Of course there were ups and downs, but I was incredibly lucky when I stumbled into it by accident in my search for an internship.

I have enjoyed some very interesting projects, mastered Shake and Nuke, and became one of the best matchmovers in town. I have beta-tested new versions of Flame and had the pleasure of meeting the wonderful, predominantly Francophone team of its makers. I met a mentor who transformed me from a rookie into a good all-around Flame op. I got pretty much everything I wanted.

For a month I even lived at the office, when the situation with my landlady became unbearable. I worked my year as the night shift Flame operator - so that chevron is earned too.

The logical next step would be London. The problem with that is - I was not prepared to dump my residence status in the EU just for the privilege of working on the next fab rendition of Transformers IX. You see, most people in Western countries are equal, but people with less fortunate passports are somewhat less equal than others. So if I were to move on to the UK it would mean a new alien status, a new residence permit, a new work permit, and all the other pleasures such moves entail.

Also, I got tired of client work and all of its facets that tend to drive you to exhaustion. It was extremely challenging and very rewarding - especially for the overall introvert that I always was - but at a certain point I started losing my grip. The guy that once was Julik started to become some other person. Some person I didn't like seeing in the mirror when brushing my teeth. Some person that had to develop reactions to outside stimuli that I did not condone. In short, the beautiful honeypot I was so eager to devour started to consume me.

So after having dedicated 9 years to visual effects and Flame, it was time to turn the page. Since I had been developing for pretty much all those years (I started about a year before this blog first went online), becoming a Ruby developer seemed par for the course. I love programming, I love Ruby, and I've made some great stuff using it. Tracksperanto has become the de-facto go-to piece of kit for many, and even though I never got code contributions for it - most of the post crowd speaks Python only - I was able to maintain it with excellent longevity, and it saved many a matchmove project, both for myself and for many others.

Work/life balance

This has more to do with company culture, but in the past years I've learned that when you force yourself into a skewed work/life balance, you have to understand the benefits and the costs. Essentially, when doing all-nighters, participating in sprints and crunch times, or keeping your mobile phone on at night - think very well about what you are giving up, who benefits from it, and whether the compensation is adequate. Overtime is not free: you pay for it with your health, your family life, your friendships, love, care and affection. Being on call is not free.

You absolutely should do it when doing work you love - but consider what you are getting for it. When in doubt, re-read Loyalty and Layoffs.

The game here is detecting when the overages and pressures stop being of benefit to you, and only stay beneficial to your employer or client. It is an extremely fine line, and most good managers are good at masking it from view. It took me a full 15 years of working in different roles to get the feeling for when enough is enough, and even at that I had to switch careers to restore the balance.

Don't think that development is foreign to crunch time (read some jwz if in doubt), but it is highly likely that you will be confronted with it sooner in a VFX career. Remember: how much of it you will take, and what for, is your decision. Nobody - not your employer, not your manager, not your parents - is responsible for the decisions you make about it.

More on this...

Matchmover tip: obtaining the actual field of view for any lens using a survey solve

"The best camera is the one that's with you" - Henri Cartier-Bresson

For a long time we have used a great technique at HecticElectric for computing lens field of view (FOV) values. When you go out to film something, you usually record the millimeter values of the focal lengths of the lenses you use (or "whereabouts" if using zooms instead of primes). This approach, however, is prone to error, because 3D software thinks in terms of an abstract field of view angle, not in terms of the combination of a particular focal length plus a particular sensor/film size.

So we devised a scheme to reliably compute the field of view from shots of specifically chosen and prepped objects, that we call shoeboxes. This yields very accurate FOV values for any lens (including zooms at specific settings) and can be used with any camera/lens combination, including an iPhone.
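The ambiguity the paragraph above describes can be sketched with the standard pinhole relation between focal length, sensor width and horizontal FOV (the sensor widths below are just common example values, not our actual camera data):

```ruby
# The horizontal FOV depends on both focal length and sensor width,
# which is why a millimeter value alone is ambiguous between cameras.
def horizontal_fov_degrees(focal_length_mm, sensor_width_mm)
  2 * Math.atan(sensor_width_mm / (2.0 * focal_length_mm)) * 180 / Math::PI
end

# The same 35mm lens covers very different angles on different sensors:
horizontal_fov_degrees(35, 36.0)  # full-frame 35mm still: roughly 54.4 degrees
horizontal_fov_degrees(35, 23.6)  # a typical APS-C sensor: roughly 37.3 degrees
```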

More on this...

Building Nuke plugins on the Mac, Christmas 2014 edition

As 2014 is folding to a fanfare close, we here at julik live would like to take you, dear reader, to 2009. To recap what happened in 2009:

  • The Icelandic government and banking system collapse
  • Albania and Croatia are admitted to the North Atlantic Treaty Organization (NATO).
  • The Treaty of Lisbon comes into force.
  • Large Hadron Collider gets reactivated.
  • Apple releases Mac OS X 10.6 Snow Leopard

That last fact is of utmost importance to us. We are going to time-travel to 2009 to see what life is like in a world where everybody uses systems so old they cannot be obtained legally. To the world of high-end VFX software.

See, I made a Nuke plugin back in the day, because I really wanted to and I needed it. As fate would have it, I also really wanted to share it, because it could be useful for many other VFX artists using SynthEyes - which I still consider the greatest, fastest and most affordable 3D tracking package ever.

However, that plugin development thing puts me way out of my comfort zone. See, this is C++ we are talking about - and if only that. We are also smack in the middle of DLL hell because, as you might imagine, a plugin for someone else's software is a DLL that you have to build (or a .dylib, or a .so - a shared library, for short). Now, 5 years have passed and I have somehow managed to keep the build pipeline for the plugins intact - for quite a long while, for example, I specifically delayed upgrading from 10.5 just to avoid everything related to this dependency hell. Dependency hell in the world of building shared libraries for closed-source hosts is defined as:

  • you have to build against system library versions that have the same ABI and same versions
  • you have to build with exactly the same compiler
  • you have to build against system headers that match the libraries
  • you have to build with non-system libraries that match the system libraries. Fancy some boost perhaps? Or some fancy OpenEXR library? Or some image compression libraries? If you do, we need to talk. Trust me, after going through this ordeal a few times you will reconsider their use and appreciate the zen of doing without.
  • you (mostly) have to build with non-system libraries that match the versions used by your host application

That leads to an interesting development cycle. At the beginning of the process, if you are lucky and starting early, you will have a machine that carries somewhat-matching dependency versions for all of the above (or the dependency versions are obtainable). You will invariably struggle with obtaining and installing all that stuff, since by definition it is going to be obsolete already, but that is mostly manageable. I remember that I didn't have to install anything special back when I was building on 10.5, at the beginning of SyLens' lifetime.

More on this...

Suspects: Mac

Why I am still using Jeweler

Rolling Ruby gems is an artistic affair. Fashions change. Some time ago it was considered fine to edit your gemspec by hand. Somewhat later, a trend of using Hoe emerged. A little while later Bundler came into the picture, and it had a command to generate gems as well. Wind forward a few more years, and the fashion is now to roll with gem new.

The funny thing is that multiple people tend to jump on each of these and migrate their projects from one gem setup/maintenance system to another. It is just like haute couture or testing frameworks: it is cool as long as you are using the one in fashion at the moment. Get distracted for just a couple of months, and you are no longer in the fashion club, but a retrograde old bureaucrat stuck in his old ways of doing things.

However, not many people focus on what is way more important in the story of making Ruby gems and making them actually shine: stability. Let's go to an example. With my modest portfolio I clock over 20 gems in my decade of doing Ruby for fun and profit. Repeatability is way more important to me in this process than any fashions currently being flung around. In my view, as a maintainer of a whole salvo of gems, I need a few very simple conditions to be met by whatever tool I use for rolling gems:

  • There should be a command I can easily memorise to initialise a blank slate gem, and it should have the right setup in place.
  • All tests/specs for the gem should run with bundle exec rake, no exceptions.
  • I should be able to do a release, including tagging and pushing it to the gem server, with bundle exec rake release
  • I should not have to edit any files except the changelog/version to roll such a release. Simple git history / gitignore is sufficient.
  • The gem set up with that tool of choice has to be runnable by Travis, and for most of my gems I still support Ruby 1.8.7

Having this process helps me by reducing the friction of releasing gems. When I want to release a library, absolutely the last thing I want to worry about is how to streamline the workflow of doing those things. Simply because if I do, each new gem that I release or update is going to acquire a release process of its own - it's just like building plugins for expensive post-production applications all over again. Every new version you roll, for every new version of its dependencies, becomes a unique snowflake: one has a Gemfile, another is managed by a manual gemspec, yet another assembles itself from git... and it goes on and on, until you have to actually check what kind of release pipeline du jour was in effect when you last twiddled with a certain gem.

The longevity of many of my projects - tracksperanto is no exception, with a running history of regular updates over its 5-year existence - also owes to a stable release pipeline.

I've only gone through changes in the gem release pipeline twice. The first switch was from manual gemspec editing and a hodgepodge of Rake tasks to Hoe. The second was from Hoe to Jeweler, because I was unable to make Hoe function on Travis with the vast array of Ruby versions I wanted, and because I got fed up with the manual MANIFEST file, where I always forgot to add a file or two.

So far Jeweler, with all of its possible problems and limitations and an extra dependency, has given me the most precious thing in the gem release/maintenance process: I don't have to think about that process. For any single gem that I maintain, I know that bundle exec rake followed by bundle exec rake release is going to do the update I am after, and I can concentrate on the code my library is offering instead of the fashions of the build pipeline.

That is way more important, and way more precious, to me than knowing that I am following the latest trend in the volatile universe of open source. I am ready to pay the price of being called old-fashioned and having an extra 10 lines in my Rakefile, along with a dependency practically none of my users will ever download. By corollary, it means that your pull request to my projects proposing to remove Jeweler and do some more bikeshedding is likely to be rejected. Not because I am a jerk, but because Jeweler supports a repeatable process I have developed muscle memory for, and changing that muscle memory is the last item on my priority list.

And I suggest that you, dear reader, do the same: pick a rubygem release/bootstrapping process that works for you, verify it, trust it and stick to it, instead of joining the bikeshedding fest - whatever that process might be. What your gem actually does is way more important than what you are using to roll it.

Suspects: Веб-стройка

On the benefit of laptop stands

When you look at pictures from trendy startup offices you often see laptop riser stands.

One might think that you do it to make your desk look neater. Or that it is for better posture. Or just to have yet another neat, Apple-y looking apparatus in your vicinity for a more hipster look.

However, there's more to it than meets the eye.

More on this...

Suspects: Mac

Aspirine not included.