Setting Cache-Control headers in CarrierWave Uploaders

TL;DR This Gist provides a monkey patch that helps set up Cache-Control headers for CarrierWave.

I use CarrierWave in my Rails app to deal with image uploads to S3. While setting the Cache-Control header with CarrierWave is trivial, doing so without some kind of versioning on the filename leads directly into a cache-busting nightmare.

My solution uses a timestamp for the version number. It is important to keep track of the current version, because CarrierWave can manipulate images after the initial upload. This leads to images getting “orphaned” if the version numbers get out of sync - and a bunch of 404’s for your end users.

I cache the current version on the model, and bind to CarrierWave’s “before_cache” hook to manage the version numbers. This is a pretty simple fix, but it took me quite a while to work through all the details and get a working implementation.
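To give a rough idea of the shape of the approach, here is a sketch (not the gist itself — the image_version column, method names, and max-age value are assumptions for illustration):

```ruby
# Sketch only; see the gist for the real implementation.
class ImageUploader < CarrierWave::Uploader::Base
  # before :cache is CarrierWave's hook; bump the version before any
  # (re)processing so the stored filename stays in sync with the model
  before :cache, :update_version!

  # prefix the filename with the version cached on the model
  def filename
    "#{model.image_version}_#{original_filename}" if original_filename
  end

  private

  def update_version!(_new_file)
    model.image_version = Time.now.to_i
  end
end

# And the Cache-Control header itself, set via fog:
CarrierWave.configure do |config|
  config.fog_attributes = { 'Cache-Control' => "max-age=#{365 * 24 * 60 * 60}" }
end
```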

The public gist is here. Enjoy!

Queueing Commands in LocalStorage

I’ve been using queues a lot lately in my front end code. They’ve been great for breaking apart complex actions, and providing some stability to volatile operations.

A real-life example from G2Crowd is in our metrics gathering code. When a user interacts with our interface, we send ajax calls to our analytics system. These metrics are used to track adoption of our features, and help us respond to problem areas in the application.

This can be tricky, because the events that we are tracking are pretty volatile. For instance, when a user clicks on a link, the browser will tear down the current page and cancel any active ajax requests. As a result we end up dropping quite a few data points.

The Queue

Enter the queue. Instead of directly running our function when the user clicks on the link, we can instead push some data into a list that will be processed later. Not only is this a great tool for pushing complex interactions into the background, it is also good for dealing with unexpected failures and timeouts.

With a little bit of planning, our queue can even be serialized into LocalStorage or Cookies. This means that our code will work across page refreshes!

So let’s start with the queue. A queue is simply a first-in-first-out list, so we’ll just wrap up an array with an ‘enqueue’ and ‘dequeue’ method, and a few hooks for convenience:
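A minimal sketch of such a queue (the createQueue name and the hook names, onEnqueue and onDequeue, are assumptions rather than the original listing):

```javascript
// Minimal sketch of the queue: an array wrapped with enqueue/dequeue
// and a couple of hooks. Names here are assumptions.
function createQueue() {
  var items = [];

  return {
    // add an item to the back of the list
    enqueue: function (item) {
      items.push(item);
      if (this.onEnqueue) this.onEnqueue(item);
    },

    // remove and return the item at the front of the list
    dequeue: function () {
      var item = items.shift();
      if (this.onDequeue) this.onDequeue(item);
      return item;
    },

    // look at the front item without removing it
    peek: function () {
      return items[0];
    },

    length: function () {
      return items.length;
    },

    // hooks; consumers assign functions to these
    onEnqueue: null,
    onDequeue: null
  };
}
```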

This basic queue is pretty easy to use:
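For example, assuming a queue like the one described above (recapped compactly here so the snippet runs on its own):

```javascript
// Compact recap of the queue, so this usage example is self-contained:
function createQueue() {
  var items = [];
  return {
    enqueue: function (item) { items.push(item); if (this.onEnqueue) this.onEnqueue(item); },
    dequeue: function () { return items.shift(); },
    length: function () { return items.length; },
    onEnqueue: null
  };
}

var queue = createQueue();
queue.onEnqueue = function (item) { console.log('added', item); };

queue.enqueue('first');
queue.enqueue('second');

var item = queue.dequeue(); // → 'first' (first in, first out)
```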

The Queue Manager

So now we have our list; we can go about creating some handlers to process it for us. Let’s create a queue manager that will deal with each item in the list in order.

This module is a bit longer than the previous one, and may be a little daunting. After the full listing, I will break it down function by function.
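Here is a sketch of such a manager, consistent with the function-by-function walkthrough that follows; the queueManager name and the hook names (onFlushStart, onFlushEnd, onFailure) are assumptions, and a minimal queue stub is included so the example runs on its own:

```javascript
// Sketch of the queue manager; names are assumptions.
function queueManager(queue, callback) {
  var flushing = false;
  var failures = 0;

  var api = {
    flush: flush,
    onFlushStart: null,
    onFlushEnd: null,
    onFailure: null
  };

  // start clearing out the queue; safe to call at any time
  function flush() {
    if (flushing) return;
    flushing = true;
    if (api.onFlushStart) api.onFlushStart();
    process();
  }

  // handle the item at the front of the queue, or stop if empty
  function process() {
    if (queue.length() === 0) {
      flushing = false;
      if (api.onFlushEnd) api.onFlushEnd();
      return;
    }
    failures = 0;
    callback(queue.peek(), next, fail);
  }

  // the current item is done: remove it and move on
  function next() {
    queue.dequeue();
    process();
  }

  // the current item failed: count the failure and try it again
  function fail() {
    failures += 1;
    if (api.onFailure) api.onFailure(queue.peek(), failures);
    callback(queue.peek(), next, fail);
  }

  // begin flushing whenever something is enqueued, without
  // clobbering any onEnqueue handler that was already assigned
  var previousHook = queue.onEnqueue;
  queue.onEnqueue = function (item) {
    if (previousHook) previousHook(item);
    flush();
  };

  return api;
}

// Minimal queue stub so the demonstration below is self-contained:
function createQueue() {
  var items = [];
  return {
    enqueue: function (item) { items.push(item); if (this.onEnqueue) this.onEnqueue(item); },
    dequeue: function () { return items.shift(); },
    peek: function () { return items[0]; },
    length: function () { return items.length; },
    onEnqueue: null
  };
}

var processed = [];
var q = createQueue();
queueManager(q, function (item, next, fail) {
  processed.push(item); // handle the item...
  next();               // ...then advance the queue
});

q.enqueue('a'); // kicks off a flush via the onEnqueue hook
q.enqueue('b');
```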

First, the module takes two parameters; a queue that it will be managing, and a callback function for processing each individual entity in the list:

The ‘flush’ function is what we call to start clearing out the queue. We want to make sure that we can always call this function safely, so we use a flag to avoid flushing multiple times:

The ‘process’ function is what does most of the work for us. If there are any items in the list, then we call our handling callback with everything needed to process the current item. Otherwise we stop flushing and clean up.

The ‘next’ function is called to advance the queue forward, and start working on the next item. We pass this function to our callback, to give us a way to complete items asynchronously.

The ‘fail’ function will simply try to process the current item again. Notice that we keep count of failures, and pass the number to our hooks. This gives us an easy way to manage bad queue values if we need to later. This function is also passed to our handler callback.

In order to kick off our queue processing, we use the ‘onEnqueue’ hook to begin flushing as soon as any item is added to our queue. Notice that we are careful to avoid overwriting any previous onEnqueue handlers that have been assigned:

And finally, we return our public interface. We provide direct access to the flush call, as well as a few hooks:

Now that we’ve walked through the code piecemeal, it should be relatively easy to see how to consume it. We simply pass in our queue, and call the ‘next’ or ‘fail’ functions within our callback to advance the queue forward:
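A hypothetical callback, exercised here with stub next/fail functions (the real ones would come from the manager) so the retry behavior is visible standalone:

```javascript
// A handler that fails once and then succeeds, to show how next()
// and fail() drive the queue forward. Stub implementations of
// next/fail stand in for the ones the manager would supply.
var attempts = 0;
var completed = false;

function handleItem(item, next, fail) {
  attempts += 1;
  if (attempts < 2) {
    fail(); // something went wrong; the manager retries this item
  } else {
    next(); // done; the manager moves on to the next item
  }
}

function next() { completed = true; }
function fail() { handleItem('event', next, fail); } // naive retry stub

handleItem('event', next, fail);
// completed → true after one retry
```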

Using the Queue to send AJAX Requests

Now let’s get back to our analytics logging. Let’s say that our analytics code just sends an ajax request to an endpoint:

With our current queue manager, we can create an analyticsEvents queue to hold all events that we want to record. Our manager will be able to send the ajax for us, and will also make sure that each ajax call finishes successfully. In order to do this, all we have to do is pass the ‘next’ and ‘fail’ functions directly to the ajax handlers:
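A sketch of that wiring. The original presumably used something like jQuery’s $.ajax; here the transport is injectable (an assumption, purely so the example can run anywhere):

```javascript
// sendAjax stands in for something like $.ajax; it receives an options
// object with url, data, success, and error, just like jQuery's.
function analyticsHandler(sendAjax) {
  return function (event, next, fail) {
    sendAjax({
      url: '/analytics',
      data: event,
      success: next, // delivered: advance to the next event
      error: fail    // dropped or failed: retry this event
    });
  };
}

// e.g. queueManager(analyticsEvents, analyticsHandler($.ajax));

// Demonstration with a fake transport:
var delivered = [];
var handler = analyticsHandler(function (opts) {
  delivered.push(opts.data);
  opts.success();
});

var advanced = false;
handler({ name: 'click' }, function () { advanced = true; }, function () {});
```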

Now our logEvents function simply pushes items into the queue. If the ajax call fails for whatever reason, we’ll just try again, and once we’ve completed logging an event we’ll move on to the next one.

Storing the Queue in LocalStorage

So what happens if the page is refreshed? As the code currently stands, we would lose our queue, as well as any data points that we had hoped to capture.

Thanks to LocalStorage, this is very easy to deal with. We can use the hooks on our queue to save a copy of the list every time it is modified. The queue will be loaded from LocalStorage when the page is ready:
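A sketch of that persistent queue. localStorage’s getItem/setItem are the real API; the wrapper shape and the ‘analyticsEvents’ key are assumptions, and the storage object is a parameter so the sketch also runs outside a browser:

```javascript
function createPersistentQueue(storage, key) {
  // load any previously saved items when the queue is created
  var items = JSON.parse(storage.getItem(key) || '[]');

  // save a copy of the list every time it is modified
  function save() {
    storage.setItem(key, JSON.stringify(items));
  }

  return {
    enqueue: function (item) {
      items.push(item);
      save();
      if (this.onEnqueue) this.onEnqueue(item);
    },
    dequeue: function () {
      var item = items.shift();
      save();
      return item;
    },
    peek: function () { return items[0]; },
    length: function () { return items.length; },
    onEnqueue: null
  };
}

// In-memory stand-in for window.localStorage, for demonstration:
var storage = {
  data: {},
  getItem: function (key) { return this.data[key] || null; },
  setItem: function (key, value) { this.data[key] = value; }
};

createPersistentQueue(storage, 'analyticsEvents').enqueue({ name: 'click' });

// A "reloaded" queue picks the saved items back up:
var reloaded = createPersistentQueue(storage, 'analyticsEvents');
```

In the browser you would pass window.localStorage instead of the stand-in, and hand the resulting queue to the manager exactly as before.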

So now we can pass this persistent version of our queue into the existing queueManager object:

If the ajax call fails or gets canceled, then it will not be removed from the list. If the user refreshes the page, our code will retrieve the stored queue from localStorage and try again.

One caveat about using LocalStorage directly like this: it will fail in some browsers. We use a very simple polyfill on our site to improve compatibility.

Where to from Here?

There are a few additions that we could add to our queue manager to clean it up even further. In my production version of this code, I set up a timeout to call ‘fail()’ if an item is taking too long to process. We also set up a retry limit, so that after X number of failures we continue on to the next item. Finally, we catch exceptions from processing the queue directly in our manager, so that a single bad item won’t sabotage our entire list of commands.

The beauty of the queue is that these relatively complex interactions are split off into the queue manager. This makes them easy to test and reason about directly, without becoming tangled up with our business logic.

Faster I18n::Redis on Rails

When I decided to switch to redis as a backend for I18n, it seemed like an obvious choice. Everyone else was doing it! There is even a great railscast walking through the steps.

So I jumped at the chance, hoping for a quick way to decouple editing the copy on our site from the deployment process. According to what everyone was saying, because Redis is an in-memory data store I could use it without worrying about caching.

Imagine my chagrin when load times spiked. I really wanted this feature, but just by turning it on, loading times on many of our critical pages jumped by 100-200 milliseconds.

And it was obvious why, after a few moments thought. There are sometimes hundreds of translations to be run for a given request. Each of those translations represented a round trip to redis… even for the same translation!

Redis might be in memory, but it’s remote memory. That’s still a round trip.

Thanks to a short and helpful StackOverflow conversation with Chris Heald I realized that it would take a very small step to memoize the I18n backend. All of the translations would quickly be cached into local memory, thus eliminating the latency problem.

All that was missing was some sort of cache bust, so that we could change the translations without restarting the servers.

The result of my efforts is rolled up into the cached_key_value_store gem. This is a very light wrapper around the existing I18n KeyValue backend (the one from the railscast). This means it should work with any key-value database client with a hash-like interface.

The only difference is that this backend includes memoization of translations, and a simple versioning/cache-bust mechanism for whenever you make edits.
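The idea can be sketched in a few lines. This is an illustration of the approach, not the gem’s actual code, and the 'i18n:version' key name is an assumption:

```ruby
# Memoize lookups in local memory, and throw the memo away whenever a
# version key in the backing store changes.
class MemoizedStore
  def initialize(store)
    @store = store # e.g. a Redis client with a hash-like interface
    @cache = {}
    @version = current_version
  end

  def [](key)
    bust_if_stale!
    # after the first lookup, this key never hits the store again
    @cache.fetch(key) { @cache[key] = @store[key] }
  end

  private

  def bust_if_stale!
    v = current_version
    return if v == @version
    @cache.clear # an edit bumped the version: drop the memo
    @version = v
  end

  def current_version
    @store['i18n:version']
  end
end
```

In practice you would check the version once per request rather than on every lookup — otherwise the version reads would reintroduce the very round trips you were trying to avoid.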

Writing code for both CommonJS and the Browser

I’ve been playing around with a few different approaches to this lately, this is my current preferred method:
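A sketch of one common way to do this; the author’s exact wrapper isn’t reproduced here, so the expose name is an assumption:

```javascript
// Run a factory that returns the module's public interface, then
// expose that interface appropriately for the environment.
function expose(name, factory) {
  var api = factory();

  if (typeof module !== 'undefined' && module.exports) {
    module.exports = api;   // CommonJS (e.g. Node)
  } else if (typeof window !== 'undefined') {
    window[name] = api;     // plain browser global
  }

  return api;
}
```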

This method assumes that whatever code I write returns its public interface, which will then be exposed properly depending on the environment.

As an example, I might expose a library like so:
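A hypothetical example library: the body returns its public interface, and the environment check decides how to expose it:

```javascript
// A tiny counter library; count is private state closed over by
// the returned public interface.
var counter = (function () {
  var count = 0;

  return {
    increment: function () { count += 1; return count; },
    value: function () { return count; }
  };
})();

// expose it appropriately for the environment:
if (typeof module !== 'undefined' && module.exports) {
  module.exports = counter;
} else if (typeof window !== 'undefined') {
  window.counter = counter;
}
```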

Die, Caps Lock

Even without Internet Kooks and YouTube comments, Caps Lock is awful. If you are a Vim user like me, it is the bane of your existence.

Fortunately, we don’t have to live with it. Mac OS X allows us to remap the modifier keys (System Preferences > Keyboard > Modifier Keys):


Now that I know I can get rid of Caps Lock, there are two options for what to replace it with.


Mac OS X can directly map the Control Key over Caps Lock. The appeal of this one is the toggles: tmux uses Control-A, and Vim uses Control-W. I spend a lot of time stretching my hand around the keyboard when navigating normally, and moving the Control button up to the home row is a lot less stressful on my hands.


Using a nice tool called PCKeyboardHack allows me to map Caps Lock to the Escape key. For a Vim user, Escape is a very commonly used key; I pound this guy relentlessly day in and day out, and the remap actually saves me a lot of motion because the keyboard I use (the Kinesis Freestyle, also incredible) has an oddly positioned Escape key.

Which is Better?

I imagine if you are a Windows or Linux user, Control gets used a lot more than on the Mac. I’m unfortunately still on the fence about which I like better, though I’m trying out Control right now.

But at least now you’ll understand why I keep pounding the Caps Lock key the next time I touch your keyboard.

RSpec, Spork, and Redis

I’m adding in Redis for a new feature and was running into an obnoxious error:

Tried to use a connection from a child process without reconnecting. You need to reconnect to Redis after forking.

A little bit of digging found that this was a very deliberate change by the redis team:

In my case I only came across the error when running RSpec with Spork. It makes sense to me - Spork would leave a connection open, and then attempt to reuse it inappropriately on the next run.

So what to do about it? I came across a number of suggested solutions, but the easy fix for me was to call reconnect on the Redis client directly in my RSpec configuration.

Putting this small change in my spec_helper.rb file has resolved the issue for me:

RSpec.configure do |config|
  # ...
  config.before :all do
    # reconnect after Spork forks; use however your app exposes its
    # Redis connection ($redis here is just an example)
    $redis.client.reconnect
  end
  # ...
end

Fixed: Chrome back button renders JSON

On AJAX-heavy pages, you might see an issue in Chrome where a JSON response is rendered. This occurs when:

  1. One endpoint returns both JSON and HTML (per a RESTful controller in Rails, for instance)
  2. A user triggers the AJAX response
  3. The user navigates away from the page
  4. The user presses “Back”

This happens on Chrome and not other browsers, because Chrome is caching the response.

In my case, I was able to resolve this issue by passing “format: json” as a parameter to the server:

  $.ajax({
    url: url
  , data: { format: 'json' }
  , success: successHandler
  });

This makes the JSON url minimally different from the HTML one, while still hitting the same endpoint. And it plays nice with Rails controllers, which recognize the “format” parameter already.

Another solution that seems to be pretty popular is to disable caching:

  cache: false

This is doing something very similar; it passes something like “_: 12345” to the AJAX call.

I don’t like the cache: false option for a few reasons. Mainly because it disables caching. Why would I hurt the performance of AJAX requests for all of my users, when passing an extra parameter solves the issue just fine?

Book Review: Photoshop CS6 Unlocked

The Book

Photoshop CS6 Unlocked by Corrie Haffly

My Experience Level

I’m still new to Photoshop. I build web applications professionally, but generally I’m not the guy creating the icons or choosing images. I’m often frustrated by even simple things like touching up a logo or resizing an icon for retina display.

So I am eager to get better, and I spend enough time in the tool that I know that the knowledge from a good lesson is likely to stick.


Right off the bat, I got value out of the book. Just the act of going through the toolbar item by item, including the keyboard shortcuts, was surprisingly helpful.

Simple things like how selections can have opacity, or that command-clicking selects different layers, were things I would never have discovered on my own.

I really enjoyed the chapters on image adjustment and manipulation. The discussions around the Levels and Curves tools were especially useful, and I was able to use what I learned to improve photos right away.

The book really shone once I stopped trying to do all of the examples. I got a lot of enjoyment out of just reading the sections, and only doing the exercises that really piqued my interest.

So the good news is, I’m much better at Photoshop now than I was before I picked up this book. In fact, recently when I was designing a UI element, I realized that I could do a mock up in Photoshop faster than I could in HTML. And that makes me very happy.


Two flaws in this book deserve mentioning.

First, some of the Solutions had poor instructions.

For instance, the instructions for creating a “Glass” button gave specific values for the bevel size and altitude, without giving specific dimensions for the button. So by just blindly following the instructions, there was really no way I was going to achieve the desired effect.

Second, the order in which some features were introduced didn’t make sense.

For instance, the repeating backgrounds section made no mention of the “Define Pattern…” dialog that was introduced later in the book. This was exactly the feature I cared about when making my backgrounds seamless, but for whatever reason the author waited several chapters before even mentioning it.

These are just two examples, but I found these two flaws popped up again and again, and sometimes made it difficult to follow along.


I like this book. I’m better at Photoshop now than I was before I read it.

Shovels and Ropes: Playing with HTML5 Audio

The audio tag is pretty awesome. I’ve been kicking around concepts for simple HTML5 visualizations for a while now; this is an early attempt at what I hope will evolve into a more interesting tool set for visualizing songs.

The demo (“Lay Low”) is on GitHub.

This is also a pretty good example of what I like about JavaScript. I’ve been using this project to play around with different uses of lazy iteration.

For instance, in order to fade the colors on the background I’m using a function that is (inaccurately) named “tween”:
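A hypothetical sketch of that helper — linear interpolation of a value across a window of the track’s playback time; the signature is an assumption:

```javascript
// Interpolate from -> to as currentTime moves through
// [startTime, endTime], clamping outside the window.
function tween(from, to, startTime, endTime, currentTime) {
  if (currentTime <= startTime) return from;
  if (currentTime >= endTime) return to;
  var progress = (currentTime - startTime) / (endTime - startTime);
  return from + (to - from) * progress;
}

tween(0, 100, 10, 20, 15); // → 50: halfway through the window
```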

This makes it simple enough to gradually apply styles based on the currentTime of the audio track:
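For instance, fading the background in between the 30 and 40 second marks of the track might look like this (a hypothetical example, with tween recapped inline so the snippet runs on its own):

```javascript
// Compact recap of tween, so this example is self-contained:
function tween(from, to, start, end, t) {
  if (t <= start) return from;
  if (t >= end) return to;
  return from + (to - from) * ((t - start) / (end - start));
}

// fade the background's alpha from 0 to 1 between 30s and 40s
function backgroundAt(currentTime) {
  var alpha = tween(0, 1, 30, 40, currentTime);
  return 'rgba(0, 0, 0, ' + alpha + ')';
}

// In the browser this would hang off the audio element's timeupdate:
// audio.addEventListener('timeupdate', function () {
//   el.style.background = backgroundAt(audio.currentTime);
// });
```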

Right now I’m using the “timeupdate” event to drive the effects in this song. It’s pretty coarse-grained - only about 4 updates a second in Chrome - so I’m just smoothing over the rough edges with CSS transitions.

Basic Usage of joins() in Active Record

There is a pattern I’ve run into a few times on my current project, and it’s tripped me up a little every time. It’s simply the problem of joining with multiple objects across a HABTM relationship.

As an example, consider a simple habtm relationship:
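A sketch of the two models, assuming the standard join table:

```ruby
class Post < ActiveRecord::Base
  has_and_belongs_to_many :categories
end

class Category < ActiveRecord::Base
  has_and_belongs_to_many :posts
end
```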

With this in place, a lot of the questions I might want to ask are already covered. From a Post, I can ask what Categories it belongs to, and vice versa:
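For example (assuming post and category are loaded records):

```ruby
post.categories  # the categories this post belongs to
category.posts   # the posts in this category
```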

The hiccup I’ve been hitting is asking the question:

Give me all the posts in several different categories.

The solution I’ve been favoring is to create an “in_categories” scope to perform the join. It looks like this:
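One way to write it — a sketch, since the exact query is an assumption (on older Rails you would use uniq instead of distinct):

```ruby
class Post < ActiveRecord::Base
  has_and_belongs_to_many :categories

  # join through the habtm table and filter on the category ids;
  # distinct avoids duplicate posts when a post sits in several of
  # the given categories
  scope :in_categories, lambda { |category_ids|
    joins(:categories).where(categories: { id: category_ids }).distinct
  }
end
```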

So now I can ask Post.in_categories([1,2,3]) and get an answer.