Is it worth caching it?

Hi, I wonder what you think. Is it worth caching this?

There are no calculations and no extra calls; the object is already loaded. Do you think it is worth caching this string, or should it be generated every time?

<% cache(cache_key) do %>
  <%= content_tag :article, id: dom_id(object), class: "" do %>
    <% image_url = object.picture.url(size) %>
    <% title = object.title %>
    <% description = object.description %>
    <a class="..." href="<%= url  %>">
      <figure class="...">
        <img class="..." src="<%= image_url  %>" alt="Image for <%= title %>">
      </figure>
      <div class="...">
        <h3 class="...">
          <span class=".."><%= title %></span>
        </h3>
        <p><%= description %></p>
      </div>
    </a>
  <% end %>
<% end %>

Update: It is not. The cache is about 9 times slower than just concatenating this string.

You can see how long views take to render, and the allocations per view, in the logs, so you can test whether caching has any impact. If your view does a lot of allocations it might make sense to cache it, but everything still needs to be measured.
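If the logs are not conclusive, a quick micro-benchmark can settle it. Here is a minimal sketch, assuming the benchmark-ips gem and a configured Rails.cache; the Section model, its title, and the "card" key suffix are hypothetical stand-ins for whatever your partial actually renders:

```ruby
require "benchmark/ips"

object = Section.first # hypothetical model behind the partial

Benchmark.ips do |x|
  # Fetch the pre-rendered fragment from the cache store (a network round trip).
  x.report("cache fetch") do
    Rails.cache.fetch([object, "card"]) { "<article>#{object.title}</article>" }
  end

  # Just rebuild the string in Ruby every time.
  x.report("string build") do
    "<article>#{object.title}</article>"
  end

  x.compare!
end
```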

Also, it’s worth looking at the full-page context. If the whole page renders quickly enough, those small performance gains might not be a good trade-off for the increased code complexity.

This looks like a partial that’s rendered from a collection. If so, then yes, this is a prime example of where caching would eliminate N+1 performance problems. That object.picture call looks like an N+1 on the surface.
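For context, a minimal sketch of how such a collection is usually rendered with fragment caching; the view path, @sections, and the :object local are assumptions about how the partial above is wired up:

```erb
<%# Hypothetical index view. Assumes the partial above lives at
    app/views/sections/_section.html.erb and @sections is the collection.
    `cached: true` fetches all fragments from the cache store in a single
    multi-get instead of one round trip per item. %>
<%= render partial: "sections/section", collection: @sections, as: :object, cached: true %>
```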

Thanks. If the object is loaded with something like Object.joins(:picture), it won’t have the N+1. If there is no additional request in this partial, do you think it is worth caching it?
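One caveat worth flagging: a bare joins only adds a SQL JOIN and does not populate the association, so object.picture can still fire a query per row; includes (or eager_load) is what preloads it. A minimal sketch, keeping Object and :picture as the placeholder names from the post:

```ruby
# joins: adds "INNER JOIN pictures ..." to the SQL, but object.picture
# still triggers one extra SELECT per object when the partial renders.
objects = Object.joins(:picture)

# includes / eager_load: loads the pictures up front, so object.picture
# inside the partial is served from memory with no extra queries.
objects = Object.includes(:picture)
```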

If you’re just starting off and trying to get something functional, then no. But as the complexity grows, as the needs of that partial grow, and as the relationships hanging off your record increase, caching will become invaluable.

There are reasons beyond N+1s that caching helps with. You’ve got to look at the entire picture when trying to answer that question. Cherry-picking a single partial, with no details on the entire request’s rendering load or on what the underlying model code is doing, is not enough.

I am facing the opposite problem. We have a 7-year-old platform, we are building a second one, and we run both platforms on the same codebase. We’ve been caching successfully for the last few years, but now we are rethinking and revising parts of the platform, and this includes caching.

This partial can be shown on an index page or on other pages. What I have come to understand is that we are over-caching. We are caching this partial, so there is a request to the cache, and I cannot imagine how a request to an external service (Heroku Memcache, in this case) would be less expensive than concatenating a string.

I cannot directly measure it for this case, … because reasons, and this made me wonder: should we? I will try to measure it.

I have to apologize. It was easier than I thought to measure it.

Looking at New Relic data for the last 7 days, the cache breaks down like this:

| Category | Segment                             | % time | Avg calls (per txn) | Avg time (ms) |
|----------|-------------------------------------|--------|---------------------|---------------|
| Database | Memcached get                       | 2.0    | 4.01                | 9.11          |
| View     | sections/_section.html.erb Partial  | 0.3    | 0.0637              | 1.16          |

It seems that rendering this partial is ~9 times faster than getting it from cache.

Effective caching is more art than science. On a legacy app like yours, before the days of Russian doll caching and of using multi-fetch to retrieve an entire collection of cache fragments within a single memcache call, it was quite common for people to go cache-crazy (similar to premature optimization) and end up with a mess of individual calls to memcache that would often slow the page down.
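For reference, a minimal sketch of the Russian doll pattern mentioned above, assuming a hypothetical Section model whose items belong_to :section with touch: true, so updating an item also expires the outer fragment:

```erb
<%# Outer fragment: keyed on the section, so touching the section busts it. %>
<% cache section do %>
  <h2><%= section.title %></h2>

  <%# Inner fragments: one per item, nested inside the outer one. When a
      single item changes (and touches its section), only that item's
      fragment and the outer fragment are re-rendered; the rest are read
      from the cache in one multi-get via `cached: true`. %>
  <%= render partial: "items/item", collection: section.items, cached: true %>
<% end %>
```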

Rip it out and see how it goes.

Yes, that’s exactly what I am going to do. Thanks
