Avoiding duplicate queries for concurrent requests for an unpopulated cache key, i.e., a cache stampede

Does anybody know why Rails.cache.fetch does not support handling for cache stampedes?

Ruby on Rails supports a race_condition_ttl option on fetch, which helps when an existing cache value expires. However, it doesn’t help with the initial population of a cache key.

I’m working on an app with 20+ Heroku P-L dynos, thus running hundreds of concurrent puma threads. After deployment, many cache keys change to reflect a new release.

If 50 threads request the same cache key simultaneously, all 50 threads will invoke the same complicated queries, putting excessive load on the database.
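A minimal in-process sketch of the desired behavior (LockedCache is a hypothetical class, not a Rails API): serialize population behind a lock and double-check the store inside it, so only the first thread runs the expensive block and the other 49 reuse its result. A real multi-dyno deployment would need a per-key distributed lock rather than a single process-local Monitor:

```ruby
require "monitor"

# Hypothetical in-process cache that lets only one thread compute a missing key.
class LockedCache
  def initialize
    @store = {}
    @lock  = Monitor.new
  end

  def fetch(key)
    @lock.synchronize do
      # Double-check inside the lock: another thread may have just populated it.
      return @store[key] if @store.key?(key)

      @store[key] = yield
    end
  end
end

cache = LockedCache.new
computations = 0

threads = 50.times.map do
  Thread.new do
    cache.fetch("release-v2") do
      computations += 1      # stands in for the complicated queries
      :expensive_result
    end
  end
end
results = threads.map(&:value)
```

All 50 threads receive :expensive_result, but the block runs once. (A single global lock also serializes unrelated keys; a production version would lock per key.)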

Has anybody seen any solutions in the Rails community for this behavior? Would this be a beneficial addition to ActiveSupport::Cache::Store#fetch?

The complicated part of the distributed lock can be handled using a library like dv/redis-semaphore (github.com/dv/redis-semaphore).
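A sketch of how such a semaphore-backed fetch might look. stampede_safe_fetch is a hypothetical helper, not part of Rails; the semaphore is anything responding to #lock with a block, which is the contract dv/redis-semaphore exposes. In the runnable sketch a Mutex-backed stand-in is used so no Redis server is required:

```ruby
# Hypothetical stampede-safe fetch. In production you would pass e.g.
#   Redis::Semaphore.new(:my_key, redis: Redis.new)
# as the semaphore; any object responding to #lock { ... } works here.
def stampede_safe_fetch(cache, key, semaphore)
  cached = cache[key]
  return cached unless cached.nil?

  semaphore.lock do
    # Re-check after acquiring the lock: a competing process may have
    # populated the key while we were waiting.
    cache[key] = yield if cache[key].nil?
    cache[key]
  end
end

# Stand-in with the same #lock-with-block contract as Redis::Semaphore.
class LocalSemaphore
  def initialize
    @mutex = Mutex.new
  end

  def lock(&block)
    @mutex.synchronize(&block)
  end
end

cache = {}
calls = 0
sem = LocalSemaphore.new

2.times { stampede_safe_fetch(cache, "report", sem) { calls += 1; :heavy } }
```

The check-lock-recheck shape is the essential part: without the recheck inside the lock, every waiting process would still recompute after the first one finishes.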

Maybe this is beyond the scope of default Rails? Or perhaps it’s an excellent addition?

References

What is a cache stampede and how we solved it by writing our own gem

If you are not setting some sort of lock to recalculate the cache while using probabilistic early expiration, you will end up with multiple processes hammering your cache and underlying systems, computing and re-writing the same value.
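The probabilistic early expiration mentioned here is commonly implemented as "XFetch" (from the Vattani, Chierichetti, and Lowenstein paper on optimal probabilistic cache stampede prevention): each reader recomputes early with a probability that rises as expiry approaches, spreading recomputations out instead of letting them collide at the expiry instant. A minimal sketch, with a function name and parameters of my own choosing:

```ruby
# Decide whether this reader should recompute before expiry.
# delta: how long the recomputation takes; beta > 1 recomputes more eagerly.
# Recompute when: now - delta * beta * ln(rand) >= expiry
# (ln(rand) is negative, so the subtracted term pushes "now" forward randomly.)
def recompute_early?(now:, expiry:, delta:, beta: 1.0, rng: Random.new)
  now - delta * beta * Math.log(rng.rand) >= expiry
end
```

Far from expiry the randomized offset almost never reaches the deadline; close to expiry it almost always does, so one lucky reader refreshes the key while the rest keep serving the cached value. As the quote notes, this still wants a lock so that only that one reader actually recomputes.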

What do you think about cache warming? If you consider multiple simultaneous requests, the cache needs to be warmed up first.