Judging from how few resources I have found detailing pessimistic locking usage in Rails, one is inclined to say that optimistic locking is by far the most common approach. Unfortunately, optimistic locking simply does not provide any real guarantees once you have scaled past a single dyno (if it ever did).
When you deal with pessimistic locking, one thing you have to handle carefully is database deadlocks. Here is an example concern: Pessimistic Locking Concern for AR · GitHub
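The gist is not reproduced here, but the deadlock-handling part typically boils down to retrying the whole transaction. A minimal sketch of that idea, assuming Rails 5.1+ where the adapters raise ActiveRecord::Deadlocked (on older versions you would have to match the message on ActiveRecord::StatementInvalid instead):

```ruby
# Hypothetical helper: retry the whole transaction when the database
# reports a deadlock, with a small backoff between attempts.
def with_deadlock_retry(attempts: 3)
  tries = 0
  begin
    ActiveRecord::Base.transaction { yield }
  rescue ActiveRecord::Deadlocked
    tries += 1
    raise if tries >= attempts
    sleep(0.05 * tries) # brief backoff so the other transaction can finish
    retry
  end
end
```

Note that the retry restarts the entire transaction; retrying only the failed statement would run against an already rolled-back connection.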
Code implementing this concern may look like so:
```ruby
class Foo < ActiveRecord::Base
  include Lockable

  # Lock the grandparent row, reached through the parent association.
  LOCKABLE = { on: :grandparent, through: :parent }
end
```

```ruby
ActiveRecord::Base.transaction do
  @foo.acquire_pessimistic_lock!
  @foo.attributes = foo_params
  @foo.save!
end
```
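For readers who do not follow the gist link, here is a minimal sketch of the shape such a concern might take; lock_target, transaction_add_lock and transaction_has_lock_for are my guesses at the gist's helpers, and the naive instance-level bookkeeping stands in for whatever per-transaction tracking the real concern does:

```ruby
module Lockable
  extend ActiveSupport::Concern

  # Resolve the record to lock by walking :through to :on,
  # e.g. { on: :grandparent, through: :parent } -> parent.grandparent.
  def lock_target
    options = self.class::LOCKABLE
    send(options[:through]).send(options[:on])
  end

  def acquire_pessimistic_lock!
    return true if transaction_has_lock_for(lock_target)
    transaction_add_lock(lock_target.lock!) # SELECT ... FOR UPDATE
    true
  end

  private

  # Naive bookkeeping so re-entrant calls don't re-lock the same row.
  def transaction_add_lock(record)
    (@locked_records ||= []) << record
    record
  end

  def transaction_has_lock_for(record)
    Array(@locked_records).include?(record)
  end
end
```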
acquire_pessimistic_lock! will grab the lock on the LOCKABLE target (grandparent), making it impossible for other threads/processes to manipulate dependent parent state at the same time. If they could do so, they might race and invalidate AR validations, which would still return true for the current thread/process because it is operating on stale data.
The problem which keeps this from being a simple change is that there is a large codebase already written without pessimistic locking, and in fact the call to .save! might not always happen in the same module where the changes are written. Since lock! reloads the record and clobbers any dirty changes, one consideration is to re-apply the clobbered changes after the reload:
```ruby
# Note that lock! reloads the record and clobbers Dirty changes,
# so we capture them first and re-apply them afterwards.
def acquire_pessimistic_lock!
  unless transaction_has_lock_for(lock_target)
    # changes maps attr => [old, new]; keep the new values.
    future_state = lock_target.changes.each_with_object({}) do |(key, val), h|
      h[key] = val[1]
    end
    lock = lock_target.lock!
    transaction_add_lock(lock)
    future_state.each { |attr, value| lock.send(:write_attribute, attr, value) }
  end
  true
end
```
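With the re-application in place, pending edits on the lock target survive the reload. A hypothetical illustration, with some_flag standing in for any attribute on the grandparent:

```ruby
ActiveRecord::Base.transaction do
  grandparent = @foo.parent.grandparent
  grandparent.some_flag = true     # dirty change made before locking
  @foo.acquire_pessimistic_lock!   # lock! reloads, then changes are re-applied
  grandparent.changed?             # => true, the pending edit was not lost
  grandparent.save!
end
```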
Another consideration would be to avoid sprinkling the codebase with explicit locking behaviour and to do this implicitly, e.g.:

```ruby
before_save :acquire_pessimistic_lock!
```
combined with re-application of dirty state (as above), or by patching write_attribute:
```ruby
def write_attribute(attr_name, value)
  # Treat every incoming write as a change that needs the lock.
  attribute_changed(attr_name, read_attribute(attr_name), value)
  super
end

def attribute_changed(*)
  acquire_pessimistic_lock!
end
```
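Because the patched write_attribute calls super, it has to sit ahead of ActiveRecord's implementation in the ancestor chain. One hypothetical way to wire it up per model, with an extra guard against no-op writes:

```ruby
module ImplicitPessimisticLocking
  def write_attribute(attr_name, value)
    # Skip the lock when the write would not change anything.
    acquire_pessimistic_lock! unless read_attribute(attr_name) == value
    super
  end
end

class Foo < ActiveRecord::Base
  include Lockable
  include ImplicitPessimisticLocking # sits in front of AR's write_attribute
end
```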
I have so far not been very happy with any of the variations I could come up with. While implicit pessimistic locking may be nice in that it hopefully eliminates human error as a potential bug source, it can have some rather weird behaviour along the edge cases. For example (below), .lock! will call reload, which will scrap the child reference from @parent, including any possible changes:
```ruby
ActiveRecord::Base.transaction do
  @parent.child.status = 'archived'
  @parent.lock!
  @parent.child.save!
end
```
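For this specific ordering problem the explicit workaround is simply to lock before touching any associations, so the reload happens while everything is still pristine - though that is exactly the kind of discipline the implicit approach was meant to remove:

```ruby
ActiveRecord::Base.transaction do
  @parent.lock!                      # reload happens before anything is dirty
  @parent.child.status = 'archived'  # child is loaded fresh, after the lock
  @parent.child.save!
end
```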
Most of the issues with the above come from acquiring the lock on the resource after it has been loaded, rather than in the initial select. It's tempting to want to patch define_callbacks into ActiveRecord::QueryMethods, but that is not very simple: the resources aren't resolved until the entire query chain is loaded (and ActiveSupport::Callbacks would need a wrapper to work correctly with modules in this case), so a run_callbacks in the :lock method would be insufficient for implementing something like the Lockable concern, where the lock target can vary based on the resource class.
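For completeness, locking in the initial select is already available per call site through ActiveRecord's lock query method; the hard part discussed above is getting this behaviour implicitly for a class-dependent lock target, not the primitive itself:

```ruby
ActiveRecord::Base.transaction do
  # Emits SELECT ... FOR UPDATE, so the row is locked as it is read.
  foo = Foo.lock.find(params[:id])
  foo.attributes = foo_params
  foo.save!
end
```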
Since data integrity is a major concern - and, I think, a framework-level as well as an application-level concern - and pessimistic locking has been left rather open-ended, I was hoping to get some feedback / best-practice recommendations from Rails developers and others who have dealt with similar issues. It would also be great to know if there are any plans to implement some kind of standard for pessimistic locking as part of AR in the future.
Alex