wam r. wrote in post #1054916:
I don't know if using pessimistic locking is the best way to do it,
but here is why I used this:
- every X hours a daemon runs and updates records
- the thing is, a single daemon run can take Y hours, with Y > X
So there is a risk that two instances of the daemon try to update the
same record, which is why each daemon needs to acquire the lock before
it can do anything on a given record.
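Roughly, that pattern looks like this with ActiveRecord's row-level
locking (a minimal sketch only; the Record model and the
processed_at/updated_at columns are placeholders for the real daemon
logic):

  # Each record is locked only for the duration of its own update, so a
  # second daemon instance blocks on that row instead of clobbering it.
  Record.where("updated_at < ?", 1.hour.ago).find_each do |record|
    record.with_lock do
      # with_lock reloads the row with SELECT ... FOR UPDATE inside a
      # transaction, so it is safe to mutate it here.
      record.update!(processed_at: Time.current)
    end
  end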
I guess an alternative could be to try to make sure that a daemon only
starts if the previous one has finished, but this was not an option in
my case.
Adding an index was actually fairly easy.
Thanks for the clarification. I hope I didn't sound condescending in my
previous reply. I was asking because I was interested to know of a case
where pessimistic locking might be useful in a web environment.
This does make some sense in that case. Have you attempted to measure
the performance effect on user-driven queries while the daemon is
performing this pessimistic-locking batch update? It would be
interesting to know whether the overhead of acquiring the locks is
significant compared to using optimistic locking and handling the
likely few unresolvable conflicts that might arise.
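If you did want numbers, a rough daemon-side comparison could be as
simple as something like this (just a sketch; Record, the 1000-row
sample, and touch as the stand-in update are all hypothetical, and it
won't capture contention with live user queries):

  require "benchmark"

  ids = Record.limit(1000).pluck(:id)

  # Batch update acquiring a row lock (SELECT ... FOR UPDATE) per record.
  with_locks = Benchmark.realtime do
    ids.each { |id| Record.transaction { Record.lock.find(id).touch } }
  end

  # The same batch without explicit row locks.
  plain = Benchmark.realtime do
    ids.each { |id| Record.transaction { Record.find(id).touch } }
  end

  puts "with row locks: #{with_locks.round(2)}s, plain: #{plain.round(2)}s"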
I'd imagine that most optimistic locking issues could be resolved by
catching the optimistic failure, merging the changes, and re-saving. Of
course that leaves the possibility of a conflict on changes to the same
field, where another strategy might be needed: something like "last
write wins," or "user changes override daemon changes" (or vice versa).
In a worst-case scenario, unresolvable conflicts might just have to be
recorded and skipped over by the daemon process.
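To illustrate the rescue-and-merge idea, it might look roughly like this
with Rails' optimistic locking (a sketch only; it assumes a lock_version
column on a hypothetical Record model, and a real daemon would probably
want to cap the retries):

  record = Record.find(record_id)   # record_id: whichever row the daemon is on
  record.processed_at = Time.current

  begin
    record.save!
  rescue ActiveRecord::StaleObjectError
    # A concurrent save won the race. Reload to pick up the other party's
    # version of every field, reapply only the daemon's own change, and
    # save again: the daemon's value wins for that one column, while any
    # other fields the user changed are preserved.
    record.reload
    record.processed_at = Time.current
    record.save!
  end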
Recording and skipping conflicts might sound "bad" on the surface, but
compared with conflicting changes between two actual users, it's likely
not as big a deal as it seems. In the end it might actually turn out to
be safer than
introducing the possibility of deadlocking the database due to a
pessimistic locking scheme.
Of course this all depends on the specific nature of the app (e.g.
whether user changes are fairly isolated, or multiple users often
manipulate the same data).
In the end, though, it all comes down to metrics. It's far too easy to
spend too much time optimizing, only to find out later that the time
spent gained you almost nothing.