Yes, that at any rate. However, don't confuse transaction blocks with
mutual exclusion blocks. A transaction, by itself, doesn't preclude
concurrent users/processes from reading and updating the same data. The
potential for conflict arises only through concurrent updates.
The race condition in your unadorned code results from a difference
between time of check (state == x) and time of change (state = ...).
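To make that gap concrete, here is the race-prone check-then-act
pattern; Order and its state column are hypothetical stand-ins for
your model:

  # Between the check and the save, another process may have
  # read and updated the very same row.
  order = Order.find(params[:id])
  if order.state == 'pending'    # time of check
    order.state = 'shipped'      # time of change
    order.save
  end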
There are two strategies to avoid the resulting inconsistencies:
Optimistic locking, as Fred indicated in another reply. With optimistic
locking, you run headlong into the race condition, but when writing to
the database you ensure that the write can only succeed if it is based
on consistent data. On updating a record, ActiveRecord checks that the
lock_version of the record as currently stored is the same as the
version the object carried when it was read. An unchanged lock_version
indicates that there haven't been any intervening updates.
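A minimal sketch of the failure mode, assuming a hypothetical Account
model whose table has a lock_version column:

  # Two in-memory copies of the same row, as two processes would see it.
  first  = Account.find(1)
  second = Account.find(1)

  first.balance = 100
  first.save     # succeeds, increments lock_version in the database

  second.balance = 200
  second.save    # raises ActiveRecord::StaleObjectError, because
                 # second still carries the old lock_version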
Pessimistic locking is another way. You can either #lock! an object you
already have or use #find(..., :lock => true) to get a locked object to
begin with. Locking an object like this precludes any changes to the
corresponding database row until the locking transaction is either
committed or rolled back.
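Both variants only make sense inside a transaction; a sketch, again
with the hypothetical Account model:

  Account.transaction do
    account = Account.find(1, :lock => true)  # SELECT ... FOR UPDATE
    account.balance -= 10
    account.save!
  end  # the row lock is released on commit or rollback

  # Or, with an object you already have:
  account = Account.find(1)    # fetched earlier, without a lock
  Account.transaction do
    account.lock!              # reloads the row, this time FOR UPDATE
    account.balance -= 10
    account.save!
  end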
Rails gives you optimistic locking automatically for tables that have
the requisite lock_version column. Pessimistic locking you have to do
explicitly. As a guess, I'd say that pessimistic locking is only worth
your and the database's effort if conflicts are likely.
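Enabling the optimistic variant is just a matter of adding the column,
e.g. in a migration (the table name is a placeholder):

  add_column :accounts, :lock_version, :integer, :default => 0, :null => false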
At any rate, with both locking strategies you have to take into account
the possibility of a conflict. With optimistic locking, you get an
ActiveRecord::StaleObjectError exception in that case. I'm not sure
about pessimistic locking, but I guess you'll get an indistinctive
database error.
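For the optimistic case, a common pattern is rescue-and-retry (again
with the hypothetical Account):

  begin
    account = Account.find(1)
    account.balance -= 10
    account.save!
  rescue ActiveRecord::StaleObjectError
    retry   # re-runs the begin block and so re-reads the record;
            # guard against endless retries in real code
  end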