I have a validator that talks to an API and saves the result in a log table in the database.
If the validation fails, ActiveRecord rolls back the transaction wrapping the save of the model that contains the validator, and that rollback also discards the log row the validator just wrote.
Is it possible to move the validator database transactions outside of the model transactions?
Is it possible at all to do database writes inside validators?
Is there some obvious way to do this differently I am not seeing?
There are three approaches you can take here.

1. Do this check explicitly outside of model validation, so you’re outside the db transaction. Probably in the controller:

   thing.prevalidate && thing.save

2. Set up a separate db connection for your logging model. This could involve a separate config entry in database.yml, but the naive approach is to just call establish_connection with no args in your model, and it’ll reuse the current environment credentials. We do this at my day job, and it’s been pretty ok. One gotcha is that this usage is likely to result in broken foreign keys (and, if your db reuses ids after rollback, flat-out incorrect foreign keys); another is that transactional tests won’t work right with the logging model, because Rails will only auto-transaction the main db connection.

3. Log externally from the DB (in redis, or to disk, or via email) and then pull reports from those locations instead of from the DB. This may be useful in some contexts and entirely inappropriate in others.
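The establish_connection approach could be sketched roughly like this; ApiCheckLog and its columns are made-up names for illustration, not anything from your schema:

```ruby
# app/models/api_check_log.rb  (hypothetical logging model)
class ApiCheckLog < ApplicationRecord
  # Called with no arguments, this opens a *second* connection using
  # the current environment's database config. Writes through this
  # model are therefore not part of the caller's open transaction
  # and will survive its rollback.
  establish_connection
end

# Inside the validator (column names are assumptions):
ApiCheckLog.create!(
  user_id: record.id,            # beware: this id is discarded if the outer tx rolls back
  response: api_response.to_json
)
```

Note the foreign-key gotcha from above is visible right in the sketch: `user_id` may point at a row that never commits.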
While I would shy away from doing any network communication while validating a model, you could initialize the log record in memory during validation and only save it after the transaction is complete. You’d probably also have to set up an after_rollback callback to ensure it still gets saved when the save fails.
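A sketch of that idea, with RegulationApi and RegulationCheckLog as made-up stand-ins for your API client and log model:

```ruby
class User < ApplicationRecord
  validate :check_regulations
  # Both callbacks run after the outer transaction has finished,
  # so the log write happens in its own transaction either way.
  after_commit   :persist_regulation_log
  after_rollback :persist_regulation_log

  private

  def check_regulations
    result = RegulationApi.check(attributes)  # hypothetical API client
    # Build the record in memory only; saving it here would be
    # rolled back together with the user if validation fails.
    @regulation_log = RegulationCheckLog.new(result: result.to_json)
    errors.add(:base, "failed regulation check") unless result.ok?
  end

  def persist_regulation_log
    @regulation_log&.save!
  end
end
```

Because `save` wraps validations and callbacks in one transaction, a validation failure triggers a rollback and after_rollback fires, so the log survives both outcomes.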
Since what you are doing is somewhat unusual for Rails and Active Record, you will benefit from having all of it set up explicitly in some really obvious code rather than wrapped in Active Record callbacks. Callbacks will make it difficult to understand, test, and change later on, especially for someone other than you.
Because Rails validations run inside an open database transaction, doing things like network calls there will cause problems for your app even at moderate scale: a transaction held open while you wait on a slow API keeps its locks, and that locking can be extremely difficult to observe and predict, much less diagnose when “something weird” is going on in your database.
Without knowing the specific code it’s hard to give specific advice, but it sounds like you have a multi-step workflow:
- Accept data from user
- Send some data to an API for validation
- If validation succeeds, write data to DB and write row to the log table
- If validation fails, write nothing, then notify the user of the problem
If you don’t store the data from step 1 into your model, where would it go?
It could go into a job payload, e.g. you queue a job with the data as JSON; the job does the API call and then triggers the appropriate next step.
Or, it could go into a new model built to hold this temporary data. In step 3, you copy the data to your current model (and optionally delete the new model instance).
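The job-payload option could look roughly like this; RegulationCheckJob, RegulationApi, and RegulationCheckLog are all made-up names standing in for your own classes:

```ruby
class RegulationCheckJob < ApplicationJob
  # Carries the raw attributes rather than a model instance, so
  # nothing touches the users table until the API has answered.
  def perform(user_attributes)
    result = RegulationApi.check(user_attributes)       # hypothetical client
    RegulationCheckLog.create!(result: result.to_json)  # own transaction; never rolled back
    if result.ok?
      User.create!(user_attributes)
    else
      # notify the user of the problem, e.g. via a mailer or a status record
    end
  end
end

# Enqueued from the controller with plain, serializable data:
RegulationCheckJob.perform_later(user_params.to_h)
```

Because the log write and the user write happen in separate transactions, a failed check leaves the log row in place with no rollback to fight.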
I will probably use suggestion 2 from @jamie_ca, thanks.
The use case is something like this:
There is a user model. Some attributes of the model need to conform to specific regulations. The regulation checking is exposed via an API to centralize the rule set. Support personnel have to be able to look at a history of all regulation API checks performed by the current application.
I wanted to put this check into a validation, so the model can be used in the usual way by others. Some kind of “UserChangeService” would require people to know about its existence and cause problems when integrating third-party libraries that expect to operate on a user model.