Maybe a simpler way to resolve this is to deprecate validates_uniqueness_of altogether and have it direct users toward creating a unique index on the table instead?
Often you want both. In the Elixir world, Ecto's original way of ensuring uniqueness was to attempt the insert, notice when it fails due to the unique constraint, and turn that database error into a user-friendly one. This is still the "main" way we do it.
However, imagine you’re trying to guarantee uniqueness of a username. You go to insert a record when somebody signs up, and the name is taken. When was that name taken? 99.999% of the time, it was taken yesterday or a year ago or whatever, not 10 milliseconds ago in a concurrent request. (This is why the validation works OK most of the time, especially for low-traffic sites; you might not notice the possible race condition until you start getting many sign-ups per second.)
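That race window can be made concrete with a small sketch. This is plain Ruby with an array standing in for a table (all names invented); the two "requests" are interleaved by hand so the check-then-insert gap is visible deterministically:

```ruby
# Two concurrent signups both run "check, then insert" for the same username.
rows = ["alice"] # existing usernames in the "table"

taken_a = rows.include?("bob") # request A's SELECT-style check: not taken
taken_b = rows.include?("bob") # request B checks before A inserts: also not taken

rows << "bob" unless taken_a   # A inserts
rows << "bob" unless taken_b   # B inserts too: the validation alone didn't stop it

rows.count("bob") # => 2 — a duplicate slipped through the race window
```

With a unique index on the column, the second insert would be rejected by the database instead of silently creating a duplicate, which is why the index is the real guarantee.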
You need the index for that rare race condition, but the "do a SELECT to check for it before we do the INSERT" validation approach will catch the vast majority of cases, and it will catch the uniqueness issue at the same time as other validation issues (eg blank first name). Whereas if you violate multiple database constraints with an INSERT, the database (at least this is true for PostgreSQL) will only tell you about the first one it notices, and trying the INSERT over and over to find all the issues isn't a great workflow.
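A minimal sketch of that "all errors at once" point, in plain Ruby with invented names (an array scan stands in for the SELECT):

```ruby
# A pre-insert validation pass reports every problem in one round trip,
# whereas violating several constraints in one INSERT surfaces only the
# first error the database notices.
def validation_errors(attrs, existing_usernames)
  errors = []
  errors << "first name can't be blank" if attrs[:first_name].to_s.strip.empty?
  # The SELECT-style uniqueness check: look for an existing row first.
  errors << "username has already been taken" if existing_usernames.include?(attrs[:username])
  errors
end

existing_usernames = ["alice"]
validation_errors({ first_name: "", username: "alice" }, existing_usernames)
# => both messages, so the user can fix everything in one submission
```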
Ultimately we added an unsafe_validate_unique function to Ecto to cover this. The name tells you not to rely on it, but it can improve user experience if you have that in front of your uniqueness constraint.
Using a uniqueness validation without an underlying unique index should produce a warning.
I think that’s the right approach. And turning the RecordNotUnique exception into a user-friendly error should be the default, IMO.
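The rescue-and-translate idea can be sketched like this. This is a standalone sketch: RecordNotUnique here is a stand-in class defined locally (in Rails the real exception is ActiveRecord::RecordNotUnique), and an array simulates the table:

```ruby
# Stand-in for the driver/ORM exception raised on a unique-index violation.
class RecordNotUnique < StandardError; end

USERNAMES = ["alice"]

def insert_user(username)
  # Simulates the database's unique index rejecting a duplicate INSERT.
  raise RecordNotUnique, "duplicate key value" if USERNAMES.include?(username)
  USERNAMES << username
  { ok: true }
end

def create_user(username)
  insert_user(username)
rescue RecordNotUnique
  # Translate the constraint violation into the same shape a validation
  # error would take, so the user sees a friendly message either way.
  { ok: false, errors: { username: "has already been taken" } }
end

create_user("alice") # a friendly error instead of a raw exception
```

The point of making this the default is that the unique index then backstops the race condition without the application ever showing the user a 500 page.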