ActiveRecord::RecordNotUnique when attaching existing ActiveStorage::Blob to another model

Hi y’all,

I’d been meaning to dig into this further to see whether it’s a bug or something wrong in my code/setup, but “A May of WTFs” was announced, so I figured I’d throw it out there.


  • Rails edge
  • PostgreSQL backend, specifically in production
  • pg gem version 1.2.3
  • S3-compatible backend for the ActiveStorage service

Sample model:

DraftProject --> has_many_attached :files
Project --> has_many_attached :files

Problem: I’ve noticed that sometimes, when attaching blobs from the DraftProject (through files) to the Project model, ActiveRecord::RecordNotUnique is raised. (i.e. the blob is already attached to DraftProject, and I’m now associating the same blob with Project.)
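For concreteness, the copy step looks roughly like this. Since the snippet has to stand alone, the has_many_attached proxy is stubbed with plain Ruby (Blob, FakeFiles, and the variable names are stand-ins, not real ActiveStorage):

```ruby
# Stand-ins for the ActiveStorage pieces, just enough to show the call pattern.
Blob = Struct.new(:id)

class FakeFiles
  attr_reader :blobs

  def initialize
    @blobs = []
  end

  # In real ActiveStorage, attach(blob) inserts a row into
  # active_storage_attachments pointing at an existing blob (no re-upload).
  def attach(blob)
    @blobs << blob
  end
end

draft_files   = FakeFiles.new  # stands in for draft_project.files
project_files = FakeFiles.new  # stands in for project.files

draft_files.attach(Blob.new(42))

# The copy step that intermittently raises ActiveRecord::RecordNotUnique:
draft_files.blobs.each { |blob| project_files.attach(blob) }
```

The key point is that the second attach reuses the existing blob; only a new join row in active_storage_attachments should be inserted.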

Looking through the logs, it looks like Postgres is raising a PG::UniqueViolation on the index over ["record_type", "record_id", "name", "blob_id"]. When I query ActiveStorage::Attachment with the offending parameters, no records are found (i.e. there is no actual row violating the uniqueness condition).

Right now, I’m getting around this with a rescue block like so:

draft_project.files.each do |file|
  project.files.attach(file.blob)   # `project` is the destination record
rescue ActiveRecord::RecordNotUnique, PG::UniqueViolation
  # the blob is already attached, so the duplicate insert is safe to skip
end

This seems to fix it. Would appreciate the community’s thoughts. Is it a bug in Rails, in Postgres, in the way I’m implementing the code, or something else?
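An alternative I’ve been weighing is to check for an existing attachment row before inserting, rather than rescuing afterwards. Sketched below with a plain-Ruby stand-in for the unique index (in a real app the guard would be a query against ActiveStorage::Attachment; AttachmentIndex and its methods are hypothetical names):

```ruby
require "set"

# Stand-in for active_storage_attachments with its unique index over
# [record_type, record_id, name, blob_id]. Not real ActiveStorage.
class AttachmentIndex
  def initialize
    @rows = Set.new
  end

  # Mimics a plain INSERT: raises on a duplicate key, like PG::UniqueViolation.
  def attach!(record_type, record_id, name, blob_id)
    key = [record_type, record_id, name, blob_id]
    raise "duplicate key (unique violation)" unless @rows.add?(key)
    key
  end

  # Guarded version: skip the insert when the row already exists
  # (in Rails this would be an exists? check before calling attach).
  def attach_if_missing(record_type, record_id, name, blob_id)
    @rows.add?([record_type, record_id, name, blob_id]) # nil if already present
  end
end

index = AttachmentIndex.new
index.attach!("Project", 1, "files", 42)
index.attach_if_missing("Project", 1, "files", 42) # no-op instead of raising
```

Note that check-then-insert is still racy across connections; only the database’s unique index is authoritative, so the rescue stays useful as a backstop.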

For reference, the Postgres database is running at DigitalOcean under their managed database product.


I know that one of the developers on my team struggled with duplicating ActiveStorage attachments recently – let me see if he’d be willing to share his experiences here.

Possible stupid question – are DraftProject and Project backed by separate tables? If DraftProject inherits from Project using STI, I can see something going wonky in how ActiveStorage determines which record_type to store.

My best guess other than that is that autosave is firing twice. (There have been a few other autosave WTFs.) You wouldn’t be able to see it afterward because of transaction rollback. You might not even be able to see it during, because prior to transaction commit the record would only be visible to the PG connection running the transaction.

I had a similar issue with duplicate attachments. When I relied on the attachment being generated automatically, it stubbornly referenced the original record_id even when attached to a new record, though I believe a uniqueness constraint explains why it wasn’t being persisted. I ended up manually creating an attachment that referenced the new record, which seemed to do the trick. Hope this helps.
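For what it’s worth, the shape of the row I ended up building looked like the struct below (a plain-Ruby stand-in, not real ActiveStorage, and the IDs are made up): point record_type/record_id at the new record while reusing the blob_id.

```ruby
# Stand-in for a row in active_storage_attachments (hypothetical IDs).
AttachmentRow = Struct.new(:record_type, :record_id, :name, :blob_id)

# The row ActiveStorage created originally, tied to the draft record.
existing = AttachmentRow.new("DraftProject", 7, "files", 42)

# Manually-built attachment: new record, same blob.
manual = AttachmentRow.new("Project", 9, "files", existing.blob_id)
```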

The DraftProject and Project models are backed by separate tables (i.e. no STI).

The ActiveRecord::RecordNotUnique / PG::UniqueViolation only happens sometimes, which makes me think this is something happening at the database layer (vs. a Rails code issue). I’m wondering if there’s some locking involved in the #attach method (or the upstream metaprogramming it calls).

Regarding the save happening twice – that would make sense, but I can’t see where I’d be calling #attach twice. I know (from some StackOverflow / GitHub searches) that some people stumbled onto this issue using Direct Upload, which isn’t happening here; in this case, the files are attached directly from an existing blob. Let me know if you’re aware of any other way such a double call might occur.

@Jhovahn’s suggestion is also a good workaround. Regardless, the engineer in me would like to figure out why this happened. :slight_smile: