Recently, I’ve been struggling with a certain situation in ActiveStorage. I use the direct upload option for uploading files to my app; nonetheless, I’ve found that replacing a file purges the last attachment and generates a new record afterwards. My question, then, is: is there a way to update only the associated blob and link it to the existing attachment, instead of purging and reassigning the attachment?
The reason behind this is that I need a way to serve files through short permalinks; thus, I can’t rely on blob or attachment ids to keep track of a certain file when it gets updated. Neither can I rely exclusively on filenames, because duplicated names could make older files inaccessible that way. Lastly, I can’t rely on signed ids because my end users require a short, human-readable URL.
Hey. I’m having some difficulty understanding what exactly you are trying to do.
Attachments are simply a join table. They should be irrelevant from the point of view of your app.
Every new file is assigned a new blob, so even if you just updated the attachment to point to the new blob (which is not possible) the urls would still change (both the public and private ones) because they are based on the blob’s key.
Instead, could you describe the problem you are having (instead of what you believe is the solution) so we might be able to provide an alternative? I think what you want is a stable url even if the file changes?
What my app requires is a permalink to blobs: a short, human-readable one (e.g. http://host.com/files/file_id/filename.pdf) that doesn’t change when said blob is updated.
With a lot of tweaking to ActiveStorage routes and methods, I can now access blobs by their database id and filename; nonetheless, these blobs are replaced every time they are updated in the associated models, causing the URL to change altogether.
That’s why I thought of two ways of solving it:
Find a way for ActiveStorage to update only the blob associated with the attachment, retaining the attachment id and keeping the file’s name if necessary (e.g. http://host.com/files/attachment_id/filename.pdf).
Based only on the filename at upload time, add a constraint to the blobs table to force filename uniqueness. That way, a URL can be formed in a very distinctive way (e.g. http://host.com/files/filename.pdf).
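For the second idea, the uniqueness constraint could be enforced at the database level. A minimal sketch, assuming a recent Rails version (the migration name is made up, and note that this makes any duplicate-filename upload fail, so it would also need application-level handling):

```ruby
# Sketch of option 2: a unique index on Active Storage's blobs table,
# so that a filename alone identifies a single blob.
class AddUniqueIndexOnBlobFilename < ActiveRecord::Migration[7.0]
  def change
    add_index :active_storage_blobs, :filename, unique: true
  end
end
```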
The reason this needs to be done is that blobs are required by multiple models, and their URLs are later sent via email to mailing lists, so there’s a need to keep a short, human-readable permalink to each file, even when it is modified in the original model.
I’m sorry if I’m not being clear; if anything else is needed to clarify, I’ll do my best.
It seems you may be exposing too much of the Active Storage internals to the end user. How about adding a layer on top to provide the permalink functionality? Like a new model?
Yeah, I realize I’m practically making a Frankenstein out of ActiveStorage internals.
I thought at first about adding a concentrating model for files, so everything could be referenced by that record’s id (instead of the AS tables), but it seems it would take a lot of overhaul to relocate each model’s blobs into this concentrating model, knowing beforehand there are quite a lot of records involved.
The best way is using a concentrating model, like you mentioned, and creating a custom controller.
Model:
class Attachment < ApplicationRecord
belongs_to :record, polymorphic: true
has_one_attached :file
end
Route:
get "files/:id/:filename", to: "attachments#show"
Controller:
class AttachmentsController < ActiveStorage::Blobs::ProxyController
private
def set_blob
@blob = Attachment.find(params[:id]).file.blob
end
end
If you don’t want to do that, then your second option works (as long as you are using has_one_attached instead of has_many_attached):
Route:
get "files/:model/:id/:filename", to: "attachments#show"
Controller:
class AttachmentsController < ActiveStorage::Blobs::ProxyController
private
def set_blob
@blob = params[:model].classify.constantize.find(params[:id]).file.blob
end
end
Hey! Thank you very much @brenogazzola. Just a couple of doubts I have from the code you listed:
Is ProxyController needed for this? I haven’t tried that approach; I’ve been working with the RedirectController and disk storage. So if you could enlighten me on the pros of ProxyController over RedirectController for this specific solution, that would be very helpful.
For my current approach, I’ve set up a custom method that dispatches the file through streaming; nonetheless, I think this is due to the hackiness of my code (especially set_blob and stream_blob), like this:
def set_host
ActiveStorage::Current.url_options = {host: request.host}
end
def stream_blob
  response.headers["Content-Type"] = @blob.content_type
  # Serve inline only for content types Active Storage allows inline; force a download otherwise.
  response.headers["Content-Disposition"] =
    ActiveStorage.content_types_allowed_inline.include?(@blob.content_type) ? "inline" : "attachment"
  @blob.download do |chunk|
    response.stream.write(chunk)
  end
rescue ActiveStorage::FileNotFoundError
  head :not_found
ensure
  response.stream.close
end
In this case, I assume the standard AS methods are used, so there’s no need to tweak methods for streaming or to set the host?
Can I add a custom security layer for private files in the set_blob of the custom AttachmentsController? For example, excluding files based on certain conditions to decide whether the request can be processed, or otherwise returning a “Not found”.
RedirectController should work too. I used the proxy one because it’s what I use. For comparison, proxy was made specifically to allow CDN caching of images. If you are serving files in general, redirect is probably better, since it doesn’t keep a Puma worker busy while the file streams.
If you inherit from AS’s BaseController you won’t need to set those, as AS should handle it.
Yes, you can. But instead of inheriting from the redirect/proxy controller, you might inherit from BaseController (it’s what I do for my custom controller).
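For illustration, a minimal sketch of such a controller, assuming the Attachment model from earlier and a hypothetical publicly_visible? predicate (both are assumptions, not Active Storage API):

```ruby
# Hypothetical permalink controller: inherits from ActiveStorage::BaseController
# (which sets Current.url_options), checks access, then redirects to a
# short-lived signed service URL, much like the RedirectController does.
class AttachmentsController < ActiveStorage::BaseController
  def show
    attachment = Attachment.find_by(id: params[:id])
    # Treat missing and private files the same, so private files are
    # indistinguishable from nonexistent ones.
    if attachment.nil? || !attachment.publicly_visible?
      head :not_found
    else
      redirect_to attachment.file.blob.url(disposition: params[:disposition]),
                  allow_other_host: true
    end
  end
end
```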
Thank you very much, @brenogazzola. I finally reached an effective approach to solve this situation (with a bit of ciphering to generate immutable tokens), eliminating most of my initial tweaking.
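For anyone landing here later, the token idea can be sketched in plain Ruby with an HMAC (the secret and token length below are assumptions for illustration): derive a short, stable token from the attachment’s database id, so the permalink never changes even when the blob behind it is replaced.

```ruby
require "openssl"
require "base64"

# Assumed secret; in Rails you would derive a key from secret_key_base instead.
SECRET = "replace-with-your-application-secret"

# Derives a short, URL-safe, deterministic token from the attachment id.
# The same id always yields the same token, so the permalink is immutable.
def permalink_token(attachment_id, length: 8)
  digest = OpenSSL::HMAC.digest("SHA256", SECRET, attachment_id.to_s)
  Base64.urlsafe_encode64(digest, padding: false)[0, length]
end

# e.g. a permalink path: "files/#{permalink_token(42)}/report.pdf"
```

Since a truncated HMAC is one-way, the app would store the token on the attachment record and look files up by it, rather than trying to decode it.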
Gotta dig deeper into the internals to keep on optimizing.