(Context: once in a while I bump into something which seems “obviously wrong” on first sight, but my experience has taught me that, with dozens or more smart people working and using this, I’m probably just lacking context and the “obviously wrong” thing actually makes sense)
Let’s assume we have a job MyJob with

perform(param1:, param2:)

Now, when instantiating or enqueuing it:
job = MyJob.new(param1:)
ActiveJob.perform_all_later([job])
# or
MyJob.perform_later(param1:)
the enqueuing happens without issue, but the job fails during execution because of the missing keyword param2:.
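To make the timing concrete, here is a plain-Ruby sketch (no Rails required) of why the error only surfaces at execution: Ruby validates keyword arguments when a method is actually invoked, and ActiveJob only stores the arguments at enqueue time, calling perform later on the worker. The MyJob class below is a stand-in for the real job, not actual ActiveJob machinery.

```ruby
# Stand-in for the job class; only the perform signature matters here.
class MyJob
  def perform(param1:, param2:)
    "#{param1} #{param2}"
  end
end

# "Enqueue": the arguments are just captured/serialized.
# No signature check happens at this point.
stored_args = { param1: "foo" }

# "Execute" later: only now does Ruby raise ArgumentError.
begin
  MyJob.new.perform(**stored_args)
rescue ArgumentError => e
  puts e.message  # => missing keyword: :param2
end
```

So nothing about the enqueue path ever compares the supplied hash against the perform signature; the mismatch is discovered by the Ruby VM itself, on the worker, at call time.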
I’m interested in why it was designed this way.
My understanding so far is this:
Let’s assume a web request (a user submits a form) is supposed to kick off a heavy operation in the background. The request will succeed, but the job will fail. The dev can then see what’s wrong, “update” the enqueued job, and trigger it to run again. But that only works if the missing data can be recovered; I can imagine cases where app developers would prefer to fail early, at enqueue time.
Another reason: maybe it’s too hacky to inspect the signature of the perform method, and it’s not worth it?
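For what it’s worth, the “inspect the signature” idea is mechanically possible in plain Ruby via Method#parameters, which reports each required keyword as [:keyreq, :name]. Below is a hypothetical sketch of what a fail-early check at enqueue time could look like; missing_required_keywords and this MyJob are illustrative names I made up, not anything ActiveJob provides.

```ruby
# Illustrative job class with two required keywords.
class MyJob
  def perform(param1:, param2:)
  end
end

# Returns the required keywords of job_class#perform that `args` does not supply.
# Method#parameters yields pairs like [:keyreq, :param1] for required keywords.
def missing_required_keywords(job_class, args)
  job_class.instance_method(:perform).parameters
           .select { |type, _name| type == :keyreq }
           .map { |_type, name| name }
           .reject { |name| args.key?(name) }
end

p missing_required_keywords(MyJob, { param1: "foo" })  # => [:param2]
```

An enqueue wrapper could raise if that list is non-empty. Whether this belongs in the framework is exactly the tradeoff I’m asking about: it only covers keyword arguments, and positional arity or splats would complicate the check.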
Or perhaps it is not how we’d prefer it to be, but with backwards compatibility in mind it’s been decided it’s not worth touching (yet)?
I didn’t manage to find relevant discussions on the internet, and I hope someone here remembers the deliberations about the tradeoffs.