How do I run an action on a dedicated thread?

I run Rails in quite a complicated setup, but the gist of it is: in my application, an action can call an external service which in turn fetches data from another action via an HTTP call back to the application itself. (Basically a cyclic HTTP call.)

It is by design and there is no other obvious way around it.

To prevent a deadlock in production, I want to make sure that there is always a single thread available to run that particular action. Basically, I want to dedicate one thread to that action, and no other actions.

Does anyone know how I can accomplish this?

If I am understanding the situation correctly, it sounds like this might be best served by encapsulating the call in a job and then setting up a dedicated queue for that job in your background processes. If you need the results displayed when complete, you could set up a poller or a channel to listen for them. By shifting the work to the background you can prevent a deadlock.
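A minimal in-process sketch of that idea, using only Ruby's stdlib (all names here are illustrative, not from the original application): the request handler enqueues work for a single dedicated worker thread, and the caller can poll for the job's result.

```ruby
require "securerandom"

# A single dedicated worker thread consuming jobs from a queue.
# Results are stored so a caller can poll for completion.
class DedicatedWorker
  def initialize
    @queue   = Queue.new
    @results = {}
    @mutex   = Mutex.new
    @thread  = Thread.new { loop { run_one } }
  end

  # Enqueue a job; returns an id the caller can poll with #result.
  def enqueue(&block)
    id = SecureRandom.uuid
    @queue << [id, block]
    id
  end

  # Poll for a result; nil until the job has finished.
  def result(id)
    @mutex.synchronize { @results[id] }
  end

  private

  def run_one
    id, block = @queue.pop            # blocks until a job arrives
    value = block.call
    @mutex.synchronize { @results[id] = value }
  end
end

worker = DedicatedWorker.new
id = worker.enqueue { 2 + 2 }         # stand-in for the external call
sleep 0.05 until worker.result(id)    # crude poll loop for the demo
puts worker.result(id)                # => 4
```

A real Rails app would use ActiveJob/Sidekiq with a dedicated queue rather than a hand-rolled thread, but the shape is the same: the work and its consumer are isolated from the main request pool.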

Thank you for the reply, but the client is not a web browser, so there is no JavaScript and no polling possible, and the client is not in our control.
To be more specific, the call to the external service is made via a headless browser, not even a direct HTTP call, and you cannot push the data along with the request made by the headless browser.
As I said, there is no obvious way around it business-wise, but it is quite impractical to explain the whole situation, and there is an NDA involved.


It sounds like you might want to isolate this into a micro-service then. Honestly, and architecturally, it sounds like this process flow might need a redesign, especially as it relies on a resource external to the main application. I am guessing that this is legacy?

Is this application accessed from command line normally?

Architecture-wise, the application is way better than the average Rails app, IMO.
The dependency direction is one-way internally, with no cyclic dependencies (even among the ActiveRecord classes, whereas almost every other Rails app contains cyclic dependencies between models).
The network perspective may suggest otherwise, but the business logic truly does not rely on any external service; it is completely agnostic to user interfaces, data storage, and external network traffic.
The unit tests run blazingly fast because they are genuinely side-effect-free, including side effects involving the database. (No mocking, no database cleaner, no factory_bot/fabrication gems, no nonsense.)

The application is not a legacy one, and I can vouch that there is no problem with the software architecture.
It is the business that creates a weird flow of traffic, and the reasons are legitimate. (It is weird in the sense that it is quite unusual for a Rails app, but not in a bad sense.)

This is the enterprise world, and I am in the process of open-sourcing the tool I use to manage all the complexity.
If you are interested, you can peek into the architecture of this application via the gem “midnight-rails” (used as the main backbone of this application); the source code is now available, but there is currently no documentation.

The problem I have can be solved at the expense of DevOps operations; I wonder if there is another way around it using just Ruby code.

San, I hope you know I meant no offense. This is a strange issue to have and is normally indicative of an architectural design flaw, and without more details, which might violate your NDA, we can only go on the facts entered into evidence, to borrow a phrase from legal.

An application can be internally perfect, with a flawless deployment, but the external communication/network aspect is the gotcha.

Unfortunately, I am not familiar with any way built right into Rails to do this, outside of backgrounding and/or micro-servicing the process that invokes the external service. Generally, that is how one prevents the application from locking up when dealing with things that are outside one's control. The major reason is that servers like Puma/Unicorn/insert-yours-here want to stay agnostic themselves. If you have the ability to do it from a DevOps perspective, then that would likely be the best option if you cannot isolate it out some other way.

No worries, and thank you again.

I also asked the Puma maintainers.
They basically said they receive this inquiry frequently, and they don't support it as they don't use it themselves.

I am still open to any pointers and suggestions. Maybe the Puma folks just don't want to look deeply into it.
It is the Ruby community, after all.

My default plan now is to create multiple Puma instances (not just adding a worker).
I wonder if there are other ways that would not put a burden on the operations team.
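For what it's worth, the dedicated second instance could be as small as a separate Puma config file. This is only a sketch; the file name, port, and thread counts below are assumptions for illustration, not from the actual deployment:

```ruby
# config/puma_dedicated.rb — hypothetical config for the second instance.
# Started with: bundle exec puma -C config/puma_dedicated.rb
port    3001            # the main instance would listen elsewhere, e.g. 3000
threads 1, 1            # a small, always-available pool for the cyclic action
environment ENV.fetch("RAILS_ENV", "production")
```

A reverse proxy can then route only the cyclic-request path to port 3001, so those requests can never be starved by ordinary traffic.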

? There's already a concrete suggestion in the GH issue: "use JRuby"
(or presumably Rubinius). Have you tried that?

Hi Hassan

I haven’t, because that would not solve my problem.
He was describing a more usual kind of deadlock, caused by the global interpreter lock: Puma sends the cyclic request to the same multithreaded worker, but the worker is unable to process the second request because of the GIL. (I guess.)

With a single-threaded config, Puma would not send cyclic requests to the originating worker in the first place.

In contrast, I was describing a deadlock caused by the cyclic request itself. While less usual, this is still possible.

All workers can be occupied without a single one of them handling the cyclic requests, causing an indefinite wait. (Until one of them times out.)

I want to solve this systematically, not statistically. (I must admit that the latter would be a practical option too.)

Plus I am seeking less hassle, not more. JRuby/Rubinius are comparatively worse options (in my situation) than just spinning up multiple Puma instances and having a dedicated pool of workers handle the cyclic requests.

San Ji,

I think you have a situation where a user U sends a request to Rails app A, which in turn calls a service B. Then service B makes a request back to Rails app A, presumably for some additional information from the DB. You want to make sure the second request from B to A does not block.

If the above understanding is correct, then using Puma in multithreaded mode will work as long as you have enough workers. (We use this approach in production for a similar architecture.) The GIL is not an issue; I believe it is thread (worker) specific.

Alternatively, if you want to use Puma in single-threaded mode, you can run a second copy of the Rails app (say A2), either listening on a different port (e.g. 8001) or on a separate, dedicated URL, and then send the requests from B to A2 at the second port or the second hostname. We started with this approach, using a second port, before adopting multithreaded Puma.

Hope this helps.


Thank you, Phil.
The latter suggestion is actually what I planned to do, with nginx handling the path routing instead of having separate endpoints.

May I ask why you switched away from that approach?

IMHO, that approach is safer than mixing every request up in a single pool.

As I described, the locking is quite rare but possible; it probably won’t show up at all in practice, but still.
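The nginx variant of that routing could be sketched roughly as follows; the upstream ports and the /internal_callbacks/ path prefix are made-up placeholders, not the real paths:

```nginx
# Route the external service's callback path to a dedicated app instance,
# everything else to the main pool.
upstream main_app      { server 127.0.0.1:3000; }
upstream dedicated_app { server 127.0.0.1:3001; }

server {
  listen 80;

  # Requests made by the external service's headless browser.
  location /internal_callbacks/ {
    proxy_pass http://dedicated_app;
  }

  location / {
    proxy_pass http://main_app;
  }
}
```

With this, the cyclic requests always have their own instance to land on, so they cannot be starved even when the main pool is saturated.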

About 30 seconds of googling found that “IO operations such as database calls, interacting with the file system, or making external HTTP calls will not lock the GIL.” Unless you’re frequently using up your thread pool and forcing requests to wait for an available worker, it seems like you shouldn’t have a problem. If you are filling up your thread pool and forcing requests to wait, then you have a different problem you should solve rather than band-aid over. Unless your actual use case is bursts of simultaneous requests which fill your worker pool no matter how large it is, it seems like you might be over-thinking this.

Jim Crate

Jim, good point.
The assumption is not mine; I was guessing from a response by a Puma team member.

Anyway, I have already asked him to clarify why a lock would occur.

You can follow how it goes here:

I genuinely believe I am not overthinking this, because people encounter this phenomenon in the wild. (Albeit not in my apps.)

Depending on how vital the application's reliability is, people can ignore the locked request, wait for it to time out, and then continue running that code, or go solve something else that matters more to them.

This is not always an option.

San Ji,

We use the former because it is easier to configure and also works in dev and test environments without nginx.