Active Record connection pool sometimes requires manual massaging

A connection pool should take care of managing connections on its own, so that the user doesn’t have to worry about it. However, Active Record’s connection pool is built in a way that makes it easy to exhaust the available connections when using threads.

For example, if we try to run the following code, it will run out of available connections after a timeout of 5s, even though the connections that are checked out are mostly sitting idle:

    require "active_record"

    ActiveRecord::Base.establish_connection(
      adapter:          "sqlite3",
      database:         ":memory:",
      pool:             5, # default
      checkout_timeout: 5, # default
    )

    # spawn more threads than there are connections in the pool
    threads = 10.times.map do
      Thread.new do
        ActiveRecord::Base.connection.execute "SELECT 1"
      end
    end

    threads.each(&:join)

ActiveRecord::ConnectionTimeoutError: could not obtain a connection
from the pool within 5.000 seconds (waited 5.001 seconds);
all pooled connections were in use

This is because Active Record assigns connections to the threads that checked them out, and doesn’t allow other threads to use them until they’ve been idle for long enough. I first started a discussion, then opened an issue, but I haven’t received a proper response (and the issue has been closed as “stale”, even though it provides a minimal self-contained example).

This behaviour sometimes requires people to manually manage connections with methods like #clear_active_connections! or #with_connection, especially in background job systems like Sidekiq.

Contrast this behaviour with Sequel, which checks used connections back into the pool, making the equivalent script run without errors:

    require "sequel"

    DB = Sequel.sqlite(max_connections: 5, pool_timeout: 5)

    threads = 10.times.map do
      Thread.new do
        DB.run "SELECT 1"
      end
    end

    threads.each(&:join)


I found a relevant discussion from 2015, where Aaron Patterson provided the following argument:

Constantly checking out a connection then checking it back in seems like it would cause a performance bottleneck. The assumption is that lock contention will become the bottleneck if we have to constantly check in / check out connections. The other side of that is it will cause more complexity throughout AR as we’ll have to change every place that touches the connection.

However, I don’t see Sequel having performance problems, and its connection pool implementation is much simpler than Active Record’s (I could actually read it and understand it) :man_shrugging:
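To build intuition for why the check-in/check-out strategy works out in practice, here is a toy pool in plain Ruby (all names here are hypothetical — this is not Sequel’s actual implementation): every operation pays for a checkout and a check-in, yet ten threads can share five connections indefinitely without any of them starving:

```ruby
# Toy stand-in for a pool that checks connections back in after every
# operation, the way Sequel's pool does.
class ToyPool
  def initialize(size)
    @connections = Queue.new # thread-safe FIFO, blocks on #pop when empty
    size.times { |i| @connections << "conn-#{i}" } # stand-ins for real connections
  end

  # Check a connection out for one block, then check it back in.
  def with_connection
    conn = @connections.pop # blocks while the pool is empty
    yield conn
  ensure
    @connections << conn if conn
  end

  def available
    @connections.size
  end
end

pool = ToyPool.new(5)

# 10 threads share 5 connections, because each thread gives its
# connection back as soon as the block finishes.
threads = 10.times.map do
  Thread.new do
    100.times { pool.with_connection { |conn| conn } } # e.g. run "SELECT 1"
  end
end

threads.each(&:join)
puts pool.available # all 5 connections are back in the pool
```

The checkout/check-in here is a single synchronized queue operation each way, which is exactly the lock contention the quote worries about — but it also means no thread can hold a connection it isn’t using.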

I’m not expecting this to be fixed, because people already rely on the fact that two consecutive connection.execute calls will use the same connection object, which allows them to do things like cursor management and COPY INTO/COPY FROM streaming (by the way, Sequel solves this by offering a DB.synchronize { |c| ... } API, which checks out a connection for the duration of the block). I was just hoping for at least an acknowledgement that this behaviour can produce WTFs, without any clear advantages (besides backwards compatibility).