I was studying the difference between Sequel’s and ActiveRecord’s connection pools, and I was curious: why doesn’t ActiveRecord automatically release a connection back into the pool after it has been used for a query? Currently the connection stays checked out and assigned to the original thread, so other threads cannot use it, even after the original thread has died.
To illustrate, the following script executes 10 queries in parallel across 10 threads, but because the pool size is 5, half of the threads fail because they cannot acquire a connection:
```ruby
require "active_record"

ActiveRecord::Base.establish_connection(
  adapter: "sqlite3",
  database: "database.sqlite3",
)

threads = []

10.times do
  threads << Thread.new do
    ActiveRecord::Base.connection.execute "SELECT 1"
  end
end

threads.each(&:join)
```
```
ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool within 5.000 seconds (waited 5.003 seconds); all pooled connections were in use
```
Compare that to Sequel, where everything works fine, since it releases connections back into the pool as soon as the query finishes:
```ruby
require "sequel"

DB = Sequel.sqlite("database.sqlite3", max_connections: 5)

threads = []

10.times do
  threads << Thread.new do
    DB.run "SELECT 1"
  end
end

threads.each(&:join)
```
ActiveRecord works around this limitation by requiring users to call `ActiveRecord::Base.clear_active_connections!` at the appropriate place (the end of the request in a web app). In Rails this is handled automatically for you (though in a surprising place – `ActiveRecord::QueryCache.complete`), but you need to remember to do it yourself when using ActiveRecord in other web frameworks. In Rails this started out as a dedicated Rack middleware called `ConnectionManagement`, but has since been moved into the `ActionDispatch::Executor` middleware.
My question is whether the current connection pool behaviour is intentional (maybe there is some advantage I’m not seeing?), or whether it’s just that no one has volunteered to fix it yet.