I’m finally upgrading one of my apps to Rails 2.3.5 from 2.1. One thing I’ve noticed is that I’m getting terrible performance with MySQL under Rails 2.3. The 2.3 version runs 10x slower than the 2.1 version. Same database, almost the same code, different version of Rails.
Looking at the output from rails-footnotes, which gives me the time for each query, I can see that after about 20 queries in 2.3 the time per query jumps to 100ms instead of 1ms or 2ms. After the first 20, every query takes longer until the next request, when it starts all over again. Below are the timings running under 2.3 (I’m only showing the section where the transition occurs).
SELECT (explain | trace)
SQL (0.001050s)
SELECT count(*) AS count_all FROM event_registrations WHERE (process_state = 'accepted' AND archived_on IS NULL) AND (event_registrations.event_workshop_id = 168)
SELECT (explain | trace)
SQL (0.000953s)
SELECT count(*) AS count_all FROM event_registrations WHERE (process_state = 'pending' AND archived_on IS NULL) AND (event_registrations.event_workshop_id = 168)
SELECT (explain | trace)
SQL (0.001084s)
SELECT count(*) AS count_all FROM event_registrations WHERE (process_state = 'waitlisted' AND archived_on IS NULL) AND (event_registrations.event_workshop_id = 168)
SELECT (explain | trace)
SQL (0.071566s)
SELECT count(*) AS count_all FROM event_registrations WHERE (process_state = 'rejected' AND archived_on IS NULL) AND (event_registrations.event_workshop_id = 168)
SELECT (explain | trace)
SQL (0.069581s)
SELECT count(*) AS count_all FROM event_registrations WHERE (process_state = 'canceled' AND archived_on IS NULL) AND (event_registrations.event_workshop_id = 168)
SELECT (explain | trace)
SQL (0.070729s)
SELECT count(*) AS count_all FROM event_registrations WHERE (process_state = 'noshow' AND archived_on IS NULL) AND (event_registrations.event_workshop_id = 168)
Notice that the six queries are essentially identical, yet after the third one the time increases significantly. By the way, for this set of data there are actually no records matching any of the conditions, so the slowdown isn’t caused by variation in the returned data set.
I’m totally stumped. Is there some new setting in ActiveRecord that could cause this? I can’t realistically debug my app: it takes between 20 and 80 seconds to load a page, depending on the data. Under Rails 2.1 it’s never more than 10 seconds.
One thing to check: I ran into a similar issue caused by a database-backed session store that wasn't the ActiveRecord store. The session store wasn't returning connections to the connection pool, so after a few normal page loads all of the connections were marked as busy, and subsequent actions had to wait for timeouts before they could get a database connection.
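If that's what's happening, making sure connections are handed back at the end of every request should fix it. Rails 2.3's ActiveRecord::Base.clear_active_connections! returns the current thread's connections to the pool; a minimal sketch (the filter name is mine):

  class ApplicationController < ActionController::Base
    # Hand this request's connection back to the pool, even if some
    # non-ActiveRecord session store checked one out along the way.
    after_filter :release_db_connections

    private

    def release_db_connections
      ActiveRecord::Base.clear_active_connections!
    end
  end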
Thanks for the suggestion. I tried it out, but it didn’t help: I pushed the pool size up to 150 and wait_timeout down to 1, and saw no real change.
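For reference, both settings live in config/database.yml; this is roughly what I tried (the database name is illustrative):

  production:
    adapter: mysql
    database: myapp_production
    pool: 150
    wait_timeout: 1   # pool checkout timeout in Rails 2.3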
I’m really stuck with this. And the performance is so bad that it’s extremely painful to do any work on the app. Let me know if you have any other ideas.
Well, I’ve found the cause of the problem, though I don’t know how to solve it yet. I suspect it’s a Rails bug.
I found this statement on the mysqlplus_adapter page (http://github.com/methodmissing/mysqlplus_adapter): "the mysql ruby gem forces GC every 20 queries, that's a guaranteed GC cycle every 5th request for a request with a 4 query overhead."
Sure enough, if I count the number of queries that run quickly, there are 20 of them. All subsequent queries take 10x as long.
I looked at the mysql gem’s source, changed GC_STORE_RESULT_LIMIT from 20 to 200, and recompiled it. Running my test again, the first 200 queries now run blazingly fast and everything after that is 10x as slow. So whenever store_result_count > GC_STORE_RESULT_LIMIT, the garbage collector is run. Since GC seems to run on every query once the limit is reached, it looks like store_result_count is never being decremented, which means the result objects are never being freed.
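To make the mechanism concrete, here is a toy Ruby model of the accounting as I understand it (the real logic lives in the gem's C code; the Ruby here is mine):

  GC_STORE_RESULT_LIMIT = 20

  class FakeResult
    @@store_result_count = 0

    # Called once per query that stores a result set.
    def self.store_result
      if @@store_result_count > GC_STORE_RESULT_LIMIT
        GC.start   # forced collection: the ~100ms pauses in the log above
      else
        @@store_result_count += 1
      end
      new
    end

    # Meant to run when a result is freed. If results are never freed,
    # the counter never drops and every query past the limit pays for a GC run.
    def self.release
      @@store_result_count -= 1
    end
  end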
The reason I think this is a Rails bug is that the problem doesn’t occur when I run the code on Rails 2.1. Under Rails 2.3, with the same mysql gem and MySQL server, results are not getting freed, causing constant garbage collection.
Sounds promising (although, looking at the source, store_result_count should be decremented when the Ruby garbage collector eventually collects the result object). Can you reduce your example to a little script that just uses ActiveRecord (i.e. outside of a Rails app)?
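Something along these lines would do it (connection settings are placeholders):

  require 'rubygems'
  require 'active_record'

  ActiveRecord::Base.establish_connection(
    :adapter  => 'mysql',
    :host     => 'localhost',
    :username => 'root',
    :password => '',
    :database => 'myapp_development'
  )

  class EventRegistration < ActiveRecord::Base; end

  # Time a batch of identical counts; the per-query time should jump
  # after the 20th if the GC theory holds.
  50.times do |i|
    start = Time.now
    EventRegistration.count(:conditions => ["process_state = ? AND archived_on IS NULL", 'accepted'])
    puts "query #{i}: #{Time.now - start}s"
  end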
Well, I’ve come closer to tracking down the problem. Running with only ActiveRecord, the problem does not occur. So I created a fresh Rails 2.3 project and started adding the plugins I use, and it turns out rails-footnotes was causing the slowdown.
Commenting out the line in rails-footnotes that calls execute_without_analyzer makes everything work fine. However, I can’t find where execute_without_analyzer is actually defined. It seems like it might be behaving differently between Rails 2.1 and 2.3.
That looks like someone has been using alias_method_chain: there is no method literally called execute_without_analyzer in the source. alias_method_chain renames the original execute method to execute_without_analyzer and makes execute point at the plugin's execute_with_analyzer.
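A minimal sketch of the pattern as rails-footnotes presumably uses it (the method body here is illustrative):

  module ActiveRecord
    module ConnectionAdapters
      class MysqlAdapter
        def execute_with_analyzer(sql, name = nil)
          # ... the plugin's timing / explain bookkeeping goes here ...
          execute_without_analyzer(sql, name)
        end

        # After this call:
        #   execute                  -> execute_with_analyzer
        #   execute_without_analyzer -> the original execute
        alias_method_chain :execute, :analyzer
      end
    end
  end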