When running a test that primarily involves loading up a few MySQL
tables with ActiveRecord objects, I was surprised to see the Ruby CPU
utilization at 93% and the MySQL CPU utilization at 7%. I would expect
this workload to be heavier on MySQL than that.
What is your script doing? Can you post it?
I created a smaller test that I could post that exhibits the same
characteristics:
class PerfTestController < ApplicationController
  def index
    t1 = Time.now
    3000.times do
      member = Member.new
      member.first_name = 'Fred'
      member.last_name = 'Flintstone'
      member.address1 = '123 High St.'
      member.city = 'Reykjavik'
      member.state = 'Michigan'
      member.email = 'fred@flintstone.com'
      member.save!
    end
    t2 = Time.now
    puts "Time elapsed = #{t2 - t1}"
  end
end
That took 35.7 seconds (84 inserts per second) on a dual core 2 GHz AMD
Opteron. It pegged Mongrel and MySQL didn't break a sweat.
I just ran another test with a short Ruby program inserting records
directly using the mysql gem, and it took only 1.6 seconds (1,875
inserts per second!). The CPU utilization was as it should be: MySQL
used about ten times as much CPU as Ruby. So it definitely appears that
Rails/ActiveRecord is about 22 times slower than a straight Ruby
program - wow!
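Those rates follow directly from the timings; a quick sanity check in
plain Ruby, using the figures quoted above:

```ruby
records  = 3000
ar_time  = 35.7 # seconds for the ActiveRecord controller test
raw_time = 1.6  # seconds for the direct mysql-gem program

ar_rate  = (records / ar_time).round   # => 84 inserts per second
raw_rate = (records / raw_time).round  # => 1875 inserts per second
slowdown = (ar_time / raw_time).round  # => roughly 22 times slower
```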
This result makes me feel much better, since the performance of Ruby
itself seems fine. The fact that Rails/ActiveRecord is so slow isn't
hurting me yet, and there is hope it can be sped up, since it doesn't
appear to be an inherent problem with Ruby.
Here's the schema for Member:
create table members (
  id int not null auto_increment,
  created_at datetime not null,
  updated_at datetime not null,
  first_name varchar(30) null,
  last_name varchar(30) null,
  address1 varchar(50) null,
  address2 varchar(50) null,
  city varchar(30) null,
  state varchar(5) null,
  email varchar(100) null,
  home_phone varchar(25) null,
  primary key(id)
) engine=InnoDB;
Hi Brian,
I wrapped this up in a simple script that anyone with MySQL or SQLite
and the AR gem can run. It benchmarks AR create vs using the db
connection directly. See attached.
Excerpted results on a new MacBook Pro:
                   user     system      total        real
raw quoted     0.460000   0.000000   0.460000 (  0.480184)
create         2.760000   0.080000   2.840000 (  3.225227)
(Nearly 7 times slower.) I haven't tried profiling the methods yet.
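That report format comes from Ruby's standard Benchmark library. A
self-contained sketch that produces the same columns, timing throwaway
stand-in work rather than actual database inserts (the lambdas here are
placeholders, not the attached script):

```ruby
require "benchmark"

# Stand-ins for the two code paths being compared; the real script
# times AR create vs. quoted SQL through the raw connection.
raw_quoted = ->(i) { "INSERT INTO members (first_name) VALUES ('Fred#{i}')" }
ar_create  = ->(i) { { "first_name" => "Fred#{i}" }.each_pair { |k, v| [k, v] } }

Benchmark.bm(12) do |x|
  x.report("raw quoted") { 50_000.times { |i| raw_quoted.call(i) } }
  x.report("create")     { 50_000.times { |i| ar_create.call(i) } }
end
```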
In my experience with typical Rails apps, you'll hit a wall with ERB
template rendering much sooner than with Active Record creation. This
is an interesting pursuit nonetheless -- I'm interested to see what
you all come up with.
Best regards,
jeremy
Hi, are these results in production or development?
Talking about template rendering, I'm just wondering if anyone has thought of pre-processing templates for a production environment. For example, it's convenient to do things like '<% stylesheet_tag %>' when linking in files, but each time Rails hits one of those, it needs to render it. Would it make sense to have a smart pre-processor that goes through the templates to find what is constant (like links to the same stylesheet, JavaScript files, etc., and even things like "form_start_tag"/"form_end_tag") and pre-render it, so that the number of times you need to create something is reduced?
Has anyone benchmarked the time for rendering a page with many or few such items?
It would be really interesting to have a tool that could pre-render such things for production..
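The substitution idea can be sketched with plain ERB from the standard
library. `stylesheet_tag` here is a hypothetical helper standing in for
whatever constant-output helpers a real app uses; this is just a sketch
of the idea, not a real Rails pre-processor:

```ruby
require "erb"

# Hypothetical helper whose output is identical on every request.
def stylesheet_tag
  '<link rel="stylesheet" href="/stylesheets/main.css" />'
end

template = "<html><head><%= stylesheet_tag %></head></html>"

# Normal path: the helper runs on every render.
per_request_html = ERB.new(template).result(binding)

# "Pre-processed" path: substitute the constant helper output once at
# deploy time, leaving nothing to execute for that part of the template.
prerendered = template.sub("<%= stylesheet_tag %>", stylesheet_tag)

per_request_html == prerendered # both produce the same HTML
```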
Jeremy Kemper wrote:
> Excerpted results on a new MacBook Pro:
>                    user     system      total        real
> raw quoted     0.460000   0.000000   0.460000 (  0.480184)
> create         2.760000   0.080000   2.840000 (  3.225227)
>
> Hi, are these results in production or development?
There is no difference between the two in this case, so the script
doesn't set RAILS_ENV at all.
> Talking about template rendering, I'm just wondering if anyone has
> thought of pre-processing templates for a production environment. For
> example, it's convenient to do things like '<% stylesheet_tag %>' when
> linking in files, but each time Rails hits one of those, it needs to
> render it. Would it make sense to have a smart pre-processor that goes
> through the templates to find what is constant (like links to the same
> stylesheet, JavaScript files, etc., and even things like
> "form_start_tag"/"form_end_tag") and pre-render it, so that the number
> of times you need to create something is reduced?
> Are there, or could there be, performance tests like this added to the
> Rails test suite? It would be great to be able to track performance
> like this over Rails releases.
No, sorry: it doesn't load a Rails environment at all, just vanilla
Active Record.
The Rails environment has no bearing on Active Record in isolation,
beyond choosing the default database connection:
$ grep -r RAILS_ENV lib/ |grep -v svn|wc -l
2
Your Rails app reloads application classes during development so some
AR caches, like the per-class table metadata, are wiped as a result.
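The table-metadata cache in question is essentially per-class
memoization. A sketch of the idea (not Active Record's actual code;
AR's real cache sits behind `columns` and is wiped when development
mode reloads classes):

```ruby
# Sketch of a per-class metadata cache like the one Active Record keeps;
# not AR's actual implementation.
class FakeModel
  def self.columns
    # Memoize: the expensive schema query runs at most once per class.
    @columns ||= expensive_schema_lookup
  end

  def self.reset_column_information
    # Effectively what development-mode class reloading does: the cache
    # is wiped and the next access pays the full cost again.
    @columns = nil
  end

  def self.lookups
    @lookups || 0
  end

  def self.expensive_schema_lookup
    @lookups = lookups + 1 # count how often the "schema query" runs
    %w[id created_at first_name last_name]
  end
end

FakeModel.columns
FakeModel.columns
FakeModel.lookups # one lookup despite two calls
```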
I don't know about writing records to the DB, but for reading, the AR
code is inefficient: it involves too many hash accesses. In pre-1.0
times, I wrote a patch that replaced hash operations with array
operations as much as possible[1]. The performance improvement was
noticeable; see the ticket for details. But all that was ages ago, and
I haven't updated the patch for the considerable changes that came
with Rails 1.0.
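The hash-vs-array difference that patch exploited is easy to observe in
isolation with the standard Benchmark library (a micro-benchmark sketch,
not the patch itself):

```ruby
require "benchmark"

# One result row, represented both ways AR could hold it internally.
row_hash  = { "id" => 1, "first_name" => "Fred", "last_name" => "Flintstone" }
row_array = [1, "Fred", "Flintstone"]

n = 1_000_000
Benchmark.bm(6) do |x|
  x.report("hash")  { n.times { row_hash["first_name"] } } # string hashing + lookup
  x.report("array") { n.times { row_array[1] } }           # plain indexed access
end
```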