pagination caching

Hi

I have the following method:

def search
  products = Product.find(:all,
    :conditions => ["category = ?", params[:category]],
    :limit => 1000, :order => "float_rating DESC")
  @shown_items = my_pagination(products,
    {:per_page => 20, :page => params[:page]})
end

There are 6 values for category. The :order clause (which I really
need) slows the query down.

So I'd like to cache the action. The problem is that it has two
parameters, :category and :page, but I only know about caching with
one parameter (which will be named :id). How can I do that without
writing 6 different methods in the controller?
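One way is to build a single cache key out of both parameters instead of keying on :id. A minimal pure-Ruby sketch of the idea (the Hash stands in for whatever cache store you use; all names here are illustrative):

```ruby
# Sketch: cache keyed on (category, page) rather than a single :id.
# The Hash stands in for whatever cache store is actually in use.
CACHE = {}

def expensive_search(category, page)
  # placeholder for the real Product.find + pagination
  "results for category #{category}, page #{page}"
end

def cached_search(category, page)
  key = "search/#{category}/page-#{page}" # one key built from two params
  CACHE[key] ||= expensive_search(category, page)
end
```

The same composite-key idea applies whether the store is a fragment cache, memcached, or anything else keyed by string, so one action can serve all category/page combinations.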

Thanks

Luma


Don't know about the caching, but perhaps the action itself can be
made faster.
What indexes do you have? If you have a multicolumn index on
(category, float_rating), that should help. It's also inherently a
little wasteful to fetch 1000 items but only display 20.
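For reference, such an index could be added with a Rails migration along these lines (untested sketch; the index name is just illustrative):

```ruby
# Composite index so MySQL can filter on category and read rows
# already ordered by float_rating (index name is illustrative).
class AddCategoryRatingIndex < ActiveRecord::Migration
  def self.up
    add_index :products, [:category, :float_rating],
              :name => "index_products_on_category_and_float_rating"
  end

  def self.down
    remove_index :products,
                 :name => "index_products_on_category_and_float_rating"
  end
end
```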

Fred

There is an index on category and float_rating. I'm trying to improve
the action itself. My approach is to count the results first and then
use :offset and :limit in the find query. It's only a bit faster. Is
there a better way, or can I improve the code?

def search
  paginate_with_offset(Product, 20, params[:page], "float_rating DESC",
    1000, {:conditions => ["category = ?", params[:category]]})
end

def paginate_with_offset(search_class, per_page, page, order_by,
                         count_limit, options = {})
  # count_limit can be nil
  @item_count = [search_class.count(:all, options), count_limit].compact.min
  @page_count = (@item_count.to_f / per_page).ceil
  if @item_count > 0
    @page = begin Integer(page) rescue 1 end
    @page = 1 unless (1..@page_count).include?(@page)
    @offset = per_page * (@page - 1)
    @last = [@offset + per_page, @item_count].min
    limit = @last - @offset # same as (@last - 1) % per_page + 1
    options = options.merge(:offset => @offset, :limit => limit)
    options = options.merge(:order => order_by) if order_by
    @items = search_class.find(:all, options)
  else
    @items = []
  end
end
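The offset/limit arithmetic above can be sanity-checked in isolation. A pure-Ruby mirror of it (no ActiveRecord; the method name is illustrative):

```ruby
# Pure-Ruby mirror of the windowing arithmetic in paginate_with_offset,
# so it can be checked without a database.
def window(page, per_page, item_count)
  page_count = (item_count.to_f / per_page).ceil
  page = begin Integer(page) rescue 1 end
  page = 1 unless (1..page_count).include?(page)
  offset = per_page * (page - 1)
  last = [offset + per_page, item_count].min
  # (last - 1) % per_page + 1 equals last - offset here,
  # because offset is always a multiple of per_page
  { :page => page, :offset => offset, :limit => last - offset }
end
```

For example, with 45 items at 20 per page, page 3 gets offset 40 and limit 5; an out-of-range or non-numeric page falls back to page 1.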

Luma


Sorry to insist, but is there an index on both category and
float_rating (and in that order)? That's completely different from
having one index on category and one index on float_rating, and it
should make a big difference.

Fred

hi Fred,

The index really does include (category, float_rating), in that order.
Admittedly, I don't really know how indexes work ;-)

Martin

That's surprising. It's only sorting 1000 items at most. What is the
performance difference with and without the order?

If I had this problem with Microsoft SQL Server, I'd find out what
query the app was sending to the database, then look at the query's
execution plan for gross inefficiencies. For all we know, your index
isn't even being used. I don't know if you can look at execution plans
in typical Rails databases like MySQL.

///ark

Hi

with order_by:

...
  Product Columns (0.010000)   SHOW FIELDS FROM products
  SQL (6.549000)   SELECT count(*) AS count_all FROM products WHERE (category = 0)
  Product Load (783.137000)   SELECT * FROM products WHERE (category = 0) ORDER BY float_rating DESC LIMIT 0, 20
...
Completed in 802.56400 (0 reqs/sec) | Rendering: 11.11700 (1%) | DB:
790.03600 (98%) | 200 OK

without order_by:

...
  Product Columns (0.010000)   SHOW FIELDS FROM products
  SQL (0.261000)   SELECT count(*) AS count_all FROM products WHERE (category = 0)
  Product Load (0.130000)   SELECT * FROM products WHERE (category = 0) LIMIT 0, 20
...
Completed in 1.77300 (0 reqs/sec) | Rendering: 0.54100 (30%) | DB:
0.52100 (29%) | 200 OK

I'm using Mysql 5.0, and the following gives some information about
the execution:

EXPLAIN EXTENDED SELECT * FROM products WHERE (category = 0) ORDER BY
float_rating DESC LIMIT 0, 20

id  select_type  table     type   possible_keys  key     key_len  ref   rows    Extra
1   SIMPLE       products  range  index1,index2  index2  17       NULL  155568  Using where; Using filesort

EXPLAIN EXTENDED SELECT * FROM products WHERE (category = 0) LIMIT 0,
20

id  select_type  table     type   possible_keys  key     key_len  ref   rows    Extra
1   SIMPLE       products  range  index1,index2  index2  17       NULL  155568  Using where

I guess that index2 is used in both cases.

index2 includes (category, float_rating) in this order.
index1 only includes category. I use it in another place.

Something that might be useful information: about half of the 3
million entries in the table have category = 0. But does this matter
if I'm selecting only the first 18?

Martin

Something that might be useful information: about half of the 3
million entries in the table have category = 0. But does this matter
if I'm selecting only the first 18?

I mean "only the first 20".

OK, the problem is 'Using filesort'. From a very brief Google search,
this can be the kiss of death. Unfortunately, I don't know how to
solve this problem, but if someone has solved it, you should be able
to find the solution.

///ark