Dealing with large chunks of data

Al Evans wrote:

In a site I'm working on, I have to generate statistics on demand.

This requires going through essentially all three tables of the database, with about 1,500 records in table A owning about 12,000 records in tables B and C. This is after the site's been up for only about three weeks, so I know it will get a lot worse.

There's no issue with memory (dedicated server), and the database operation is fast enough (0.2 seconds or so), but the number crunching is killing me: it presently takes about 16 seconds to iterate through all the data and produce the hash that eventually gets displayed.

I've gotten to the point where I'm thinking about rewriting the routine in Perl or even C. I'd much prefer to avoid that.

I would greatly appreciate suggestions from anybody who's confronted a similar situation.

Can you store precomputed stats in the database, updating them as each piece of data is added?
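
Something like this minimal sketch, assuming the app is Rails/ActiveRecord (the post doesn't say) and using hypothetical model and column names (StatSummary, b_count, amount):

# Hypothetical summary table: one row per A record, holding running totals.
class StatSummary < ActiveRecord::Base
  belongs_to :a
end

class B < ActiveRecord::Base
  belongs_to :a

  # Fold this record's contribution into the precomputed stats at write time,
  # so the on-demand report becomes a simple read instead of a full crunch.
  after_create :roll_into_stats

  private

  def roll_into_stats
    summary = StatSummary.find_or_create_by(a_id: a_id)
    summary.increment!(:b_count)               # one more B for this A
    summary.increment!(:amount_total, amount)  # 'amount' is a hypothetical numeric column
  end
end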

Or can you automatically calculate and save the stats once a day, and update these on demand using only the data added since?
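
A sketch of that daily-snapshot variant, under the same Rails assumption and again with hypothetical names:

# Hypothetical snapshot table with columns: b_count, amount_total, computed_at.
class DailyStatsSnapshot < ActiveRecord::Base
end

# Nightly job: compute the full totals once and stamp the time.
def save_daily_snapshot
  DailyStatsSnapshot.create!(
    b_count:      B.count,
    amount_total: B.sum(:amount),
    computed_at:  Time.now
  )
end

# On demand: start from the last snapshot and crunch only the rows added since.
def current_stats
  snapshot = DailyStatsSnapshot.order(:computed_at).last
  recent   = B.where("created_at > ?", snapshot.computed_at)
  {
    b_count:      snapshot.b_count + recent.count,
    amount_total: snapshot.amount_total + recent.sum(:amount)
  }
end

Either way, the 16 seconds of Ruby iteration shrinks to a lookup plus, at most, a small delta over the newest records.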