= custom_benchmarks plugin

The custom_benchmarks plugin allows you to add your own information to the benchmark line logged at the end of each request. For example:

  Completed in 0.40306 (2 reqs/sec) | Rendering: 0.05003 (12%) | DB: 0.04571 (11%) | Search: 0.16429 (40%) | PID: 22426 | 200 OK [http://www.zvents.com/welcome/index]

Typically, the log line includes the latency associated with executing specific parts of a request. In the example above, we have added a measurement of search latency. But you can use Custom Benchmarks to add any information to the log line. The example above also shows the ID of the process (PID) that served this request. The PID is useful when parsing information from logs that contain data from multiple processes.

== Installation

1. Install the plugin or the gem

     $ script/plugin install svn://rubyforge.org/var/svn/zventstools/projects/custom_benchmarks

   - OR -

     # gem install custom_benchmarks

== Simple Example: Logging the Process ID

To add the PID as a custom benchmark field, simply add a custom_benchmark line like the following to your ApplicationController:

  class ApplicationController < ActionController::Base
    custom_benchmark {|runtime| " | PID: #{$$}" }
    ...
  end

Declare your custom_benchmark with a block that expects an input parameter called runtime. runtime, which isn't used in this example, contains the overall latency of the entire request; the complex example below uses it to calculate percentage latency. custom_benchmark expects your block to return a string, which will be inserted in the log line immediately before the status (e.g., 200 OK [http://www.zvents.com/]).
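
For instance, runtime can be used to flag slow requests directly in the log line. A minimal sketch, assuming the block semantics described above (the 1.0-second threshold and the SLOW label are arbitrary choices for illustration, not part of the plugin):

  # Append " | SLOW" to the log line for requests that take longer
  # than one second; return an empty string otherwise.
  custom_benchmark {|runtime| runtime > 1.0 ? " | SLOW" : "" }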

== Complex Example: Logging Arbitrary Latency

Let's say that your application includes a search function that is powered by Lucene. Like SQL calls issued to a database, calls to Lucene can take a while, so you want to log your search latency.

The first step is to set up a mechanism that allows you to record your search latency for each request. You can do that with something like this:

  class MySearch
    @@latency = 0.0
    cattr_accessor :latency

    def run_search
      @@latency = Benchmark::measure {
        # execute the call to Lucene here
      }.real
    end

    def self.get_timing_summary(runtime)
      summary = " | Search: #{sprintf("%.5f",@@latency)} (#{sprintf("%d", (@@latency * 100) / runtime)}%)"
      @@latency = 0.0
      summary
    end
  end

The run_search method uses Benchmark::measure to record the latency of the search. The get_timing_summary class method, which will be invoked by a custom_benchmark, returns a formatted string summarizing the search latency in absolute and percentage terms. It also resets the value of @@latency to avoid affecting subsequent queries.
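
For completeness, here is a hypothetical controller action that triggers the measurement; SearchController and its index action are assumed names for illustration, not part of the plugin:

  class SearchController < ApplicationController
    def index
      # run_search executes the Lucene query and stores its latency
      # in MySearch's @@latency class variable for this request.
      MySearch.new.run_search
      # ... render the results as usual ...
    end
  end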

Finally, we just need to add a custom_benchmark statement to the ApplicationController:

  custom_benchmark {|runtime| MySearch.get_timing_summary(runtime) }
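
Since the example log line at the top of this README shows both a PID and a Search field, multiple custom_benchmark declarations can presumably be combined in one controller. A minimal sketch, assuming both blocks behave as described above:

  class ApplicationController < ActionController::Base
    # Each block contributes its own field to the benchmark line.
    custom_benchmark {|runtime| " | PID: #{$$}" }
    custom_benchmark {|runtime| MySearch.get_timing_summary(runtime) }
  end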

== Bugs, Code and Contributing

There's a RubyForge project set up at:

http://rubyforge.org/projects/zventstools/

Anonymous SVN access:

  $ svn checkout svn://rubyforge.org/var/svn/zventstools

Author: Tyler Kovacs (tyler dot kovacs at gmail dot com)

== What's Next?

* Default integrations: It would be great if the plugin shipped with a number of default integrations for logging things like acts_as_ferret and acts_as_solr latency, memcached latency and call count, and ActiveRecord SQL call count.
* Log File Analyzer: Once we have all this great data written to the log, how do we consume it? It makes sense to write an analyzer similar to production_log_analyzer, but one that can adapt to arbitrary data written to the log (provided it fits a certain format). Maybe even an integration with RRDTool.