ActiveRecord timestamp conversions fail for some cases

Hi all,

The microsecond handling in ActiveRecord::ConnectionAdapters::Column#fast_string_to_time and ActiveRecord::ConnectionAdapters::Column#microseconds fails for some values.

In slightly more than 1% of all possible 6-digit cases, writing a timestamp to a database column and then reading it back in results in a different value being returned to the program.

So, for instance, saving the timestamp

  2010-01-12 12:34:56.125014

and then loading it again from the database yields

  2010-01-12 12:34:56.125013

The problem occurs when the value read is converted from string form to a Ruby timestamp, so it is largely database independent (the exception being drivers that override the methods, or databases that don't support timestamps at all).

The underlying problem is the use of to_i to convert from floats to ints inside the affected methods. As you know, to_i simply truncates the result and in some cases this causes rounding errors introduced by inherent inaccuracies in the multiplication operations and decimal representation to bubble up and affect the least significant digit.
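The effect is easy to reproduce outside ActiveRecord with plain Float arithmetic, using the fractional part from the example above:

```ruby
# ".125014" has no exact binary representation; the nearest double is
# slightly *below* 0.125014, so multiplying by 1_000_000 produces a
# value just under 125014, which to_i then truncates.
fraction = ".125014".to_f
product  = fraction * 1_000_000   # just under 125014

puts product.to_i    # => 125013  (truncated, wrong)
puts product.round   # => 125014  (correct)
```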

Here's a simple test that illustrates the problem:

  converted = ActiveRecord::ConnectionAdapters::Column.send('fast_string_to_time', "2010-01-12 12:34:56.125014")
  assert_equal 125014, converted.usec

This test case (and a similar one for #microseconds) fails on plain vanilla Rails 2.3.5.

I guess the best solution would be to change the ISO_DATETIME regex used to extract the microsecond part from timestamps so that it does not capture the decimal point at all, and then avoid the to_f call and the subsequent floating-point multiplication inside the failing methods entirely. However, these regexes are defined as constants on ActiveRecord::ConnectionAdapters::Column::Format and are therefore publicly available, so the impact of changing them is difficult to ascertain.

A simpler solution is to use round() instead of to_i to convert from the intermediate floating point result to int. This works (I have verified that the precision is sufficient for all possible 6-digit cases) but is about 15% slower than the current method. A small price to pay for correctness, in my opinion.
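For illustration, here is a standalone sketch of the round() approach. The regex and method below are simplified stand-ins for the real Column internals, not the actual patch:

```ruby
# Simplified stand-in for Column::Format::ISO_DATETIME (assumption: the
# real constant is similar, but it lives inside ActiveRecord).
ISO_DATETIME = /\A(\d{4})-(\d\d)-(\d\d) (\d\d):(\d\d):(\d\d)(\.\d+)?\z/

def fast_string_to_time(string)
  return unless string =~ ISO_DATETIME
  # round() instead of to_i: the float product can land just below the
  # true integer value, and truncation would lose the last digit.
  microsec = ($7.to_f * 1_000_000).round
  Time.local($1.to_i, $2.to_i, $3.to_i, $4.to_i, $5.to_i, $6.to_i, microsec)
end

t = fast_string_to_time("2010-01-12 12:34:56.125014")
puts t.usec   # => 125014
```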

I have attached a tiny patch (against 2.3.5) that switches the code to using round() and a test case that verifies that the method works for a few problematic cases that fail without the patch.

I have also created a Lighthouse ticket #3693:

Could some of you please take a look and see if the patch is acceptable and maybe carry it into the code base?


fix_microsecond_conversion.diff (2.93 KB)

I gave up on that kind of resolution when I found that MySQL doesn't
support it! I'll try to test the patch though.


Thanks for the response -- we use PostgreSQL which does store all six microsecond digits.


I plan on taking a look at this too. I think I had to solve it in the SQL Server adapter in my own way since it only stores milliseconds. If you care to look at our code, here are a few key sections in the tests.

The problem only manifests itself if the sub-second part is stored with 4 or more significant digits. So if you only store and retrieve milliseconds (and that's the way I read your code), you should be home safe.

Awesome, thanks for the reply.

- Ken

I spent an entirely inappropriate amount of time digging into this.

Here are some observations:

The #microseconds method LOOKS clumsy, but it's actually quite
optimized. It turns out that the sec_fraction component is a Rational
less than one. So converting as quickly as possible to something
other than Rational (Float, in this case) is actually the right thing
to do. I tried dozens of alternatives and benchmarked them all, and
could not beat the current implementation, even after changing the
#to_i to #round. Arguably, the modulo division step is superfluous,
but only if you always pass in rationals less than one.
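The Rational-to-Float hop can be seen outside ActiveRecord as well. This is a general demonstration, not the Rails method itself; I'm assuming the parsed :sec_fraction is a Rational fraction of a second, as Date._parse returns it:

```ruby
require 'date'

frac = Date._parse("2010-01-12 12:34:56.125014")[:sec_fraction]
# frac is a Rational; Rational arithmetic would be exact but slow.
# Converting to Float first is fast, at the cost of reintroducing the
# truncation problem that to_i exposes:
puts (frac.to_f * 1_000_000).to_i    # => 125013  (truncated)
puts (frac.to_f * 1_000_000).round   # => 125014
```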

On the other hand, the regex parsing of time strings in
#fast_string_to_time is both clunky and incorrect for some values (as
noted and fixed by Jacob). I've taken the first approach hinted at by
Jacob and used a more direct regex that returns only integers. The
result is a method that is faster (roughly 5%), accurate (no more
truncation issues), and arguably cleaner (no more double conversion).
It should be completely backwards compatible as well, since I've
defined a new constant.
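A rough sketch of what an integer-only regex could look like (hypothetical constant and method names; the actual patch is attached to the ticket):

```ruby
# Capture the fractional digits WITHOUT the decimal point, so no Float
# ever enters the picture. Right-padding with zeros makes ".125" mean
# 125000 microseconds.
FAST_DATETIME = /\A(\d{4})-(\d\d)-(\d\d) (\d\d):(\d\d):(\d\d)(?:\.(\d{1,6}))?/

def string_to_time(string)
  return unless string =~ FAST_DATETIME
  microsec = $7 ? $7.ljust(6, "0").to_i : 0   # pure integer path
  Time.local($1.to_i, $2.to_i, $3.to_i, $4.to_i, $5.to_i, $6.to_i, microsec)
end

puts string_to_time("2010-01-12 12:34:56.125014").usec   # => 125014
puts string_to_time("2010-01-12 12:34:56.125").usec      # => 125000
```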

The benchmarks are here:
And the patch is attached to Jacob's ticket here: