ceil method

Oftentimes the ceil method is used for patterns that involve grouping. For example, if I have an array of objects and I want to group them in rows of 3, I might do this:

(@objects.size / 3.0).ceil

So if size returns 2, then the above expression returns 1.

Here’s my question. In terms of arithmetic, I don’t understand why 3.0 must be used and not just 3. If I were to calculate 2/3.0 and 2/3 on a calculator, both return the same result, after all, since the .0 is meaningless. It’s like 3.00 or 3.0000; it all means the same. So why does ceil return different results with 2/3.0 and 2/3?

Thanks for any response.


Just try 2/3 without the ceil… As far as I remember, dividing an int by an int returns an int, so you have to use 2/3.0 to tell Ruby that you want a float; at least this is true for C, C++, and C#.

Let's ask irb:

1.9.2-p320 :002 > 3/2
=> 1
1.9.2-p320 :003 > 3.0/2
=> 1.5

Walter
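To tie that back to the original grouping example, running ceil on both forms shows the difference immediately (a quick sketch):

```ruby
# Integer division truncates first, so ceil has nothing left to round up
(2 / 3).ceil    # 2 / 3   => 0, so ceil gives 0 -- wrong row count
(2 / 3.0).ceil  # 2 / 3.0 => 0.666..., so ceil gives 1 -- one row, as intended
```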

To clarify (and expand, thus re-muddying) the preceding answers somewhat:

If a mathematical expression is written with only integers, most
languages will assume that you want an integer answer. How it will
deal with the remainder may vary, such as rounding, truncation, or
"banker's rounding". (IIRC VB or some such thing uses that. The
difference from normal rounding is that normally halves round up,
while in banker's rounding, halves round to whichever way gives an
even number.)

So, you use .0 to turn one of those integers into a floating point number.

You can achieve the same thing with a multiplication by 1.0. You'll
see this used where the numbers are variables rather than literals,
especially in languages that make you declare a variable's type, and
distinguish between integers and floating point numbers.
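For instance (count is just a hypothetical variable here, standing in for @objects.size), each of these forces float division in Ruby:

```ruby
count = 2

rows_a = (count / 3.0).ceil      # float literal divisor         => 1
rows_b = (count * 1.0 / 3).ceil  # multiply by 1.0 first         => 1
rows_c = (count.to_f / 3).ceil   # Ruby-specific: Integer#to_f   => 1
```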

Either way, in a complicated expression, you may need to be careful
about *which* literal you tack .0 onto, or *when* you multiply by 1.0.
Getting that wrong may wind up with some deeper parts of the
expression yielding integer results, when you really wanted a float.
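A sketch of how placement matters (the values here are made up purely for illustration):

```ruby
a = 2
b = 4

# .0 (or * 1.0) applied too late: b / 3 is integer division,
# so the fraction is already lost before the float math happens
too_late = (1.0 * (b / 3) + a).ceil  # b / 3 => 1, so (1.0 + 2).ceil => 3

# converting inside the inner division keeps the fraction
intended = ((b / 3.0) + a).ceil      # b / 3.0 => 1.333..., so ceil => 4
```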

-Dave

Yeah, that's pretty much the key point: when you invoke the / method on an Integer and pass it a float, the integer is coerced to a float and the result is a float, so ceil has a fractional part to round up.
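As an aside (this method is in Ruby core, though nobody above mentioned it): Integer#fdiv always performs float division no matter what you pass it, which sidesteps the question of where to put the .0:

```ruby
# fdiv returns a Float even for two integer operands
rows = 2.fdiv(3).ceil  # 2.fdiv(3) => 0.666..., so rows => 1
```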