Neither referential integrity nor transactions are handled by Rails, but Rails *does* provide methods for handling both elegantly and simply, including niceties such as object rollback in the application on transaction failures.
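For what it's worth, the transaction handling mentioned above looks roughly like this in ActiveRecord. The `Account` model and its `withdraw!`/`deposit!` methods are hypothetical stand-ins; `transaction` and `ActiveRecord::Rollback` are the real API (this sketch assumes a running Rails app, so it isn't runnable standalone):

```ruby
# Hypothetical Account model; `transaction` is ActiveRecord's block API.
Account.transaction do
  alice.withdraw!(100)   # hypothetical model methods
  bob.deposit!(100)
  # Any exception raised inside the block issues a ROLLBACK at the DB.
  # Raising ActiveRecord::Rollback rolls back *without* re-raising to
  # the caller, which is the usual way to abort cleanly.
end
```

The nice part is that the application code never writes BEGIN/COMMIT by hand; you just raise on failure.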
Also, there seems to be a misunderstanding of speed -vs- scalability that I find all too common in these discussions. While the two appear related, and *are* related in many circumstances, speed generally comes from efficiency, while scalability generally comes from architecture.
Say, for instance, that one system handles 1,000 requests per second per measure of computing performance, while another handles 100.
Which is more scalable?
Be careful, it's a trick question... there is simply not enough information to answer it!
The important question is: what and where are the bottlenecks?
Say the 1,000 request per second system bottlenecks at the DB at 2,000 requests per second, while the 100 request per second system bottlenecks at the network at 10,000 requests per second...
Obviously the 1,000 request per second system is more efficient in terms of $/request/second up to 2,000 requests per second, but does it *scale*?
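To put rough numbers on that: a system's effective capacity is the minimum of its tiers' capacities, so adding hardware to a non-bottleneck tier buys nothing. The figures below are just the hypotheticals from this example, and the helper is mine:

```ruby
# Effective capacity of a multi-tier system is the minimum of its
# tiers' capacities (the bottleneck wins).
def effective_capacity(tiers)
  tiers.values.min
end

# "Fast" system: each app node does 1,000 req/s, DB tops out at 2,000 req/s.
fast = { app: 1_000 * 8, db: 2_000 }   # 8 app nodes: app tier = 8,000 req/s
# "Slow" system: each app node does 100 req/s, network tops out at 10,000 req/s.
slow = { app: 100 * 8, net: 10_000 }   # 8 app nodes: app tier = 800 req/s

effective_capacity(fast)  # => 2000 -- stuck at the DB; more app nodes won't help
effective_capacity(slow)  # => 800  -- keeps scaling with app nodes, up to 10,000
```

The "fast" system hits its ceiling first; the "slow" one keeps growing linearly by adding app nodes until the network gives out.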
As to your discussion with your friend: any work removed from the DB is good for scaling, IMHO. Stored procedures are *efficient*, but scaling the DB is nearly always the most onerous part of a really large web project, so moving work from the DB layer into the application layer is likely less efficient, but more scalable.