Attachment_fu S3 uploads killing Mongrel

I was wondering if anyone here has seen a similar error to this...

From mongrel.log

transactions.rb:85:in `transaction': Transaction aborted
        from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `call'
        from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `join'
        from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `join'
        from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `each'
        from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `join'
        from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/mongrel_rails:136:in `run'
        from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/command.rb:211:in `run'
        from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/
        from /usr/bin/mongrel_rails:16:in `load'
        from /usr/bin/mongrel_rails:16

This happens on Attachment_fu uploads to S3 only in production
(Attachment_fu uploads to S3 in development mode work fine). All my
other logs (production.log, Apache logs, etc.) are clean, and I haven't
been able to track down the source of the problem. Everything is
properly validated before the upload is called, and the records are
created in the database, but the image is never uploaded to S3 and I'm
getting the lockup shown above, which requires me to restart my
mongrels. I've been stuck on this for a good two weeks and haven't been
able to find any working solutions anywhere. I would greatly
appreciate any advice.

Code follows...


  def create
    @foo = Foo.new(params[:foo])
    respond_to do |format|
      if @foo.save
        format.html { redirect_to foo_url(@foo) }
      else
        format.html { render :action => "new" }
      end
    end
  end


after_create :save_logo
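
For context, the model side of an attachment_fu S3 setup (the model where a callback like save_logo would live) typically looks something like the sketch below; the class name and option values are hypothetical:

```ruby
class Foo < ActiveRecord::Base
  # attachment_fu configuration: store files on S3, only accept images,
  # and cap uploads at 5MB (values are illustrative)
  has_attachment :storage      => :s3,
                 :content_type => :image,
                 :max_size     => 5.megabytes

  # adds attachment_fu's size/content-type validations
  validates_as_attachment
end
```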

I have seen similar errors using aws/s3 without attachment_fu (though I may not be on the latest aws/s3 version).
I gave up trying to solve or prevent it; it's so intermittent that I began to think the S3 connection may just flake sometimes, so I added retry logic instead. Since my uploads occur asynchronously, that works for me.
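
That retry approach can be sketched as follows. The wrapper and its name are hypothetical (not from the thread), and it assumes the flaky S3 call raises the EOFError/EPIPE errors reported elsewhere in this thread:

```ruby
# Minimal retry wrapper, assuming the flaky call raises EOFError or
# Errno::EPIPE when the S3 connection drops. Names are hypothetical.
def with_s3_retries(max_attempts = 3)
  attempts = 0
  begin
    yield
  rescue EOFError, Errno::EPIPE
    attempts += 1
    raise if attempts >= max_attempts  # give up after max_attempts tries
    sleep(0.1 * attempts)              # brief backoff before retrying
    retry
  end
end

# Usage (assumes the aws-s3 gem is loaded):
# with_s3_retries { AWS::S3::S3Object.store('logo.png', data, 'my-bucket') }
```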

-Andrew Kuklewicz

I've mentioned it to Marcel. It's definitely some intermittent bug,
probably at a lower level than AWS (his unit tests pass just fine).

However, doing a lot of S3 stuff with attachment_fu probably isn't the
best thing either. It sure is convenient, but you're tying up
precious Rails processes uploading data to Amazon.
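
One way to avoid tying up a mongrel while bytes stream to Amazon is to hand the upload to a background worker. The sketch below is only illustrative: a bare worker thread and queue stand in for a real job system, and the S3 call in the usage comment is an assumption about where the upload would happen:

```ruby
require 'thread'

# A single worker thread drains a queue of upload jobs, so the request
# cycle only has to enqueue work instead of waiting on Amazon.
UPLOAD_QUEUE = Queue.new

Thread.new do
  loop do
    job = UPLOAD_QUEUE.pop   # blocks until a job arrives
    job.call
  end
end

# In the controller, enqueue instead of uploading inline, e.g.:
# UPLOAD_QUEUE << lambda { AWS::S3::S3Object.store(key, data, bucket) }
```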

FYI: I've been doing some further testing, and after adding
:persistent => false to Base.establish_connection! (in s3_backend.rb) we
have not seen any errors today (tested with images up to 5MB). I'm not
sure if this change is making the difference, or if S3 is just less flaky
today. Has anyone else received EPIPE or EOFError errors with
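
For reference, the :persistent change described above would look roughly like this where attachment_fu's s3_backend.rb establishes its connection (a sketch; the credential plumbing from config/amazon_s3.yml is abbreviated into an s3_config hash):

```ruby
require 'aws/s3'

# Disable persistent HTTP connections so each request opens a fresh
# socket instead of reusing one that S3 may have silently dropped.
AWS::S3::Base.establish_connection!(
  :access_key_id     => s3_config[:access_key_id],
  :secret_access_key => s3_config[:secret_access_key],
  :persistent        => false
)
```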

I just asked Marcel about it, he said folks still reported errors
after trying that.

I'm quite sure the AWS/S3 gem doesn't use MySQL. From the stack
traces I've seen, it seems to originate in the Ruby standard net/http library.

Hi Jamie,
I'm seeing the exact same error on production with attachment_fu and S3.

It seems to occur after it's been idle for about 4 hours.

mixplate wrote:

I think we have a similar problem, but I'm on FCGI/lighttpd.

This is what I get:

EOFError (end of file reached):
   /usr/local/lib/ruby/1.8/net/protocol.rb:133:in `sysread'
   /usr/local/lib/ruby/1.8/net/protocol.rb:133:in `rbuf_fill'
   /usr/local/lib/ruby/1.8/timeout.rb:56:in `timeout'
   /usr/local/lib/ruby/1.8/timeout.rb:76:in `timeout'
   /usr/local/lib/ruby/1.8/net/protocol.rb:132:in `rbuf_fill'
   /usr/local/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
   /usr/local/lib/ruby/1.8/net/protocol.rb:126:in `readline'
   /usr/local/lib/ruby/1.8/net/http.rb:2017:in `read_status_line'
   /usr/local/lib/ruby/1.8/net/http.rb:2006:in `read_new'

I got the same error. A small patch in AWS::S3 fixed it.