Unzipping a file on S3

I have a Zip file on S3 that I uploaded with Paperclip. It contains a collection of related resources (an HTML5-animated banner ad unit and all of its supporting files), and my client would like to be able to review that ad in place. I found a lot of example code on the Web (some of which I had written myself, years ago) for exploding a Zip file and exporting its component files individually. But all of those solutions are built around the same recipe:

  # inside your uploader (Carrierwave) or processor (Paperclip)
  create a tempfile
  copy the uploaded file to it
  create a Zip::ZipFile reference to the tempfile
  iterate over its contents, looking for "real" files
    read the data
    create a new CW or PC instance and attach the data to it
    save that new instance
  end
  delete the original Zip file
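
In rubyzip terms, that recipe comes out roughly like this. This is only a sketch: `Asset` and its `file` attachment are hypothetical stand-ins for whatever Paperclip model you actually use, and the filename/content-type plumbing is glossed over.

  require 'zip'       # rubyzip; older releases call the class Zip::ZipFile
  require 'tempfile'

  # Sketch of the usual "burst" recipe, Paperclip flavor. `Asset` and the
  # `file` attachment are hypothetical names.
  def burst(record)
    Tempfile.create(['upload', '.zip']) do |tmp|
      tmp.binmode
      record.file.copy_to_local_file(:original, tmp.path) # fetch the Zip from S3
      Zip::File.open(tmp.path) do |zip|
        zip.each do |entry|
          next unless entry.file?                         # skip directory entries
          base = File.basename(entry.name, '.*')
          Tempfile.create([base, File.extname(entry.name)]) do |member|
            member.binmode
            member.write(entry.get_input_stream.read)     # this member's bytes
            member.rewind
            Asset.create!(file: member)                   # new equal-peer record
          end
        end
      end
    end
    record.file.destroy                                   # delete the original Zip
  end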

While that works just fine for "bursting" a Zip file into new equal-peer records, it doesn't preserve the archive's internal folder structure, so the HTML no longer knows where to find its resources.

What I am looking for is the equivalent of this:

  cd path/to
  unzip zip.zip

Except that the files are on S3. I'm on EC2 when I process this, if that makes any difference.

What I am doing right now is unzipping the file on the fly for each request and streaming the requested sub-file with send_data, which is horribly inefficient and will probably not survive production. Here's how that works, if you're curious:

https://gist.github.com/walterdavis/4cc538c03f6809447fc3
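
In outline, it does something like this (a simplified sketch, not the gist verbatim; `Ad` and its `zip` attachment are hypothetical names):

  # Re-open the Zip from S3 on every request and stream a single member
  # with send_data. `params[:path]` names the member being requested.
  class BannersController < ApplicationController
    def asset
      ad = Ad.find(params[:id])
      Tempfile.create(['ad', '.zip']) do |tmp|
        tmp.binmode
        ad.zip.copy_to_local_file(:original, tmp.path)  # every request pays this cost
        Zip::File.open(tmp.path) do |zip|
          entry = zip.find_entry(params[:path])         # e.g. "images/frame1.png"
          raise ActiveRecord::RecordNotFound unless entry
          send_data entry.get_input_stream.read,
                    type: Rack::Mime.mime_type(File.extname(entry.name)),
                    disposition: 'inline'
        end
      end
    end
  end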

Can anyone suggest an approach, maybe using Fog directly, that I could use to explode the Zip while maintaining its internal structure?
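
Here's roughly what I'm imagining, if it helps clarify the question. The bucket name, key prefix, and local path are all made up, and I don't know whether this is a sound use of Fog:

  require 'zip'
  require 'fog'

  # One pass that writes each Zip member back to S3 under a key that
  # mirrors its path inside the archive. Credentials, bucket, and the
  # "ads/42" prefix are placeholders.
  storage = Fog::Storage.new(
    provider:              'AWS',
    aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  )
  bucket = storage.directories.get('my-bucket')

  Zip::File.open('/tmp/ad.zip') do |zip|
    zip.each do |entry|
      next unless entry.file?
      bucket.files.create(
        key:    "ads/42/#{entry.name}",   # entry.name keeps the folder structure
        body:   entry.get_input_stream.read,
        public: true
      )
    end
  end

If that's sound, the banner's index.html could then be served straight from S3, with all of its relative paths resolving against the same prefix.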

Thanks,

Walter