So with 200 users, a lot of this stuff just doesn't matter (e.g. S3-served static assets versus static assets served straight from disk). The choices you've made sound sensible, though, and stuff like switching from Passenger to nginx + unicorn isn't particularly hard.
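For reference, the nginx half of an nginx + unicorn setup is just a small proxy config. A minimal sketch, assuming unicorn listens on a unix socket (the paths and server_name here are made up - adjust to your app):

```nginx
# Minimal nginx vhost sketch for proxying to unicorn.
# Socket path, root and server_name are assumptions, not your real layout.
upstream unicorn {
  server unix:/var/www/myapp/shared/tmp/sockets/unicorn.sock fail_timeout=0;
}

server {
  listen 80;
  server_name example.com;
  root /var/www/myapp/current/public;

  # Serve static files straight from disk; hand everything else to unicorn.
  try_files $uri/index.html $uri @unicorn;

  location @unicorn {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://unicorn;
  }
}
```

The `try_files` line is also what gets you disk-served static assets for free, which at your scale is plenty.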
I have found Airbrake to be a little flaky of late - we stopped getting exception notifications and it took 4-5 days of pestering their support guys to get it fixed. I've heard good things about Bugsnag, although I haven't got around to leaving Airbrake yet.
You may wish to consider your disaster recovery plans: if your VPS should fail, how would you replace it? I assume you have backups of the data (or, better, a slave continuously replicating the master database), but the server setup is important too: the last thing you want to be doing after such an incident is spending half a day reinstalling and reconfiguring Apache, Rails etc. I would highly recommend automating how you build server instances. Chef, Puppet, Sprinkle, homegrown - to me it doesn't matter so much, as long as you can bring up new instances easily. You may be in an environment where you can build images that servers boot from (e.g. EC2 allows you to make AMIs), in which case that is eventually a good idea too.
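To give a flavour of what "automating how you build server instances" looks like, here's a minimal Chef-style recipe sketch. Everything in it (package names, paths, the template name) is an illustrative assumption, not your actual setup:

```ruby
# Minimal Chef recipe sketch, e.g. cookbooks/myapp/recipes/default.rb.
# Package names, paths and the template are assumptions for illustration.

# Install the packages the app server needs.
%w[nginx mysql-client ruby].each do |pkg|
  package pkg
end

# Make sure nginx starts on boot and is running now.
service 'nginx' do
  action [:enable, :start]
end

# Drop the vhost config from a template and reload nginx when it changes.
template '/etc/nginx/sites-enabled/myapp' do
  source 'nginx-site.erb'
  notifies :reload, 'service[nginx]'
end
```

The tool matters less than the property you get from it: rebuilding a box becomes "run the recipe" rather than a half-day of remembering what you configured by hand.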
You will eventually want to split staging from production, as sharing a box will probably bite you at some point: you can't load test staging without affecting production, a badly written SQL query that you're trying out on staging could compromise performance on production, and stuff like testing a new version of MySQL or Ruby is harder too.
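If you deploy with Capistrano, the multistage setup makes the split cheap once you have a second box - each stage just points at its own server. A sketch, with hostnames invented for illustration:

```ruby
# config/deploy/production.rb - Capistrano multistage sketch.
# Hostnames and the deploy user are assumptions.
server 'app.example.com', user: 'deploy', roles: %w[web app db]

# config/deploy/staging.rb - same app, separate box, so load tests and
# experimental queries can't touch production.
server 'staging.example.com', user: 'deploy', roles: %w[web app db]
set :rails_env, 'staging'
```

Then `cap staging deploy` and `cap production deploy` target the two machines independently.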
A lot of this can probably wait though.
Fred