Looking for Database experience - DB2 - MySQL- Postgres

"me too!"

but, more specifics:

around 6 years ago postgres got dinged a lot for speed, but i never heard anyone ding it for reliability. a lot of people liked mysql more because, for them, speed trumped reliability, and "nobody" used all those fancy postgres features (real joins, subqueries, etc.) anyway.

well, it's 6 years later, and from what i've seen postgres now outpaces mysql on all but the simplest tests. mysql is still a speed demon, is starting to get better concurrency support, and i've heard claims that it actually does real transactions these days. but postgres is now to the point that the spamassassin people recommend it as the db of choice for the bayes db.

high availability is another story. for one thing, how do you define high availability? but postgres has various clustering options (then again, so does mysql), plus failover options, real transactions, live backups, and so on.

i imagine oracle is still faster (no surprise: oracle pays people to do nothing but sit around making it faster), but if you're starting out free and don't want to pay through the nose when you start to scale up, you should probably be looking seriously at postgres.

-faisal

Can you guys describe how you would put together a high availability
solution in MySQL or Postgres? I would describe high availability as:

i can speak to postgres to some extent.

- Provides for redundancy either via clustering or replication

multi-master:
http://pgcluster.projects.postgresql.org/

master-slave:
http://gborg.postgresql.org/project/slony1/projdisplay.php

I haven't used either of these, so I don't know how solid they are.

- Has almost immediate failover

http://pgpool.projects.postgresql.org/

or, if you're rolling your own you might look at using heartbeat:
http://www.taygeta.com/ha-postgresql.html

that said, rsync is not the way to replicate a live database: copying the data files out from under a running postgres doesn't give you a consistent snapshot. don't do that.
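to give a feel for how pgpool fits in: your clients talk to pgpool instead of talking to postgres directly, and pgpool proxies to whichever backend is alive. a minimal sketch -- the host name and the 9999 port here are made-up examples, not anything from the pgpool docs:

```shell
# hypothetical layout: pgpool on dbproxy:9999 in front of the real
# postgres backends (which still listen on 5432 as usual)
PGPOOL_HOST=dbproxy
PGPOOL_PORT=9999

# clients connect to pgpool exactly as if it were postgres; when a
# backend dies, pgpool handles the failover and the connection string
# below never has to change
echo "connect with: psql -h $PGPOOL_HOST -p $PGPOOL_PORT mydb"
```

the nice property is that failover is invisible to the application: nothing in the app's config points at an individual database box.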

- Does not take a lifetime to recover from a failure

other than the failover mechanisms described above, you probably also want to run scheduled backups in pg_dump's custom (binary) format, e.g.:
pg_dump --format=c --file=backup_file_name db_name

restore would then be something like:
pg_restore -d db_name backup_file_name
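to make "scheduled" concrete, here's a sketch of a script you could drop into cron to get dated nightly dumps. the backup directory and db name are assumptions; adjust to taste:

```shell
#!/bin/sh
# nightly custom-format dump of one database; BACKUP_DIR and DB are
# placeholders, not anything standard
BACKUP_DIR=${BACKUP_DIR:-/var/backups/pgsql}
DB=${DB:-mydb}
STAMP=$(date +%Y%m%d)                 # e.g. 20060101
OUT="$BACKUP_DIR/$DB-$STAMP.dump"

echo "dumping $DB to $OUT"
pg_dump --format=c --file="$OUT" "$DB" || echo "pg_dump failed" >&2
```

dating the filenames means you keep a rolling history instead of overwriting your only backup with a possibly-broken one; each day's file restores with the pg_restore command above.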

- Has ACID transaction support & will fail a transaction, rather than
drop data

see http://www.ddj.com/dept/architect/184413354

iirc there are config options related to filesystem write integrity (fsync and friends) that let you trade some level of reliability for some speed.
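for example, in postgresql.conf (the exact set of related options varies between versions, so treat this as a sketch):

```
# postgresql.conf
fsync = on      # force WAL writes out to disk before reporting commit;
                # setting this to off buys speed but risks corrupting the
                # database if the box crashes or loses power
```

the default is on, and for anything you'd call "high availability" you want to leave it there.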

I have used postgres [on FreeBSD] for the last 5 years, have had only 1 crash, & have never lost any data.

i've used it on mac os x and Linux and it's been totally solid for me. one of the os x installs was on a laptop, and while the load was very low, i was impressed that constantly sleeping and waking the machine didn't faze psql at all.

I once half-heartedly tried to set up slony, but I was not confident that it was up to the same standard as postgres itself.

i have the same concerns.

-faisal