Multiple customers - keeping the data separate - how?

I’m trying to get a handle on the different ways of maintaining data separation in Rails. From what I’ve read it looks like usually the security is handled entirely as an aspect within the Model.

I constantly find it amusing that whenever a 'new' way of building applications comes along, it ignores the security systems that went before. First we had operating system security with its user and group database. Then we had databases with their own security model. Now we have web apps reinventing that particular wheel once again, sitting in a single operating system user space and logging onto the database as a single all-powerful user.

Unfortunately the application I have in mind involves account data, and I can't afford a bug in an application exposing one customer's data to another. I need something more substantial than that. (And there are other reasons - such as backup.) However I still want to share physical infrastructure.

My thoughts are that there should be a URL per customer driving their own Mongrels locked onto their own version of the database. However the standard infrastructure support tools don't support that way of doing things.

Are there any other thoughts about how the security separation should be enforced?

Rgs

NeilW

Neil,

I found this because I'm looking at moving a .NET application to Rails and was looking for suggested ways to accomplish the same thing. I was curious because it seemed to me as though once you had this kind of requirement, a lot of the magic SQL methods became difficult to use, but since 37signals' apps are all in this same vein, perhaps there was a more elegant way. Rails seems absolutely genius for building a single-client app, but when you add on multi-tenancy and all the entitlement issues, it starts feeling more like an incremental improvement. Perhaps I'm missing the right tricks.

The .NET app I was referring to does use (loosely speaking) model-based security, as do (I'd guess) very nearly all multi-tenancy applications. Certainly the Bank of America does not create individual database users/instances for each online banking customer. I don't know how Salesforce.com does it, because they have a huge number of clients on a very complicated application, but their infrastructure is probably somewhat exotic and quite certainly expensive.

To me the idea of managing account security through the database is dreadful. My experience is that you will start out with an entitlement model that is either too complicated or too simplistic; I've yet to see one that was right the first time. In either case security and entitlement will likely end up as part of your application layer whether you want it to or not, at least in the sense that you are going to be turning UI elements on and off based on roles. Also, separate database instances are, in my humble opinion, a big mistake unless there is a really great reason. It is far more advantageous to maintain one database rather than one DB per client.

The approach you seem to want is industrial-strength, to put it lightly. Is this because clients want it that way, or you do? Clients will always ask for more security if you offer it without asking for more money or taking away features. But I've found that clients don't really care how security is accomplished so long as you tell them, "it is taken care of." Perhaps the domain you're in is different, but I've done several applications like this over the past 8 years and have never needed to go the route you're speaking of.

It would be nice if someone could describe the architecture behind Basecamp. Are Basecamp instances separate for each customer? That is, does everyone have their own database, separate Rails directory, etc.?

My very naive solution is this:

1. Clone a Rails project to a new directory.
2. Create a new DB for the new client. (This can come from a default SQL script which sets up the tables etc., so there is no need for migrations.)
3. Point the cloned Rails app at the new DB.
4. Point a subdomain at the new Rails app.

But this becomes a maintenance headache when you have to update the code. You have to update the cloned apps.

So is there a better way to do it? Can one Rails app handle multiple sites via subdomains, with a separate DB for each site?

I'd bet that this is not the case. I really don't see the need for separate databases, and I'm sure most if not all multi-user web apps wouldn't do such a thing.

If you build good model-level security, that is, making sure that every query is constrained to the context of a particular user, then every additional piece of functionality you build on top will be locked down to the access provided by your architecture. The key is to make sure you get that right before you start adding complexity.
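As a rough illustration of that principle, here is a minimal, framework-free Ruby sketch. The Account/Project names and the in-memory table are invented for the example; in Rails this would be an association-scoped find such as current_account.projects.find(id):

```ruby
# Illustrative in-memory "table"; in Rails this would be a projects table.
PROJECTS = [
  { id: 1, account_id: 10, name: 'Alpha' },
  { id: 2, account_id: 20, name: 'Beta' }
]

class Account
  attr_reader :id

  def initialize(id)
    @id = id
  end

  # The only way to fetch a project is through an account, so every query
  # is implicitly constrained to that tenant's rows.
  def find_project(project_id)
    PROJECTS.find { |p| p[:id] == project_id && p[:account_id] == id } ||
      raise('RecordNotFound')
  end
end

tenant = Account.new(10)
tenant.find_project(1)[:name]   # => "Alpha"
# tenant.find_project(2) raises: the row exists but belongs to another
# tenant, which is indistinguishable from "no such record".
```

The point is that cross-tenant access is impossible by construction, not by remembering to add a WHERE clause to each query.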

The point is that making sure that every query is constrained *is* adding complexity. I might get it wrong. My tests might not be complete. There might be a bug.

If you have a separate database (as in 'CREATE DATABASE', tied in with its own database.yml per tenant - not separate mysql/pgsql processes), then I can build my model as a single tenant system. Problem solved (although managing it is another matter).

Plus backups per customer are dead easy - just checkpoint and dump the relevant database.

I'm trying to understand why solving the problem again is better than what is there already.

Ross Riley wrote:

As you say managing it is another matter. Whilst you say that this method is solving the problem again, I would say that it's probably a different problem with its own solution.

The separate user system for databases isn't really designed for storing multiple versions of the same application data; surely you are multiplying complexity by the number of users you cater for. Imagine rolling out migrations to hundreds or thousands of databases, or scaling those same databases over multiple servers.

I'm sure there's just as much risk of a bug creeping into this way of working (what if you read the wrong yml file, or made a mistake in the backup manager script?). The benefit of a single database is that you only have to worry about application security at a single point of entry; triple the amount of time you spend writing tests and verifying this point, and I'm sure you'll save much more through the efficiency of a simple, scalable system.

Neil Wilson wrote:

> The point is that making sure that every query is constrained *is* adding complexity. I might get it wrong. My tests might not be complete. There might be a bug.

In theory, yes. In practice I have never found this to be a problem if your users/groups/accounts model is remotely sane.

> If you have a separate database (as in 'CREATE DATABASE', tied in with its own database.yml per tenant - not separate mysql/pgsql processes), then I can build my model as a single tenant system. Problem solved (although managing it is another matter).

You will spend ten times as much maintaining the app as you will building it. Building a system that's easier to develop but more costly to maintain is not the course I would take.

> Plus backups per customer are dead easy - just checkpoint and dump the relevant database.

OK, you got me there. I'd still rather take the time to develop the DB dumper than to maintain multiple databases.

> I'm trying to understand why solving the problem again is better than what is there already.

You're speaking as though Rails' approach to application-level security is "new" but it's not. It's been around as long as multi-user applications have, which means probably older than UNIX. These are different approaches that serve different purposes.

Your suggested approach is not ludicrous. SAP has taken this tack with their on-demand CRM offering. Each client gets their own database instance, but the application code is booted off a central repository, and the instance runs in its own sandbox to guarantee the availability of RAM, CPU, etc. Time will tell how well this works, but "low maintenance cost" and "SAP" have rarely been uttered in the same sentence. I think they did it primarily to differentiate from Salesforce, and they will eventually end up with a classic ASP-style system with each client running their own fragmented schema and codebase.

Still, virtually every multi-tenant application out there is built the way Rails works by default. If you feel more comfortable implementing this through the database, then knock yourself out.

> Neil Wilson wrote:
> > The point is that making sure that every query is constrained *is* adding complexity. I might get it wrong. My tests might not be complete. There might be a bug.

> In theory, yes. In practice I have never found this to be a problem if your users/groups/accounts model is remotely sane.

I don't think I'd go that far. Assume that your application's usage grows unbounded (like an Internet site), you WILL eventually lose to big numbers. Now, assuming you partition your data along some sort of meaningful, but arbitrary metric, like all the data that belongs to a particular customer, you may STILL lose to big numbers, but what I've found is that the partitioned data grows far more slowly than the aggregate, greatly reducing the chance that you will have to confront this problem at all.

> > If you have a separate database (as in 'CREATE DATABASE', tied in with its own database.yml per tenant - not separate mysql/pgsql processes), then I can build my model as a single tenant system. Problem solved (although managing it is another matter).

> You will spend ten times as much maintaining the app as you will building it. Building a system that's easier to develop but more costly to maintain is not the course I would take.

There is another way to do this: make the database connections dynamic based on URL or some such. So, you wind up with a single DB connection in your database.yml file that points to a DB that contains connection information to the DB used by the client. Assuming that you will eventually need to update software/schemas you can store a schema/app version in this master DB lookup table as well. The assumption here is that for a short period of time you will have two versions of software/schema deployed as you roll out an update. This would go a long way to reduce the pain of managing such a system. Now, as far as convincing AR to work this way... I'm not sure, I am investigating the possibility as well, and I think it can be done. It may need some overrides (some of the 'find' methods in AR::Base may need to be overridden to take a connection object).
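A plain-Ruby sketch of that master-lookup idea may make the control flow clearer. The MASTER hash stands in for the lookup database and the names are invented; in Rails the returned entry would feed an ActiveRecord::Base.establish_connection call:

```ruby
# Stand-in for the master lookup DB: maps a tenant's subdomain to that
# tenant's connection details and currently deployed schema version.
MASTER = {
  'acme'    => { database: 'app_acme',    schema_version: 12 },
  'initech' => { database: 'app_initech', schema_version: 11 } # mid-rollout
}

def tenant_connection(host)
  subdomain = host.split('.').first
  entry = MASTER.fetch(subdomain) { raise "unknown tenant: #{subdomain}" }
  # A real implementation would now call something like:
  #   ActiveRecord::Base.establish_connection(database: entry[:database], ...)
  # and could branch on entry[:schema_version] while two releases coexist
  # during a rolling upgrade.
  entry
end

tenant_connection('acme.example.com')[:database]   # => "app_acme"
```

Storing the schema version alongside the connection info is what lets two software versions run side by side during a rollout.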

> > Plus backups per customer are dead easy - just checkpoint and dump the relevant database.

> OK, you got me there. I'd still rather take the time to develop the DB dumper than to maintain multiple databases.

There are other significant benefits to this approach:

- Incremental application migration
- Overall better performance
- The ability to manage performance better (one big hot client can be moved to their own DB server)

The issue is how to make an approach like this work within the Rails framework.

The DB dumper is something that has to be maintained! You're not getting out of the fact that you will have to do work to make it seem like each tenant is an island. Furthermore, what about a DB loader? The separate DB approach means that you get dump/load for free, then your only costs are related to how you manage getting a tenant to their data, and migration issues (not trivial, I know).

> > I'm trying to understand why solving the problem again is better than what is there already.

> You're speaking as though Rails' approach to application-level security is "new" but it's not. It's been around as long as multi-user applications have, which means probably older than UNIX. These are different approaches that serve different purposes.

> Your suggested approach is not ludicrous. SAP has taken this tack with their on-demand CRM offering. Each client gets their own database instance, but the application code is booted off a central repository, and the instance runs in its own sandbox to guarantee the availability of RAM, CPU, etc. Time will tell how well this works, but "low maintenance cost" and "SAP" have rarely been uttered in the same sentence. I think they did it primarily to differentiate from Salesforce, and they will eventually end up with a classic ASP-style system with each client running their own fragmented schema and codebase.

> Still, virtually every multi-tenant application out there is built the way Rails works by default. If you feel more comfortable implementing this through the database, then knock yourself out.

Generally agreed. But the "all tenants in one DB" approach causes significant problems. The fact that it is essentially impossible to "isolate" the performance and storage between tenants at run time generally makes Rails less, how shall I say this? "agile". Although I don't think it is too hard to rectify this.

Jim Powers

The obvious way to do it is of course to split it at the domain level.

tenant1.mydomain.com CNAME server1
tenant2.mydomain.com CNAME server2

One machine - one tenant.

However that's still expensive, even with on-demand machines like Amazon EC2 (and you end up with a lot of machines to watch). It would be nice to be able to fold that onto fewer machines, yet keep the 'share nothing' approach.

The payoff of making the architecture work at the infrastructure level is an infinitely simpler application - less code (as long as that code is truly gone and doesn't just pop up again in your management systems). You can write your application as a simple one-company setup, which could then, in theory, be wrapped up in Instant Rails and sold as a desktop application.

One of the things I like about Rails is its disdain for threading, and its share nothing architecture. It keeps the code clean and understandable. Moving the multi-tenancy issue to infrastructure seems to me to be in the same vein.

NeilW

Are you suggesting that application code resides on each machine separately? Then this situation is the same as the one I described before. But how do you manage code updates and database updates? Every database and every copy of the code will have to be updated.

From what has been discussed so far:

Pros
-------
1. Easy backup
2. One client cannot take down everyone (also easy to isolate problems)
3. Better performance
4. Scales better

Cons
--------
1. Rolling out updates is more difficult. (As Jim said, incremental migration can also be considered an advantage.)
2. Will involve hacking AR

Jim Powers wrote:

> > Neil Wilson wrote:
> > > The point is that making sure that every query is constrained *is* adding complexity. I might get it wrong. My tests might not be complete. There might be a bug.
> >
> > In theory, yes. In practice I have never found this to be a problem if your users/groups/accounts model is remotely sane.

> I don't think I'd go that far. Assume that your application's usage grows unbounded (like an Internet site), you WILL eventually lose to big numbers.

Well in my current case, all of my data is partitioned--nothing Client A sees will ever be seen by Client B. I still think one DB for all clients is much easier.

As for losing to big numbers, I'm not sure exactly what you mean. In any case, it all depends on your business. At small numbers the key is to survive long enough to acquire customers. At medium numbers it's to start making money. At large numbers things start to behave very differently as off-the-shelf solutions kind of cease to work, period.

> > > If you have a separate database (as in 'CREATE DATABASE', tied in with its own database.yml per tenant - not separate mysql/pgsql processes), then I can build my model as a single tenant system. Problem solved (although managing it is another matter).
> >
> > You will spend ten times as much maintaining the app as you will building it. Building a system that's easier to develop but more costly to maintain is not the course I would take.

> There is another way to do this: make the database connections dynamic based on URL or some such. So, you wind up with a single DB connection in your database.yml file that points to a DB that contains connection information to the DB used by the client. Assuming that you will eventually need to update software/schemas you can store a schema/app version in this master DB lookup table as well. The assumption here is that for a short period of time you will have two versions of software/schema deployed as you roll out an update. This would go a long way to reduce the pain of managing such a system. Now, as far as convincing AR to work this way... I'm not sure, I am investigating the possibility as well, and I think it can be done. It may need some overrides (some of the 'find' methods in AR::Base may need to be overridden to take a connection object).

This is a !@#$-load of tricky plumbing to avoid setting and watching client entitlements on database rows. Let alone the havoc this could wreak with managing the database--depending on which one you use, this could complicate how you deal with tablespaces and such.

> There are other significant benefits to this approach:

> - Incremental application migration

This is beneficial if you want to maintain multiple software versions. If that's the case then you might as well just install complete app instances per client and be done with it. Been there many times, will never do it again unless building something like an ERP where it still makes some sense.

> - Overall better performance

TANSTAAFL. If your database server is running twenty database instances, there is going to be some kind of performance hit to that versus one DB with tables 20 times larger. The overhead associated with connection pools and query caches et al. could in many cases be much larger than the hit of scanning tables 20 times longer. I just don't accept this as an open-and-shut benefit right off the bat.

> - The ability to manage performance better (one big hot client can be moved to their own DB server)

There's no reason you can't do this with a multi-tenant system too. For that matter you can run a special client on their own complete system instance with no or very little fancy plumbing.

> The DB dumper is something that has to be maintained! You're not getting out of the fact that you will have to do work to make it seem like each tenant is an island.

Snap response: Implement some kind of to_sql method which can be called recursively through the object tree, starting with the root object representing a client. For all I know facilities for this already exist within ActiveRecord which after all has to know how to generate SQL. Or just serialize stuff into yaml, or something like that.
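For instance, a tenant's object tree can be walked from the root client record and serialized with stdlib YAML. The nested hash below is an invented stand-in for a client and its associations; an ActiveRecord version would descend through the model's has_many relations instead:

```ruby
require 'yaml'

# Stand-in for a root client record plus its associated rows.
client = {
  'name'     => 'Acme Ltd',
  'projects' => [
    { 'name' => 'Alpha', 'tasks' => ['design', 'build'] },
    { 'name' => 'Beta',  'tasks' => ['ship'] }
  ]
}

backup   = YAML.dump(client)         # human-readable, per-tenant backup text
restored = YAML.safe_load(backup)    # round-trips without loss
restored == client                    # => true
```

The YAML text doubles as the human-readable backup mentioned below, which a raw SQL dump does not give you.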

Not to mention that you may find (as I did) that clients want/like human-readable backups, not SQL dumps.

> Furthermore, what about a DB loader?

Read the infile, marshal it into your Model objects, then call the appropriate new/create/save methods. Now you get all your application validation goodies for free and there's no chance to create relationships out of whack with anything else.
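A sketch of that load path in plain Ruby: read the dump, then rebuild each record through the model's constructor so validations run. The Project class and its single validation are invented for the example; in Rails this would be a Project.create! per restored row:

```ruby
require 'yaml'

class Project
  attr_reader :name

  def initialize(attrs)
    @name = attrs['name']
    # Validation runs on every restored record, unlike a raw SQL load.
    raise ArgumentError, 'name required' if @name.nil? || @name.empty?
  end
end

infile   = YAML.dump([{ 'name' => 'Alpha' }, { 'name' => 'Beta' }])
projects = YAML.safe_load(infile).map { |attrs| Project.new(attrs) }
projects.map(&:name)   # => ["Alpha", "Beta"]
```

A malformed row fails loudly at load time rather than silently producing inconsistent data.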

> The separate DB approach means that you get dump/load for free, then your only costs are related to how you manage getting a tenant to their data, and migration issues (not trivial, I know).

I still think the "not trivial" aspect understates it by two-thirds. It seems to me like you're building a unique configuration that will end up having a lot more dependencies on the versions of the framework, O/S, database config, etc. than is obvious from this vantage point. The end result could be that every time you do a major rev of any piece, you risk the whole thing falling apart and being the one guy in the world with that specific problem, and needing to stay on MySQL 3.1 for a year after its release until the low-priority bug gets fixed. Yeah, I know it's a hypothetical, but it's the kind of hypothetical that's bitten me in the rear multiple times. The all-in-one approach has been by far the easiest to maintain and operate of all the approaches I've been involved with.

> Generally agreed. But the "all tenants in one DB" approach causes significant problems. The fact that it is essentially impossible to "isolate" the performance and storage between tenants at run time generally makes Rails less, how shall I say this? "agile". Although I don't think it is too hard to rectify this.

Well like I said above I agree that it poses certain challenges--you end up needing to build a high-performance application even though all your customers are 5-seat installations. I do agree that this is ultimately probably an issue best solved in the database, but I'm not sure that the approach posited here isn't trading getting stabbed for getting shot.


Manu J wrote:

> Are you suggesting that application code resides on each machine separately? Then this situation is the same as the one I described before. But how do you manage code updates and database updates? Every database and every copy of the code will have to be updated.

Capistrano does this already. It deploys to everything - DB servers, app servers, web servers. What you need to do is reduce the number of variables that have to be tweaked for each tenant. I see no reason for separate tenant application code, or even separate branches on the version control system.

> From what has been discussed so far:
>
> Pros
> -------
> 1. Easy backup
> 2. One client cannot take down everyone (also easy to isolate problems)
> 3. Better performance
> 4. Scales better

Separates the tenant management from the application code, allowing the application code to be much simpler. I reckon the venerable 'depot' application from The Book could be made multi-tenanted with this approach.

> Cons
> --------
> 1. Rolling out updates is more difficult. (As Jim said, incremental migration can also be considered an advantage.)

Capistrano will do what you want I feel.

> 2. Will involve hacking AR

I don't see that as being necessary. I reckon all the tenant management can be handled by the DNS or the load balancer, the file system and differing copies of the database.yml file.

I think the fundamental point at which further division becomes difficult is if you try and force more than one tenant into a single running Mongrel instance. I reckon if you stick to the fundamental principle that One Mongrel = One Tenant and make sure each tenant ends up at the right Mongrel it'll work.

I'm extremely encouraged by this discussion. Thanks to everybody taking part.

NeilW

CWK wrote:

> This is a !@#$-load of tricky plumbing to avoid setting and watching client entitlements on database rows. Let alone the havoc this could wreak with managing the database--depending on which one you use, this could complicate how you deal with tablespaces and such.

That tricky plumbing is handled by multiple copies of the database.yml file.

Why not One Tenant = One Database Login = One Database = One database.yml file? All running under Mongrel instances unique to that Tenant.
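For example, the only per-tenant file might be a database.yml along these lines (the database name, login, and host are invented placeholders; everything else in the checkout is identical across tenants):

```yaml
# config/database.yml for tenant "acme" - the only file that differs
# between tenant checkouts.
production:
  adapter:  mysql
  host:     db1.internal
  database: app_acme    # created per tenant with CREATE DATABASE
  username: acme_app    # login with rights to this one database only
  password: changeme
```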

> I still think the "not trivial" aspect understates it by two-thirds. It seems to me like you're building a unique configuration that will end up having a lot more dependencies on the versions of the framework,

No. Very definitely no. The approach has to be the McDonald's approach. Every tenant has to be a replica of the standard model except for the smallest possible changeset.

I'm not proposing code changes between the tenants. Everything is a simple checkout from the main branch of the version control system.

The only things that are unique per tenant are the database.yml file, and the way the load balancing system works out how to send a particular tenant to the right Mongrel instance running that database.yml file.

I'm convinced this can be done in a simple and effective manner.

NeilW

You're going to have gigantic memory usage. Rails instances are very large, on the order of 28M per mongrel instance *minimum*, and you need 2 minimum per application, more if traffic becomes significant.

I understand the innate natural scalability of what you're proposing, but I think you may be missing another poster's very apropos suggestion that if you design the application as multi-tenant, there's nothing preventing you from operating certain instances of it single-tenant.

Since your customers will no doubt fit into some sort of 80/20 rule with respect to usage patterns and resource requirements, why not host all new customers in a single instance, and move out the 20% that require extra juice into separate instances?

Another aspect that I'm curious about is what sort of infrastructure you plan to deploy on. If you're talking about hosting a single instance on a single box, the reliability of a single instance will be poor, and the reliability of your system once you grow beyond a few boxes will be terrible! There will literally be something wrong every day once you have a few hundred boxes, and therefore a few hundred customers.

With multiple tenancy you can spend money on system side scalability and reliability measures and performance and reliability will improve for all of your customers.

There's no question, however, that the "one big system" approach will eventually prove limiting, but at that point, you'll have a blueprint for how to design a scalable and reliable platform that can host many customers' instances, and you can replicate those entire multi-customer systems as you grow further.

Neil Wilson wrote:

> Unfortunately the application I have in mind involves account data, and I can't afford a bug in an application exposing one customer's data to another.

So you have wall-to-wall unit tests, right?

I may be way off track here, but this seems to be pretty simple to do.

1. Maintain a master DB which contains data common to your app.
2. All models which hold client-specific information overload find to take another parameter, client, along the lines of: def find(client, *args) # establish a new ActiveRecord connection here, then push off to super end
3. When you create a new client, run a script to create a new database using a convention: a sanitized client name. Use the same convention above.
4. Associations should still work since in the end everything uses find (might be wrong here).
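A runnable plain-Ruby sketch of steps 2 and 3. The Invoice model, the DATABASES hash, and the app_ prefix are all invented for illustration; the hash stands in for establishing a connection to the per-client database:

```ruby
# Stand-in for the per-client databases created in step 3.
DATABASES = {
  'app_acme_ltd' => { invoices: [{ id: 1, total: 100 }] },
  'app_initech'  => { invoices: [{ id: 1, total: 250 }] }
}

class Invoice
  # Step 3's naming convention: a sanitized client name.
  def self.db_name_for(client)
    'app_' + client.downcase.gsub(/[^a-z0-9]+/, '_')
  end

  # Step 2's overloaded find: the extra client argument selects the database
  # before the usual lookup runs.
  def self.find(client, id)
    db = DATABASES.fetch(db_name_for(client)) # stand-in for establish_connection
    db[:invoices].find { |row| row[:id] == id } || raise('RecordNotFound')
  end
end

Invoice.find('Acme Ltd', 1)[:total]   # => 100
Invoice.find('Initech', 1)[:total]    # => 250
```

The same id resolves to different rows depending on the client passed in, which is the whole point of the convention.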

Vish

Tom Mornini wrote:

You're going to have gigantic memory usage. Rails instances are very large, on the order of 28M per mongrel instance *minimum*, and you need 2 minimum per application, more if traffic becomes significant.

Memory is cheap, and there is still swap space at a push - depending upon usage patterns.

I understand the innate scalability of what you're proposing, but I think you may be missing another poster's very apropos suggestion that if you design the application as multi-tenant, there's nothing preventing you from operating certain instances of it single-tenant.

I'm trying to reduce the complexity of the application model and trade that off against a different architecture in the infrastructure. I'm trying to see if the trade-offs work or not.

Since your customers will no doubt fit into some sort of 80/20 rule with respect to usage patterns and resource requirements, why not host all new customers in a single instance, and move out the 20% that require extra juice into separate instances?

It's not a juice issue. It is entirely about data separation: where in the layers that separation should be enforced, and whether tenanting can be separated out and solved as an infrastructure problem.

There will literally be something wrong every day once you have a few hundred boxes, and therefore a few hundred customers.

That's overly negative. I don't find simple computer systems set up well that unreliable. Good ones run for years and years without problems.

Plus an application can be just as unreliable in code - particularly if it is overly complex.

With multiple tenancy you can spend money on system side scalability and reliability measures and performance and reliability will improve for all of your customers.

Actually I've always found that when you start messing around with complicated hardware structures and start pushing for that next 9 on your reliability percentage, things start to go pear-shaped. At that point, not only do you have a potentially complex and fault prone application, you also have a potentially complex and fault prone architecture that is very, very difficult to change or improve.

I'd rather stop at 'reliable enough' and replicate from there using the simplest techniques possible. If you have a problem it affects a very small percentage of users. And there is nothing to say that you will have that many problems. Simplicity pays large dividends.

NeilW

Neil Wilson wrote:

CWK wrote:

> This is a !@#$-load of tricky plumbing to avoid setting and watching
> client entitlements on database rows. Let alone the havoc this could
> wreak with managing the database--depending on which one you use, this
> could complicate how you deal with tablespaces and such.

That tricky plumbing is handled by multiple copies of the database.yml file.

Why not One Tenant = One Database Login = One Database = One database.yml file? All running under Mongrel instances unique to that Tenant.
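For illustration, one such per-tenant database.yml might look like the fragment below. The tenant name acme, the login, and the password are all made up; the point is simply that each tenant gets its own copy of this file, its own database, and its own database user.

```yaml
# Hypothetical database.yml for tenant "acme" (all values illustrative).
production:
  adapter: mysql
  database: app_acme
  username: acme_user
  password: acme_secret
  host: localhost
```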

Because every time you want to do an update you will need to replicate it perfectly across every instance. If the push fails part of the way through for some reason, you end up in an uncertain state which could leave one or more clients SOL. With multi-tenancy it is a lot easier to set up a staging environment which closely resembles production so you can test updates closely before release.
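The failure mode described above can be sketched in a few lines of Ruby. TENANTS and migrate! are illustrative names; in practice migrate! might shell out to rake db:migrate with each tenant's connection settings, and the stub here simply simulates one tenant failing mid-push.

```ruby
# Illustrative tenant list and a migrate! stub that fails on one tenant,
# simulating a push that dies partway through.
TENANTS = %w[acme globex initech]

def migrate!(tenant)
  raise "connection refused" if tenant == "globex"
end

# Returns [migrated, not_yet_migrated] so the caller can see
# exactly which tenants are stranded on the old schema.
def migrate_all(tenants)
  done = []
  tenants.each do |t|
    begin
      migrate!(t)
      done << t
    rescue
      return [done, tenants - done]
    end
  end
  [done, []]
end
```

Any tenants left in the second list after a mid-push failure are still on the old schema while the rest run the new one, which is exactly the uncertain state being warned about.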

I'm not nearly so worried about new installs as I am about maintaining existing clients. I've been burned too many times over the years with weird update issues to take anything for granted. The idea of maintaining one big instance is much more palatable.

I'm convinced this can be done in a simple and effective manner.

I'm convinced you can have one of the two. In any case, if you do go this route, I hope you'll let us know how it works out. I'm genuinely curious.

Phlip wrote:

So you have wall-to-wall unit tests, right?

No, and neither do you. Even if you think you have :wink:

NeilW