AR3 Selects & Read Only :-(

Short version:

Is there any way to override the ReadOnly behavior of AR3 when a SELECT
clause is specified?

I suspect I know why that change was made, but I have a Rails conversion
/ legacy data situation where I really need those returned models to be
updatable.

Long version:

A collection of related apps sharing 4 databases, over 250 tables, with
some of those tables having over 200 fields. It was designed to suit dbm
philosophies not OO/ORM philosophies. From a Rails perspective they
should be broken down into smaller tables and a boatload of has_one
associations defined. But we can't do that.

While I am converting this one app to Rails, others needing to use the
DB will not be converted just yet. So, I have to leave the schema 99.9%
alone until after all apps are Rails, then we can refactor the schema.

I need a plan for evolving the models and schema over time. I'm hoping
to find ways to define small models that somehow use only a portion of
those large tables. ActiveRecord probably won't be very good at that
given the way it reflects on the table schema. So, having SELECTS not be
read only is critical.
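The kind of thing I'm picturing is several narrow models pointed at the same wide table, something like this (table and column names invented for illustration, AR3-era syntax):

```ruby
# Two hypothetical models, each working with a different slice of the
# same wide legacy table.
class PersonContact < ActiveRecord::Base
  set_table_name "master_records"
  default_scope select("id, name, email, phone")
end

class PersonBilling < ActiveRecord::Base
  set_table_name "master_records"
  default_scope select("id, billing_address, billing_city")
end
```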

I know I could use AR2 syntax, but I'd rather not if I don't have to. If
I do, then fine.

I'm looking at DataMapper too as it might allow me to have many classes
which define the use of only a subset of the fields from the large
tables.

Whichever of the two ORMs allows me the most efficient plan to evolve
the schema wins.

-- gw

Short version:

Is there any way to override the ReadOnly behavior of AR3 when a SELECT
clause is specified?

.readonly(false) ?

Fred

Frederick Cheung wrote:

Short version:

Is there any way to override the ReadOnly behavior of AR3 when a SELECT
clause is specified?

.readonly(false) ?

Dang it. How'd I miss that?

Oi. Thanks again Fred, you seem to always be the one catching my goofy
questions.

I will say though, these sections of the Guides don't mention that
(although I suppose I should have inferred that if true was an option,
false was an option too).

http://guides.rubyonrails.org/active_record_querying.html#selecting-specific-fields
http://guides.rubyonrails.org/active_record_querying.html#readonly-objects

OK, so maybe we can just be explicit with every query and set
readonly(false) or override .find somehow to always limit the SELECT
clause until the schema can be refactored...(fades off to go
experiment)...
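For instance, something like this (model and column names are hypothetical):

```ruby
# Explicit per-query: limit the SELECT clause but keep the record writable.
person = Person.select("id, first_name, last_name").readonly(false).first
person.first_name = "Greg"
person.save  # would otherwise raise ActiveRecord::ReadOnlyRecord

# Or bake it into the model so every query is narrowed by default:
class Person < ActiveRecord::Base
  default_scope select("id, first_name, last_name").readonly(false)
end
```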

-- gw

Greg Willits wrote:

Short version:

Is there any way to override the ReadOnly behavior of AR3 when a SELECT
clause is specified?

I suspect I know why that change was made, but I have a Rails conversion
/ legacy data situation where I really need those returned models to be
updatable.

Long version:

A collection of related apps sharing 4 databases, over 250 tables, with
some of those tables having over 200 fields. It was designed to suit dbm
philosophies not OO/ORM philosophies. From a Rails perspective they
should be broken down into smaller tables and a boatload of has_one
associations defined.

Why? There's nothing remotely non-OO about having 200 fields if that's
what's needed to define the object.

But we can't do that.

While I am converting this one app to Rails, others needing to use the
DB will not be converted just yet. So, I have to leave the schema 99.9%
alone until after all apps are Rails, then we can refactor the schema.

Define some database views (in the SQL sense, not the Rails MVC sense)
to expose a nicer interface to the Rails app. You can use the
rails_sql_views plugin to help with the migrations.
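A stripped-down sketch of that idea (view and table names made up; rails_sql_views provides a create_view helper, but a plain execute works too):

```ruby
# Hypothetical migration: expose a narrow, app-friendly view over a
# wide legacy table. The other, non-Rails apps keep using the table.
class CreatePeopleView < ActiveRecord::Migration
  def self.up
    execute <<-SQL
      CREATE VIEW people AS
        SELECT id, first_name, last_name, email
        FROM legacy_master_records
    SQL
  end

  def self.down
    execute "DROP VIEW people"
  end
end
```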

I need a plan for evolving the models and schema over time. I'm hoping
to find ways to define small models that somehow use only a portion of
those large tables. ActiveRecord probably won't be very good at that
given the way it reflects on the table schema.

Why can't you just not touch the fields you don't want to touch?

The solution here is in your DB, not in your ORM, I think.

Best,

A collection of related apps sharing 4 databases, over 250 tables, with
some of those tables having over 200 fields. It was designed to suit dbm
philosophies not OO/ORM philosophies. From a Rails perspective they
should be broken down into smaller tables and a boatload of has_one
associations defined.

Why? There's nothing remotely non-OO about having 200 fields if that's
what's needed to define the object.

While that may be technically true, in this case IMO, there are too many
topics fused into one table just because they don't need to be separated
according to DB normalization. IMO, normalization is a good reason to
split things, but the inverse is not always true (no need for
normalization is not a good reason to keep things together).

A car could be defined as just one object with 1000 fields, but it's
probably not a good idea.

Composition is a friend here IMO.

-- gw

Greg Willits wrote:

A collection of related apps sharing 4 databases, over 250 tables, with
some of those tables having over 200 fields. It was designed to suit dbm
philosophies not OO/ORM philosophies. From a Rails perspective they
should be broken down into smaller tables and a boatload of has_one
associations defined.

Why? There's nothing remotely non-OO about having 200 fields if that's
what's needed to define the object.

While that may be technically true, in this case IMO, there are too many
topics fused into one table just because they don't need to be separated
according to DB normalization. IMO, normalization is a good reason to
split things, but the inverse is not always true (no need for
normalization is not a good reason to keep things together).

But there's no reason to split stuff apart solely to avoid wide tables.

A car could be defined as just one object with 1000 fields, but it's
probably not a good idea.

Why not? I see this fear of wide tables/lots of ivars fairly often, and
I'm not convinced that it's really justified. If the data is properly
normalized, then it's also probably in decent object form.

(Of course, usually a decomposition will emerge by the time you have
that many fields. But not always.)

Composition is a friend here IMO.

If there's a reasonable decomposition, yes -- say, if there are
subsystems in the car, or if several of the fields are duplicated in
another table. But there's no advantage to composition just for
composition's sake.

-- gw

Best,

Marnen Laibow-Koser wrote:

Greg Willits wrote:

A collection of related apps sharing 4 databases, over 250 tables, with
some of those tables having over 200 fields. It was designed to suit dbm
philosophies not OO/ORM philosophies. From a Rails perspective they
should be broken down into smaller tables and a boatload of has_one
associations defined.

Why? There's nothing remotely non-OO about having 200 fields if that's
what's needed to define the object.

While that may be technically true, in this case IMO, there are too many
topics fused into one table just because they don't need to be separated
according to DB normalization. IMO, normalization is a good reason to
split things, but the inverse is not always true (no need for
normalization is not a good reason to keep things together).

But there's no reason to split stuff apart solely to avoid wide tables.

That's not my purpose.

If it just has to be (lots of fields) then it just has to be, but I
would expect that to be rare, and it is not necessary in my current
case. The table contains easily distinguishable topics, and the code
would be better organized, more readable, more orthogonal, simpler to
test, and plain simpler to maintain in smaller pieces along the lines of
the topics I see in this table.

A car could be defined as just one object with 1000 fields, but it's
probably not a good idea.

Why not? I see this fear of wide tables/lots of ivars fairly often, and
I'm not convinced that it's really justified. If the data is properly
normalized, then it's also probably in decent object form.

Because the magnitude of such a beast, with all that data and all the
validations and business rules that go with it, probably ends up
creating a monster of a pig to maintain (obviously no absolutes being
preached).

It's not a fear of wide tables. It's a simple recognition that more,
smaller pieces usually yield better code than one big piece -- IF that
table can be broken up into logical, orthogonal pieces. If it really is
200 fields of raw data with no apparent topical divisions, well, fine,
that sounds like breaking it up just to break it up. That's not the case
with my data.

-- gw

Greg Willits wrote:
[...]

If it just has to be (lots of fields) then it just has to be, but I
would expect that to be rare, and it is not necessary in my current
case. The table contains easily distinguishable topics, and the code
would be better organized, more readable, more orthogonal, simpler to
test, and plain simpler to maintain in smaller pieces along the lines of
the topics I see in this table.

[...]

Then yeah, breaking it down probably makes sense.

Best,