I'm writing a role-based security plugin for personal use in my apps.
The plugin will attach some helper methods to my models automatically
(when I run my new class method 'acts_as_a_foo'). Some of these
methods have names defined by data in one of my tables. I've got the
metaprogramming part working, but I've run into a timing problem that
reveals a flaw in my design.
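Roughly, the shape of what I've got is this (placeholder names only, and
I'm assuming a 'roles' association just for illustration):

# Sketch only: acts_as_a_foo, Role, and the can_<name>? helpers are
# placeholder names, not my real code.
module ActsAsAFoo
  def acts_as_a_foo
    # The helper names come from rows in the roles table, so that table
    # has to be readable at the moment the model class is defined.
    Role.all.map(&:name).each do |role_name|
      define_method("can_#{role_name}?") do
        roles.any? { |r| r.name == role_name }
      end
    end
  end
end

ActiveRecord::Base.extend(ActsAsAFoo)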
It's most obvious in the test environment: I need my model
classes declared before I can load fixtures, but the data from those
fixtures is needed before I can define my models because that's when I
generate all of my method names. I've convinced myself that the
timing problem exists in other environments even when fixtures aren't
a factor.
I think I'm going to go with plan B (store this data in a config file
and not in the DB), but I'd like advice from anyone who has done
something like this successfully in the past. Has anyone used data
from the DB to drive the structure of models?
My only other idea was some sort of complex bootstrapping where I
declare all of my models with their associations, then I load
fixtures, then reopen all of the model classes and continue with their
implementation. I'm rejecting this as too complicated, especially for
a plugin responsible for security where I want to make testing as
simple as possible.
The other thought I had was that I could override method_missing and
delay adding the new methods until they are actually used for the
first time. This definitely avoids the timing errors, but seems a
little strange.
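Something like this is what I was picturing (placeholder names again):

# Sketch with placeholder names (Account, Role, can_<name>?).
class Account < ActiveRecord::Base
  has_and_belongs_to_many :roles

  # Define the helper the first time it's called, so the roles table
  # isn't consulted until a generated method is actually used.
  def method_missing(name, *args, &block)
    if (m = name.to_s.match(/\Acan_(\w+)\?\z/)) && Role.exists?(:name => m[1])
      role_name = m[1]
      self.class.send(:define_method, name) do
        roles.any? { |r| r.name == role_name }
      end
      send(name)
    else
      super
    end
  end
end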
The problem you're having with testing is that your plugin is tightly
coupled to the source of the data. If the plugin itself didn't know the
source, or could delegate the task of loading the data to someone else
(something like a RolesDataProvider), you could keep the dev and
production environments using the database as a data source and have the
test env use a file to load this data (this could also make your tests
run faster).
When loading your application, you could just configure the default
provider for your plugin and generate the methods with all the data
available.
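Something like this is what I have in mind (just a sketch; the provider
interface and the SecurityPlugin name are made up):

require 'yaml'

# Both providers expose the same tiny interface, so the plugin never
# needs to know where the role names actually come from.
class DatabaseRolesProvider
  def role_names
    ActiveRecord::Base.connection.select_values("SELECT name FROM roles")
  end
end

class FileRolesProvider
  def initialize(path)
    @path = path
  end

  def role_names
    YAML.load_file(@path)  # e.g. a plain YAML array of role names
  end
end

# e.g. in config/environments/test.rb (made-up setter):
#   SecurityPlugin.roles_provider = FileRolesProvider.new("test/roles.yml")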
Also, avoid overriding method_missing, as it leads to all kinds of
trouble (the first being that if you override method_missing you also
have to override respond_to? to keep the interface consistent). And just
eval'ing the methods is also faster.
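To show what I mean about keeping the interface consistent, you'd end up
maintaining a pair like this (role_method? and check_role are made-up
helpers for the example):

def method_missing(name, *args, &block)
  # role_method? and check_role are hypothetical helpers.
  role_method?(name) ? check_role(name) : super
end

# Without this, obj.respond_to?(:can_admin?) answers false even though
# the call itself would succeed, so the object's interface lies.
def respond_to?(name, include_private = false)
  role_method?(name) || super
end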
... I've run into a timing problem that
reveals a flaw in my design.
Maybe.
It's most obvious in the test environment: I need my model
classes declared before I can load fixtures, but the data from those
fixtures is needed before I can define my models because that's when I
generate all of my method names.
Models do not factor into the picture WRT fixture loading. Fixture data
does not 'pass thru' the model's validations.
The problem you're having with testing is that your plugin is tightly
coupled to the source of the data. If the plugin itself didn't know the
source, or could delegate the task of loading the data to someone else
(something like a RolesDataProvider), you could keep the dev and
production environments using the database as a data source and have the
test env use a file to load this data (this could also make your tests
run faster).
This is good insight. I'll give this some more thought. I'm not
concerned about testing time since it only takes a few seconds to run
all of my unit tests at the moment. But clearly if I put together a
data provider with minimal dependencies I can grab what I need from
any source before my models are defined. This shouldn't be
significantly more work than my existing "plan B", and it has the
advantage that I can switch back to the DB as a source of this
information later on (or even use a combination of the two).
Also, avoid overriding method_missing, as it leads to all kinds of
trouble (the first being that if you override method_missing you also
have to override respond_to? to keep the interface consistent). And just
eval'ing the methods is also faster.
Good to know. It just sounds like something that's easy to get
wrong. I won't go down this route for something this important to get
right.
I would not have expected that models factored in either, until last
night.
First of all, the foxy fixtures for HABTM definitely depend on having
models in order to get the SQL query right. Which makes sense to me
-- the table I'm using to test doesn't have 'something' or
'something_id' columns so as far as it can tell I'm providing extra
data that doesn't fit into the table. It's not until models and
associations are loaded that it becomes apparent what this information
is for.
I also ran into trouble with 'fixtures :foos', which ensures that the
table has a corresponding class, Foo. I don't recall whether this had to be
a subclass of ActiveRecord::Base (or even whether I checked this), but
obviously this is normally a model.
Your point is well taken though -- I could be using other means to
seed my test data, or even just bypassing some of the convenience of
the existing fixture infrastructure. But my testing problems were
really just the canary; I didn't explain it well in my initial post,
but there really are problems with my design. Specifically, I had
planned to treat the tables with security data just like any other
table in my app, with a corresponding model, and protected with my
security plugin. When I focus on this one table the problem is clear:
after my model is defined, I need to use it to access data which is a
prerequisite to defining models (including itself). I was too
concerned with the big picture to see the obvious paradox. Luckily I
caught this right away because I couldn't test it.
Mauricio makes a good observation about being tightly coupled to my
source of data. If I abstract this a bit I have more flexibility and
I can still load my data from a DB (just at an earlier time and
without pushing it into a model), or I can go with some sort of
configuration (probably writable class attributes for some
configuration class that can be set in an initializer). Getting my
data from the DB wasn't bad -- trying to smash it into the
ActiveRecord architecture was my mistake.
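Something along these lines is what I'm picturing (a sketch, with made-up
names and an example initializer path):

# Writable class attribute holding the role names the plugin needs
# before any models are defined.
module SecurityPlugin
  class Config
    class << self
      attr_accessor :role_names
    end
  end
end

# e.g. config/initializers/security_roles.rb
SecurityPlugin::Config.role_names =
  ActiveRecord::Base.connection.select_values("SELECT name FROM roles")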
This is the more typical Rails idiom - it's how ActiveRecord generates
all kinds of things, from dynamic finders (find_all_by_foo) to
attribute methods.
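A simplified sketch of that shape (not the actual ActiveRecord source):

def self.method_missing(name, *args, &block)
  if name.to_s =~ /\Afind_all_by_(\w+)\z/
    # Real ActiveRecord goes further and generates a matching finder so
    # later calls skip method_missing entirely.
    find(:all, :conditions => { $1.to_sym => args.first })
  else
    super
  end
end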