Issues running the mysql gem

Hi, all.

After unbelievable difficulty, I finally have the mysql gem installed without warnings, with the following components:

Mac OS X 10.6.2 (Snow Leopard)
ruby 1.8.7 (2008-08-11 patchlevel 72) [universal-darwin10.0]
Rails 2.3.5
mysql gem 2.8.1
MySQL Server 5.1.44

I even rebuilt my whole project a few minutes ago, thinking an unsuccessful previous install might have failed to generate my database. Yet when I run rake db:migrate, I get the following errors:

iMacMike:pfmpe mikemontagne$ rake db:migrate --trace
(in /Users/mikemontagne/rproj/pfmpe)
** Invoke db:migrate (first_time)
** Invoke environment (first_time)
** Execute environment
** Execute db:migrate
rake aborted!
Unknown database 'pfmpe_development'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:589:in `real_connect'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:589:in `connect'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:203:in `initialize'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `new'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `mysql_connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up'
/Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate'
/Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:617:in `call'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:617:in `execute'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:612:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:612:in `execute'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:578:in `invoke_with_call_chain'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:571:in `invoke_with_call_chain'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:564:in `invoke'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2019:in `invoke_task'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1997:in `top_level'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1997:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1997:in `top_level'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2036:in `standard_exception_handling'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1991:in `top_level'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1970:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2036:in `standard_exception_handling'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1967:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/bin/rake:31
/usr/bin/rake:19:in `load'
/usr/bin/rake:19

You have to create your databases regardless of what DB you're using -- as I just pointed out in your previous thread.

Are you reading any of the responses?

Sorry for this. It's my first go-around with RoR/MySQL. Got it:

rake db:create RAILS_ENV='development'

Then:

rake db:migrate --trace
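(For anyone who trips over this later: db:create works out what to create from the development entry in config/database.yml, so that file has to point at your MySQL server first. Mine looks roughly like the following -- the credentials here are specific to my setup, adjust to yours:)

development:
  adapter: mysql
  encoding: utf8
  database: pfmpe_development
  username: root
  password:
  host: localhost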

Hassan Schroeder wrote:

Are you reading any of the responses?

-- Hassan Schroeder ------------------------ hassan.schroeder@gmail.com twitter: @hassan

:-)

Sorry, Hassan. I hadn't seen your response yet. But before I came back here, I read the whole thread.

Thanks for the answer, and let me ask you one thing then:

What's the basic procedure for local development and deployment of the tables? Do you suffer the database-agnostic field definitions of the local development environment (db:create and db:migrate); or, wanting to test with real field definitions, do you go ahead and refine your table declarations locally, as you would in what you deploy to the server?

Thanks,

mike

I'm not sure what you mean by "real field definitions" -- can you go into a little more detail? What is it you think you can't do within the scope of migrations?

And "deploy to the server" can cover a lot of different scenarios, so you might want to describe what you mean by that, too.

Hassan Schroeder wrote:

What's the basic procedure for local development and deployment of the tables? Do you suffer the database-agnostic field definitions of the local development environment (db:create and db:migrate); or, wanting to test with real field definitions, do you go ahead and refine your table declarations locally, as you would in what you deploy to the server?

I'm not sure what you mean by "real field definitions" -- can you go into a little more detail? What is it you think you can't do within the scope of migrations?

And "deploy to the server" can cover a lot of different scenarios, so you might want to describe what you mean by that, too.

-- Hassan Schroeder ------------------------ hassan.schroeder@gmail.com twitter: @hassan

1. "Real field definitions":

There's a big difference between scaffolding:

script/generate scaffold tablename amendment_id:integer first_name:string middle_name:string last_name:string anonymous_signature:boolean email_address:string email_bounces:integer referrer_email:string referrer_id:integer address1:string address2:string city:string c_code:string state_province_or_territory:string zip_or_postal_code:string comment:string

And the usual SQL route to creating a table:

CREATE TABLE tablename (
  id integer NOT NULL AUTO_INCREMENT PRIMARY KEY,
  ip integer,
  amendment_id integer,
  first_name varchar(25),
  middle_name varchar(25),
  last_name varchar(30),
  anonymous_signature tinyint,
  email_address varchar(100),
  email_bounces tinyint,
  referrer_email varchar(100),
  address1 varchar(100),
  address2 varchar(100),
  city varchar(25),
  c_code varchar(2),
  state_province_or_territory varchar(2),
  zip_or_postal_code varchar(10),
  comment varchar(125),
  created_at datetime,
  updated_at datetime
) ENGINE = INNODB;

With the former, you're not even passing the environment the information to build the tables you want. It's critical, then, to convey your intended design to deployment. A varchar(25) (the intended field definition) requires a tenth of the disk resources of a string field (which translates to varchar(255)).

Migrating the database-agnostic definitions of scaffolding, then, does not produce the intended schema.

2. Hence the concern about migrating the scaffold schema's limited means of defining an intended schema through automated deployment processes, which still lack even the data from which your intended schema could be deduced -- you've never provided it.

So, what I'm asking is, how do you usually handle these issues in RoR? Do you just tolerate the consequences of scaffolding; or do you test and develop on your intended schema; and, if you modify the schema to your intentions with additional tools, is that preserved or destroyed by further migrations?

My inclination is to want to revise/refine the resultant schema to my intended table design. I'm also concerned about how RoR deployment might inadvertently affect my schema on a destination server. How do you ensure migrations to a production table don't affect the intended schema of installed tables?

These are regular concerns for anyone with a background in RDBMS, just now getting their feet wet in RoR.

Thanks,

m

Let it go, at least for a short vacation :-)

Do a quick search on "premature optimization".

Consider that the whole point of an agile framework like Rails is to facilitate getting features to customers, quickly.

Having to optimize your app to handle too-many-page-views is a good problem, but you can deal with it closer to the actual event.

Hassan Schroeder wrote:

These are regular concerns for anyone with a background in RDBMS, just now getting their feet wet in RoR.

Let it go, at least for a short vacation :-)

Do a quick search on "premature optimization".

Consider that the whole point of an agile framework like Rails is to facilitate getting features to customers, quickly.

Having to optimize your app to handle too-many-page-views is a good problem, but you can deal with it closer to the actual event.

-- Hassan Schroeder ------------------------ hassan.schroeder@gmail.com twitter: @hassan

No sarcasm intended... but I'm not about to buy the idea that "premature optimization" *is* the root of all evil; nor that starting out with obtuse translations of data types, which broadly impose a tenfold *overusage* of resources, is "agile" when the necessary field-type definitions are obvious from the start. "Agility" (IMO) would be to broadcast those obvious definitions to every phase of your design -- at least so that you aren't forced, with great grief, to revise them later. After the short vacation then, what's the solution? We sit around at our desks and exult in the fact that we proved a theory of "premature optimization"?

No way. If it's worth doing, it's worth doing right. We're not talking about "premature" optimization. We're talking about starting out with a concept which will get us there with the least stress all the way. That goal isn't a challenge to be deferred. It's a fundamental and indispensable building block of *not* making a small job a lifetime job.

I'm not saying this just to argue. I don't care if you dispense with database design right out of the gate, even though it's such a basic issue that any reasonably skilled RDBMS developer can envision that design from the start (with little downstream refinement required, at least)... and *real agility* would be to record that prescription in their "agile framework" *once*, from the beginning, so that it's a job done *and* they *can* proceed "agilely" from the beginning.

Let's not argue about this; let's get at the answers -- become truly agile as agile can be.

No one betters the path to the end by purposely starting off with a wrong principle, which a) simply multiplies work at least because it has to be straightened out somewhere; and which b) circumvents a whole set (usually a huge one) of facts which as a consequence, are never "tested" under real conditions. The assumption that the latter is "premature optimization" would be a huge contributing factor to what is wrong with software today. How does my "agile" application react to 3-character input to an intended (and obvious) 2-character field, when my "agile" (incoherent?) development environment simply won't accept (or will it, which was my question) the 2-character field definition -- instead translating it to a whopping 255 characters? (!)

The answer to my question should not only be easy... it should be a fundamental skill. You mean nobody cares? A field has to handle 2-character country codes for instance, and every "agile" developer simply lives with 255-character, auto-translated field definitions, to "deal with" that later -- on a living system?

No way.

I was hoping you were going to tell me that all I have to do is, instead of:

  def self.up
    create_table :amendments do |t|
      t.string :c_code
      t.string :c_name
      t.integer :tran_id
      t.integer :priority

      t.timestamps
    end
  end

I could replace this with some syntax for executing SQL (which I'm about to look up all the same):

  def self.up
    create_table :amendments do |t|
      t.[execute_sql] :[CREATE TABLE amendments (id integer NOT NULL AUTO_INCREMENT PRIMARY KEY,
        c_code varchar(2), c_name varchar(35), tran_id integer, priority tinyint,
        created_at datetime, updated_at datetime) ENGINE = INNODB;]
    end
  end

So this would fit my concept of "agility," because the design is obvious, and because we do this obvious chore once -- eliminating the unnecessary downstream work.

My concerns about this still are: does our agile framework want/insist upon creating the timestamp fields (which this SQL CREATE would produce)? Does the framework itself produce the id field also? Do we truncate these fields from our SQL statement? Can I just change the template code to such an SQL statement and BE agile?

That's my question. I don't want to start out obviously needing to plan for 2-character fields, and then suffer whatever downstream ramifications of an "agility" which requires that I alter my table structure to an obvious, intended structure (as opposed to "premature optimization")... with all the resultant, unnecessary difficulties of doing so on a "production" database, *hobbled* by a concept that declaring and working with merely obvious principles is "premature optimization."

So let's not argue about this. I should be able to *start* with my intended design far faster than I can *eventually* modify a wrong design -- imposed in the name of "agility" -- to the right one (which in the least, is twice the work).

Regards,

mike

If you don't like the field definitions generated by the scaffold then you can edit the migration before you run it, or run another migration later. Is there anything in your sql table creation that cannot be specified in migration code?
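For example, a scaffold-generated migration might be edited to something like this before you run rake db:migrate (a sketch using a few of your column names, not tested here):

  def self.up
    create_table :amendments do |t|
      t.string  :c_code,   :limit => 2    # becomes varchar(2) in MySQL
      t.string  :c_name,   :limit => 35   # becomes varchar(35)
      t.integer :priority, :limit => 1    # becomes tinyint
      t.timestamps
    end
  end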

Colin

In the case of MySQL I believe that the length of a varchar only specifies the maximum length; it does not affect the actual space used in the db, as the data is stored as variable-length records. I don't know about PostgreSQL.

Colin

Colin Law wrote:

With the former, you're not even passing the environment the information to build the tables you want. It's critical, then, to convey your intended design to deployment. A varchar(25) (the intended field definition) requires a tenth of the disk resources of a string field (which translates to varchar(255)).

In the case of MySQL I believe that the length of a varchar only specifies the maximum length; it does not affect the actual space used in the db, as the data is stored as variable-length records. I don't know about PostgreSQL.

Colin

If that were the case, there would be little if any reasonable purpose in specifying VARCHAR() length. What you say may apply to the various text field types, as a buffer of the maximum length would handle a field of any length up to the maximum length... and the data could (best) be stored as the variable length stream -- thus conserving disk space.

As to your preceding post:

If you don't like the field definitions generated by the scaffold then you can edit the migration before you run it, or run another migration later.

No can do:

Scaffold will accept syntax like this:

script/generate scaffold amendment c_code:string c_name:string tran_id:integer priority:integer

It will not accept syntax like this:

script/generate scaffold amendment c_code:varchar(2) c_name:varchar(35) tran_id:tinyint priority:tinyint

Nor can you change the field prescriptions in the xxxxdatetimexxxx_amendments.rb schema.

Is there anything in your sql table creation that cannot be specified in migration code?

All of the above.

Worse, I have to support huge traffic and humongous amounts of data which make it absolutely essential to conserve disk volume. If deployment happened to translate these incoherent field definitions to my actual production tables, I'd be in huge trouble.

But these are just simple, basic concerns for any implementation.

Does anybody handle these issues?

I'm not asking for the sake of a discussion. Here's something like what I would like to do in the xxxxdatetimexxxx_amendments.rb schema for instance:

class CreateAmendments < ActiveRecord::Migration
  def self.up
    # bypass the schema DSL and issue the CREATE TABLE directly
    connection = ActiveRecord::Base.connection()
    connection.execute("CREATE TABLE amendments (id integer NOT NULL AUTO_INCREMENT PRIMARY KEY, c_code varchar(2), c_name varchar(35), tran_id integer, priority tinyint, created_at datetime, updated_at datetime) ENGINE=InnoDB;")
  end
end

So this is a matter of a simple question:

Is it feasible to do things this way?

To answer that question to the standard the answer will be tested against means knowing what the ramifications and requisites of doing so are. Do I, for instance, need to specify the id field and "timestamps" fields (which are actually, of course, DATETIME values)?

This sort of technique is what I envisioned having to do to preserve the goals of my design schema. What we're calling "agile" abstracts the processes so far away from us that the abstraction is a huge interference with doing the job right -- unless of course someone can answer this simple question: is it feasible to do things this way?

If not -- that's certainly not agility. End of story.

If so -- how? What do we have to do to negotiate our intentions throughout design, testing, and deployment?

Regards,

mike

As others have posted, modify your migration file before running it if you are concerned about the size of varchars.

For example, you can do the following,

def self.up
  create_table :mytable, :force => true do |t|
    t.string :field_name, :null => false, :limit => 50, :default => "Some value"
    ...
  end
end

Look into the foreign_key_migrations gem if you want to setup PK/FK relationships in your migrations. I have not discovered anything I can't handle in a migration for table creation/alteration.
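If the storage engine matters to you as well, create_table accepts an :options string that is appended to the generated CREATE TABLE statement -- a sketch, check the API doc for your Rails version:

  create_table :amendments, :options => "ENGINE=InnoDB" do |t|
    t.string :c_code, :limit => 2
    t.timestamps
  end

(If I recall correctly, the MySQL adapter in recent Rails versions defaults new tables to InnoDB anyway.)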

E. Litwin wrote:

As others have posted, modify your migration file before running it if you are concerned about the size of varchars.

For example, you can do the following,

def self.up
  create_table :mytable, :force => true do |t|
    t.string :field_name, :null => false, :limit => 50, :default => "Some value"
    ...
  end
end

Look into the foreign_key_migrations gem if you want to setup PK/FK relationships in your migrations. I have not discovered anything I can't handle in a migration for table creation/alteration.

Primary and foreign key migration is supported. It's not the issue.

Have you found that you can migrate varchar() field types as varchar() field types, instead of strings?

If so, how? That would answer the question.

They are apparently vital to you; do not assume that's true of every developer and every situation.

Yes, you can use your own custom SQL in a migration. If you are so focused on this level of control I have to wonder why you haven't (apparently) read the API doc for ActiveRecord::Migration.

The downside (of at least the examples you've used) is that you've tied yourself to a specific DB vendor's product. I'm currently working on more instances than not where multiple vendors are involved.

Leaving the DB description in Rails' abstract form means I can work with those different DBs easily. That's much more important to *my* projects than saving 253 bytes of disk space somewhere.

FWIW,

Hassan Schroeder wrote:

So the question remains, how do you support migration of these vital field types?

They are apparently vital to you; do not assume that's true of every developer and every situation.

Yes, you can use your own custom SQL in a migration. If you are so focused on this level of control I have to wonder why you haven't (apparently) read the API doc for ActiveRecord::Migration.

The downside (of at least the examples you've used) is that you've tied yourself to a specific DB vendor's product. I'm currently working on more instances than not where multiple vendors are involved.

Leaving the DB description in Rails' abstract form means I can work with those different DBs easily. That's much more important to *my* projects than saving 253 bytes of disk space somewhere.

FWIW, -- Hassan Schroeder ------------------------ hassan.schroeder@gmail.com twitter: @hassan

Of course, you recognize the importance then -- "premature optimization" is not about accounting for your issues to the depth of the first 253 bytes... half a dozen columns of half a dozen tables -- some of which may contain up to dozens of millions of records. It's not about 253 bytes. It's about 253 bytes per field, per so many records, per so many tables.

As to why you're merging tables from multiple vendors, I have no idea. But certainly, unless a vendor doesn't support a vital field type (in which case your vendor would be chosen for the support you need), there's no limitation imposed by specifying a data schema you would want to preserve for a great variety of reasons; nor would a RoR ability to sustain a needed schema inhibit your development capabilities.

On the contrary, *only if you accept the cost of 253 bytes on whatever the scale, at every turn*, would the ability to sustain an intended data schema inhibit your project development -- and then, only to the degree of specifying the translated field types, instead of the specific types most of us require for better reasons.

Of course, you recognize the importance then -- "premature optimization" is not about accounting for your issues to the depth of the first 253 bytes... half a dozen columns of half a dozen tables -- some of which may contain up to dozens of millions of records.

I do lots of small projects, where tables may contain, oh, *dozens* of records, maybe *hundreds* :-)

Your concern about disk usage simply doesn't apply.

As to why you're merging tables from multiple vendors, I have no idea.

I'm not talking about "merging tables", I'm talking about running the app on different DBs.

That's simply incompatible with your examples, and is why abstraction of DB characteristics is a good thing for many developers.

Hassan Schroeder wrote:

I have to wonder why you haven't (apparently) read the API doc for ActiveRecord::Migration.

FWIW, -- Hassan Schroeder ------------------------ hassan.schroeder@gmail.com twitter: @hassan

It's your assumption that I'm not studying the API documentation. I've had the pages open for days, and I just posted some material from the API documentation. Your point would be better made if, having studied the API documentation and knowing how it was possible to implement the usual goals of database schemas, you cited what I've missed.

I'm waiting for a reply from a developer in India on one possibility (he might know).

The closest I've found in the API documentation so far is type_to_sql in ActiveRecord::ConnectionAdapters::MysqlAdapter.

Here's its source implementation:

# File lib/active_record/connection_adapters/mysql_adapter.rb, line 493
def type_to_sql(type, limit = nil, precision = nil, scale = nil)
  return super unless type.to_s == 'integer'

  case limit
  when 1; 'tinyint'
  when 2; 'smallint'
  when 3; 'mediumint'
  when nil, 4, 11; 'int(11)'  # compatibility with MySQL default
  when 5..8; 'bigint'
  else raise(ActiveRecordError, "No integer type has byte size #{limit}")
  end
end
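If I'm reading that right, an integer column's :limit selects the MySQL integer type, e.g. (my own inference, untested):

  t.integer :priority, :limit => 1   # tinyint
  t.integer :tran_id                 # int(11), the default
  t.integer :counter,  :limit => 8   # bigint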

That's a far cry from handling all the field types, isn't it?

Or is it?

Do you know how to implement that to produce VARCHAR() lengths anywhere from 1..255, even if it isn't important to you?

Why make this a spitting match? Either you know how to do this or you don't.

Hassan Schroeder wrote:

It's your assumption that I'm not studying the API documentation.

Just from the fact that you seemed in doubt whether you could use raw SQL in a migration; the doc has an example of doing just that.
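Along the lines of the example there, a raw-SQL migration looks something like this -- a sketch, with the table and columns borrowed from your earlier posts:

class CreateAmendments < ActiveRecord::Migration
  def self.up
    # execute is forwarded to the current connection inside a migration
    execute <<-SQL
      CREATE TABLE amendments (
        id int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
        c_code varchar(2),
        c_name varchar(35),
        tran_id int(11),
        priority tinyint,
        created_at datetime,
        updated_at datetime
      ) ENGINE=InnoDB
    SQL
  end

  def self.down
    drop_table :amendments
  end
end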

Do you know how to implement that to produce VARCHAR() lengths anywhere from 1..255, even if it isn't important to you?

Why make this a spitting match? Either you know how to do this or you don't.

Who's making it a "spitting match"? I do know, and it's in the document I *already pointed you at* -- the ActiveRecord::Migration API.

Simple example:

class CreateDogs < ActiveRecord::Migration
  def self.up
    create_table :dogs do |t|
      t.string :name, :limit => 100
      t.string :nickname, :limit => 242
      t.string :weight, :limit => 5
      t.string :country, :limit => 2

      t.timestamps
    end
  end

  def self.down
    drop_table :dogs
  end
end

show create table dogs\G

*************************** 1. row ***************************
       Table: dogs
Create Table: CREATE TABLE `dogs` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
  `nickname` varchar(242) COLLATE utf8_unicode_ci DEFAULT NULL,
  `weight` varchar(5) COLLATE utf8_unicode_ci DEFAULT NULL,
  `country` varchar(2) COLLATE utf8_unicode_ci DEFAULT NULL,
  `created_at` datetime DEFAULT NULL,
  `updated_at` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
1 row in set (0.00 sec)
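And FWIW, per the TableDefinition#column doc (double-check your version), a type name the abstraction doesn't recognize is passed through to the database verbatim -- its example is MySQL's polygon -- so vendor-specific column types are reachable without dropping to raw SQL:

  t.column :area, 'polygon'   # vendor-specific type, emitted as-is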