I think the article is reasonable, and I saw that DHH removed Capistrano from the Rails 6 Gemfile.
I know it’s difficult, but is there any chance deployment could be abstracted into an “Active Deployment”? And after removing Capistrano, which approach does the Rails team suggest?
I think we should give developers the freedom to choose. Nowadays not everyone sets up a plain Linux server with automated deployment; some people deploy with Heroku, Docker, or other services. So adding Active Deployment could increase the weight of the Rails framework.
If we leave the deployment part to DevOps engineers, it will decrease the workload of the Rails core team and allow them to focus on feature development.
I’m interested in exploring better defaults for deployment going forward. Many of the fundamentals have improved dramatically, whether they be simply Docker, Kubernetes, cloud providers at large, or Heroku. So we’ll explore more in this space, but there’s nothing concrete at the moment.
Maintaining Rails applications both on Heroku (cloud) and on-premises, I completely understand the frustrations in the post above. That being said, I don’t see an easy way for Rails to manage external services such as databases, Redis, Elastic, etc. Maybe one area where Rails could help is the ability to “package” the app.
There is an interesting project called pkgr (https://github.com/crohr/pkgr) that can help, but I still think improvements in this area can be explored. One nice benefit of “packaging” could also be securing (encrypting) the app package.
Thanks for sharing that article. I agree that app deployment could benefit from Rails’ convention over configuration philosophy.
There may be some low-hanging fruit for new apps, though. For example, here are a few of the things on my to-do checklist before deploying a new app for the first time:
1) Modify database.yml to use ENV["DATABASE_URL"] in production.
2) Set Rails.application.default_url_options[:host] in production.
2b) Set Rails.application.default_url_options[:host] = "localhost" and [:port] = "3000" in development and test.
3) Set Rails.application.config.force_ssl = true in production.
3b) Set Rails.application.default_url_options[:protocol] = "https" to match force_ssl.
4) Set Rails.application.config.action_mailer.default_options[:from] = "no-reply@#{Rails.application.default_url_options[:host]}".
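Collected in one place, the checklist above might look roughly like this (a sketch only; "example.com" is a placeholder host, and the exact file locations vary):

```ruby
# Sketch: the manual steps above, gathered into Rails config.
# "example.com" stands in for the real application host.

# config/environments/production.rb
Rails.application.configure do
  config.force_ssl = true
  config.action_mailer.default_options = { from: "no-reply@example.com" }
end

# e.g. in an initializer
Rails.application.default_url_options[:host]     = "example.com"
Rails.application.default_url_options[:protocol] = "https"
```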
Rails might be able to take care of these things for new apps, out-of-the-box. To elaborate…
1) The default database.yml mentions ENV["DATABASE_URL"] in a comment, but uses :database, :username, and :password placeholder values instead. Maybe these could be swapped, i.e. use ENV["DATABASE_URL"] by default, and mention in a comment the option of setting values individually. Or, possibly use ENV["DATABASE_URL"] with an interpolated fallback string, e.g. url: <%= ENV['DATABASE_URL'] || "..." %>.
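For example, the swapped production section of database.yml might look roughly like this (a sketch; the commented-out names are placeholders, not real defaults):

```yaml
# Sketch: prefer DATABASE_URL, with individual values available in comments.
production:
  url: <%= ENV["DATABASE_URL"] %>
  # Or set values individually instead (placeholder names):
  # database: my_app_production
  # username: my_app
  # password: <%= ENV["MY_APP_DATABASE_PASSWORD"] %>
```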
2) Maybe Rails could designate an official environment variable that would populate default_url_options[:host], e.g. ENV["APP_DOMAIN"] or ENV["APPLICATION_HOST"]. This does shift the problem from modifying a Rails config file to setting an environment variable, but perhaps, eventually, deployment scripts and hosting providers (e.g. Heroku) could standardize and take care of that automatically.
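To illustrate the idea, here is a minimal sketch of how such a variable could populate the options. APPLICATION_HOST is the suggested, not an official, name, and the localhost/3000 fallback mirrors item 2b above:

```ruby
# Sketch: derive default_url_options from a designated environment variable.
# APPLICATION_HOST is a hypothetical name, not an existing Rails convention.
def default_url_options_from(env)
  host = env["APPLICATION_HOST"] || "localhost"
  options = { host: host }
  # Mirror the development/test fallback: a local host implies port 3000.
  options[:port] = 3000 if host == "localhost"
  options
end
```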
2b) A default default_url_options[:host] in development and test would allow url_for (et al) to just work when previewing or testing mailer templates.
3) In a post-Let’s Encrypt world, force_ssl = true might be a reasonable default. Heroku also offers free SSL certificates on paid dynos, as does AWS if you use an Elastic Load Balancer.
4) default_options[:from] also affects gems that send mail, like Devise. So providing a default potentially saves multiple extra steps of configuration. And “no-reply” seems like a standard choice.
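As a sketch, deriving that default could be as simple as building the address from the configured host:

```ruby
# Sketch: build a "no-reply" From: address from the configured host.
def default_from_address(host)
  "no-reply@#{host}"
end
```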