What is best practice in my case for admin / Rails users on the server? Setup: VPS, Rails behind Apache2 via proxy, Puma as a systemd service, deploys via git push from my local machine. (I don't use Docker.)
On the server I have one admin sudo user. Do I add a user that owns the Rails app and the git repo? Does that user get a login shell or sudo? And where can I read up on this? I want to do the right thing, thank you.
Deploying is just a push from local: it lands in the git repo on the server, a post-receive hook checks it out to the app in /var/www/ and restarts the Puma service.
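For reference, a hook of that shape can be sketched like this. Everything here is an assumption for illustration, not the poster's actual setup: a bare repo (e.g. in the git user's home), checkout dir /var/www/myapp, deploy branch "main", and a sudoers rule that allows only the one restart command.

```shell
#!/bin/sh
# Sketch of a post-receive hook in a bare repo. Hypothetical names:
# checkout dir /var/www/myapp, branch "main", passwordless sudo scoped
# to `systemctl restart puma.service`.
APP_DIR="${APP_DIR:-/var/www/myapp}"
BRANCH="${BRANCH:-main}"

handle_ref() {
  # git feeds one "oldrev newrev refname" line per pushed ref on stdin;
  # only react to the deploy branch.
  [ "$1" = "refs/heads/$BRANCH" ] || return 0
  echo "deploying $BRANCH to $APP_DIR"
  if [ -z "${DRY_RUN:-}" ]; then
    git --work-tree="$APP_DIR" checkout -f "$BRANCH"
    sudo /usr/bin/systemctl restart puma.service
  fi
}

# Hooks receive ref updates on stdin; the tty guard keeps the file
# safely sourceable for testing.
if [ ! -t 0 ]; then
  while read -r _old _new ref; do
    handle_ref "$ref"
  done
fi
```

DRY_RUN=1 lets you check which refs would trigger a deploy without touching the work tree or the service.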
Rails has been pushing Kamal as the deploy mechanism. I tried it, it worked, and I'm now convinced.
As for machine setup, I use cloud-init to install the packages and create the users (my Rails user has UID 1000), and I make sure the /srv directories are writable by that user, also in cloud-init.
If you're starting from scratch or evaluating a new method, I would definitely look at Kamal.
Thank you for taking the time. I looked at Kamal and I think it needs Docker to work? My Rails site is really small and uses sqlite3 in production, so I'm happy with just local push, post-receive hook, restart Puma, and done.
I was just wondering if my setup is OK: one git user that owns the Rails app and the repo, with passwordless sudo for that single puma.service restart command. And of course I'm interested in how others set up their Rails admin users in similar scenarios (Apache2 as systemd-managed proxy in front of a puma.service).
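For what it's worth, a passwordless rule like that can be scoped to exactly one command in a sudoers drop-in. User and unit names here are placeholders, not the poster's actual config:

```
# /etc/sudoers.d/gituser-puma  (edit with: visudo -f /etc/sudoers.d/gituser-puma)
gituser ALL=(root) NOPASSWD: /usr/bin/systemctl restart puma.service
```

Pinning the full command path means the user cannot run any other systemctl action without a password.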
This is how I’d do it: create a dedicated Unix user per Rails app (e.g. myapp) to run the app processes. Use a separate deploy user with SSH access for deployments. The app user should have no SSH access and can use /usr/sbin/nologin as shell. Run app services (e.g. Puma, Sidekiq) as the app user via systemd or Docker (e.g. with Kamal).
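That two-user split can be expressed at provision time, e.g. in a cloud-init users: section like the one mentioned upthread. Names and the SSH key below are placeholders:

```yaml
users:
  - name: myapp              # runs Puma/Sidekiq; no SSH, no login shell
    shell: /usr/sbin/nologin
    system: true
  - name: deploy             # SSH target for deployments only
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... deploy@laptop
```

The app user never appears in sshd's world at all; the deploy user has a shell but owns nothing the running app can write to.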
Ask yourself: what would an attacker gain from remote command execution as the user running your Rails application? On an AWS EC2 instance, it could mean access to any AWS commands your instance role has permission to execute. That has far wider implications than a machine in your closet that isn't connected to anything important. You probably already know that security comes in layers, and your strategy should be based on the risks involved and the level of commitment you have to mitigating those risks. Creating a new user with hardened permissions is low commitment and yields significant security benefits. It's typically a one-time task, and it limits the damage of something like Open3.capture3("rm -rf /") if an attacker somehow manages to exploit a vulnerability in your Rails application to execute such a command (very unlikely in a standard CRUD app).
Yes, Kamal does require Docker, but the beauty of Kamal is that it handles all of that for you. I have used Kamal for Rails and Go applications with wild success. Kamal makes it easy to host multiple Rails applications on a single VM. I highly recommend you check it out.
I run Puma as the www-data user, but also have a separate passwordless user for administering the app, and I pull deployments from a dedicated git branch. This user has minimal permissions and no login shell (sudo adduser --shell /sbin/nologin myappuser): basically only what's required to pull from git, precompile assets and run the Rails console. To deploy and work with the app I SSH in as myself and sudo su myappuser.
One thing I remember struggling with was how to give Puma access to the contents of master.key. In the end I just made a copy in config called master_key.env (also added to .gitignore, tyvm) in which the contents of master.key are prefixed with RAILS_MASTER_KEY=, to format it as an environment variable. This lets me load the master key into the environment via the Puma unit file with EnvironmentFile=/var/www/myapp/config/master_key.env.
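The conversion itself is a one-liner. A self-contained sketch, using the filenames from the post above but a fake demo key so it can run anywhere:

```shell
# Sketch: wrap config/master.key's contents in KEY=value form so systemd's
# EnvironmentFile= directive can load it. The key below is a fake demo value.
mkdir -p config
printf 'facefeeddeadbeef' > config/master.key   # demo key only
umask 077                                       # keep the env copy private
printf 'RAILS_MASTER_KEY=%s\n' "$(cat config/master.key)" > config/master_key.env
cat config/master_key.env
# -> RAILS_MASTER_KEY=facefeeddeadbeef
```

Running this from the app root after each key rotation keeps master_key.env in sync with master.key.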
As I run Puma with systemd socket activation I also have a /etc/systemd/system/puma.socket file, which puma.service requires:
[Unit]
Description=Puma HTTP Server Sockets
[Socket]
ListenStream=/var/www/myapp/tmp/puma.0.sock
ListenStream=/var/www/myapp/tmp/puma.1.sock
[Install]
WantedBy=sockets.target
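The matching service unit isn't shown in the post; a minimal sketch of how puma.service can require the socket unit might look like this (user, paths and the EnvironmentFile are assumptions, reusing the names from this thread):

```
[Unit]
Description=Puma HTTP Server
Requires=puma.socket
After=network.target

[Service]
# Type=notify works with Puma 5.1+, which speaks sd_notify natively.
Type=notify
User=myappuser
WorkingDirectory=/var/www/myapp
EnvironmentFile=/var/www/myapp/config/master_key.env
Environment=RAILS_ENV=production
ExecStart=/var/www/myapp/bin/bundle exec puma -C config/puma/production.rb
Restart=always

[Install]
WantedBy=multi-user.target
```

With Requires= plus socket activation, systemd creates the listening sockets first and hands them to Puma on start.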
The config/puma/production.rb file doesn’t do much, most important is the bind_to_activated_sockets directive:
threads_count = ENV.fetch("RAILS_MAX_THREADS", 3)
threads threads_count, threads_count
bind_to_activated_sockets "only"
# Allow puma to be restarted by `bin/rails restart` command.
plugin :tmp_restart
# Run the Solid Queue supervisor inside of Puma for single-server deployments
plugin :solid_queue if ENV["SOLID_QUEUE_IN_PUMA"]
# Specify the PID file. Defaults to tmp/pids/server.pid in development.
# In other environments, only set the PID file if requested.
pidfile ENV["PIDFILE"] if ENV["PIDFILE"]
environment "production"
stdout_redirect "log/access.log", "log/error.log", true
Finally I have an nginx front-end that directs requests to the myapp socket(s), the upstream being configured in /etc/nginx/conf.d/upstreams.conf:
upstream myapp {
server unix:/var/www/myapp/tmp/puma.0.sock;
server unix:/var/www/myapp/tmp/puma.1.sock;
}
And the front-end in /etc/nginx/sites-available/myapp.conf (just an excerpt here, of the relevant bits):
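For readers following along, a minimal server block of that shape might look like this. It is a sketch, not the poster's actual config; it assumes the myapp upstream above, plain HTTP, and illustrative hostnames/paths:

```
server {
    listen 80;
    server_name myapp.example.com;
    root /var/www/myapp/public;

    # Serve precompiled assets and static files directly;
    # hand everything else to the Puma upstream.
    try_files $uri @myapp;

    location @myapp {
        proxy_pass http://myapp;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```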
Thank you for taking the time. I deploy with git too, but SSH is only reachable over IPv6 or a WireGuard tunnel from one IP. So for now I've settled on one user that owns the app and the git repo and is allowed the single passwordless sudo command to restart the systemd puma.service - and that user can log in, I think - let me check ... yep, he can log in. I don't use Solid Queue, Puma sockets or nginx, and the production database is SQLite. But I now see I'm quite a noob - lots to learn.