Rails + Kubernetes slow boot time

Hi, when I deploy Rails to k8s, it takes too long for the HTTP server to become fully ready. There are two cases currently happening:

  1. With an existing EC2 node → it takes 20-30s before the pod is ready to serve HTTP requests.
  2. With no existing EC2 node → the pod remains in the Pending state for around 30s until a new node comes up, then takes another 60s or so before it is ready to serve HTTP requests, so the total is around 90s.

I have already tried these things:

  1. Profiling my boot time
  2. Removing unused gems
  3. Removing specific Rails components, such as Action Cable, that are unnecessary in our prod
  4. Using a smaller Ruby image → ruby:3.0.0-alpine
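
For item 1, one minimal way to see which requires dominate boot time is to time each gem's require individually. A sketch — the library names here are stdlib placeholders; substitute the gems from your Gemfile:

```ruby
require 'benchmark'

# Time each library's require to spot slow-loading gems.
# These stdlib names are placeholders for your app's gems.
%w[json csv logger].each do |lib|
  seconds = Benchmark.realtime { require lib }
  puts format('%-10s %.3fs', lib, seconds)
end
```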

So far the result is not significant, only reducing boot time by around 4 seconds in both cases. I'm targeting something much faster.

FYI, this is my setup:

ruby '3.0.0'
gem 'rails', '~> 6.0.3', '>= 6.0.3.2'
gem 'puma', '~> 4.3'

For running the server I'm using Puma in clustered mode with:

1 worker, 5 threads
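
Written as a Puma config file (config/puma.rb), that setup would look roughly like this — a sketch; preload_app! is an assumption on my part, but it is usually recommended in clustered mode for copy-on-write memory savings:

```ruby
# config/puma.rb -- sketch matching the setup above
workers 1        # clustered mode with a single worker process
threads 5, 5     # min and max threads per worker
preload_app!     # load the app before forking workers
```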

Note: the setup above is for my PoC only, so I need to test it on our staging server first. If we successfully achieve our target for a faster boot time, we can proceed to the production environment.

And one last thing, just out of curiosity: how long does your Rails app take to boot in production when scaling happens (at peak load)?

30 seconds is a normal boot time; the goal is to keep it under 30 seconds. You can also optimize your Docker layers for better caching and quicker builds.

You can delay loading of gems until they are needed:

gem 'foo', require: false

Then, in an initializer such as config/initializers/foo.rb:

require 'foo'

You need to identify which gems are used most frequently in your app, and check each gem's documentation to see if it recommends lazy loading.
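
Going one step further: requiring the gem in an initializer still runs at boot, so for rarely-used gems you can defer the require to the first code path that needs it. A sketch, using the stdlib csv as a stand-in for a heavier gem marked require: false:

```ruby
# Sketch: defer loading until first use instead of at boot.
# 'csv' here stands in for a heavyweight gem marked `require: false`.
module Reports
  def self.to_csv(rows)
    require 'csv' # loaded on the first call, not during boot
    CSV.generate { |out| rows.each { |r| out << r } }
  end
end

puts Reports.to_csv([[1, 2], [3, 4]]) # => "1,2\n3,4\n"
```

The trade-off is a small one-time latency on the first request that hits that path, in exchange for a faster boot.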