How do we determine when some Rails logic (e.g., performing string manipulations) is becoming a bottleneck on server scalability and should be moved client-side to JS?
When dealing with bottlenecks, we have to measure first (see below) before doing any optimizations. Usual suspects in the Rails world:
Database bottlenecks -- avoid unnecessary round-trips to your database, because they increase request latency. A commonly occurring problem in the Rails world is the N+1 queries problem. N+1 happens when you iterate over a collection of objects and load an association for each of them, causing N+1 SQL queries (hence the name). The way to avoid the N+1 problem is to use eager loading (i.e. `includes`).
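As a sketch of the difference (assuming hypothetical `Post` and `Comment` models inside a Rails app):

```ruby
# N+1: one query for the posts, plus one extra query per post for its comments.
posts = Post.all
posts.each do |post|
  puts post.comments.size
end

# Eager loading: two queries total, regardless of how many posts there are.
posts = Post.includes(:comments)
posts.each do |post|
  puts post.comments.size  # `size` uses the preloaded records; `count` would issue another query
end
```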
Memory bottlenecks -- Rails is a memory hog. Make sure your server has enough RAM (1 GB is a sane starting point), otherwise your app will use swap memory, which will negatively affect performance. If you're running a multi-threaded process (Sidekiq or Puma), you can reduce memory bloat by setting the `MALLOC_ARENA_MAX` environment variable to `2` (i.e. that will reduce glibc's memory arena count). FWIW, Heroku sets that environment variable by default on newly created apps.
There are several ways to benchmark Rails apps:
Use Rails application performance monitoring (APM) to track performance over time. My favorite options are Skylight and AppSignal (not affiliated with them). NewRelic and Scout are fine as well, but they'll cost you an arm and a leg. If you're on Heroku, use their built-in metrics + Ruby language metrics.
Use the Bullet gem to identify N+1 queries.
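A minimal Bullet setup (option names per Bullet's README) might look like this in your development config:

```ruby
# config/environments/development.rb
Rails.application.configure do
  config.after_initialize do
    Bullet.enable        = true  # turn Bullet on
    Bullet.bullet_logger = true  # write findings to log/bullet.log
    Bullet.rails_logger  = true  # also write them to the Rails log
  end
end
```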
String manipulations are rarely the bottleneck. If you're doing something fancy that creates a lot of strings and in turn hurts request latency, push that workload off to a background job.
Keep in mind that if you have mobile users, and you perform a lot of client-side operations, it will drain their battery like there's no tomorrow.
It has to be determined on a case-by-case basis, but generally speaking, I'd recommend against pushing logic from the server-side to the client-side. More often than not, your database will be the bottleneck, not Ruby or Rails.
Pro tip: to save memory for free, freeze your strings (the frozen string literal pragma was introduced in Ruby 2.3). By doing so, you'll allocate fewer string objects, which will save you some memory. There's a relevant article that makes a case for freezing strings.
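A quick demonstration of the pragma in plain Ruby:

```ruby
# frozen_string_literal: true

# With the pragma above, every string literal in this file is frozen,
# so identical literals can be deduplicated instead of allocated anew.
greeting = "hello"
puts greeting.frozen?   # => true

# Frozen strings can't be mutated in place; build a new one instead.
louder = greeting + "!" # allocates a new, unfrozen string
puts louder.frozen?     # => false
```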
Should we upgrade to Rails 6.x or Rails 5.x?
The general version of this question would be: "should we pay back our tech debt?" and my answer is yes. You should upgrade to Rails 6.x if you can afford it OR it is actively holding you back (i.e. you can't ship new features as fast as you'd like).
It's always faster, cheaper, and less painful to upgrade Rails when a new version comes out, instead of postponing it to the future.
The typical way of upgrading Rails is to upgrade one minor version at a time. In your case it will look something like this: 3.2.12 -> latest 3.2.x -> 4.0.13 -> 4.1.16 -> latest 4.2.x -> latest 5.0.x -> 5.1.7 -> latest 5.2.x -> latest 6.0.x (move to the latest patch release of each minor version along the way).
Note that the more dependencies you have, the harder it'll be to jump between versions (i.e. some gems are unmaintained and never get updated or tested against newer versions of Ruby/Rails). Before starting an upgrade, I'd recommend having a look at your Gemfile and removing gems that you don't need.
Having decent test coverage will expedite the upgrade process. If you don't have any tests at all, I'd recommend starting with integration tests, because they give you the most bang for the buck (i.e. they touch a wider stack than unit tests).
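For illustration, a minimal Rails integration test (in the minitest style Rails generates; the route and params are hypothetical) could look like:

```ruby
# test/integration/signup_flow_test.rb
require "test_helper"

class SignupFlowTest < ActionDispatch::IntegrationTest
  test "visitor can sign up" do
    get "/signup"
    assert_response :success

    # Hypothetical route and params -- adjust to your app.
    post "/users", params: { user: { email: "new@example.com", password: "secret123" } }
    assert_response :redirect
    follow_redirect!
    assert_response :success
  end
end
```

A single test like this exercises routing, the controller, the model, and the view in one go, which is exactly why integration tests pay off first during an upgrade.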
Typically, each new major version of Rails bumps up the minimum required Ruby version, so you'll have to take care of that as well.
Also, not to scare you off or anything, but your version of Rails has dozens of known security vulnerabilities. IANAL, but this might be problematic if you have an EU presence or EU customers, which would trigger GDPR compliance requirements.
Pro tip #1: to see which dependencies need to be updated, run `bundle outdated` from your application's root directory. I run this command every day I work on a codebase so that I can stay on top of updates.
Pro tip #2: if you can't be bothered to check for updates manually every now and then, I highly recommend installing Dependabot on your GitHub repo. It will scan your Gemfile and create pull requests when new security patches are out (obligatory warning: don't do that on your Rails 3.2.12 project, because it will flood you with pull requests).
Pro tip #3: Rails provides a `rake rails:update` task (renamed to `rails app:update` in Rails 5) that helps streamline the upgrade process by offering to update existing files and introduce new configs (so that you can pick and choose).
Pro tip #4: http://railsdiff.org is a handy website that shows how Rails has changed between versions.
Right now we only have one rails server -- how do we estimate how many simultaneous users our server can handle?
The answer depends on many factors, but I'll try to give you a ballpark estimate. I'm assuming you're running an app server (Puma, Unicorn, or Passenger) that sits behind a web server (nginx, Apache, or Caddy).
Your web server handles SSL/TLS termination, serves assets, protects against slow clients, and proxies requests back to your app server, amongst other things.
The way most app servers work in the Ruby world is that they create a single OS process and copy it (AKA fork it) multiple times. These child processes (Puma calls them workers) serve HTTP requests and return responses to your web server, which in turn sends them to clients.
There's a subtle difference between a process and a thread. Every new process (worker) eats more RAM, but has the advantage that it can serve requests in parallel with the other workers. Threads are cheaper, but due to Ruby's implementation, only one thread per process can execute Ruby code at a time (if you're curious how this works, read up on the Global Interpreter Lock (GIL), or in Ruby's case, the Global VM Lock (GVL)); threads still help when requests spend time waiting on I/O. In general, the more worker processes you can afford per server, the better (as I mentioned before, you have to be mindful of your RAM).
Roughly speaking, the number of simultaneous (concurrent) requests you can handle depends on the number of child processes (workers) times the number of threads each process has. For example, if you have a Puma server with 5 workers and each of them has 4 threads, you will be able to handle 20 requests simultaneously (assuming your database is capable of handling 20 concurrent connections).
Keep in mind that more processes and threads might do more harm than good. The only way to be sure is to keep experimenting.
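A `config/puma.rb` sketch matching the 5 × 4 example above (directives per Puma's config DSL; tune the numbers for your RAM and core count):

```ruby
# config/puma.rb
workers 5     # 5 forked worker processes
threads 4, 4  # 4 threads per worker => up to 20 concurrent requests
preload_app!  # load the app before forking to share memory via copy-on-write

port        ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { "production" }
```

Remember that the database pool in `config/database.yml` must be at least as large as the per-worker thread count, or threads will wait on connections.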
A couple of tips to get more performance out of your single server:
Use a CDN to serve your static assets. I assume you already minify and compress (gzip) your assets. Prefer a CDN provider with HTTP/2 and TLS 1.3 support (Cloudflare, AWS CloudFront, and Fastly are good choices). You can cache your assets aggressively, because Sprockets inserts a hash of the content into each file name (i.e. if the content of a file changes, so will its name).
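Pointing Rails at a CDN is a one-line config change (the CDN hostname below is a placeholder):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # Serve precompiled assets through the CDN instead of your app server.
  config.action_controller.asset_host = "https://cdn.example.com"
end
```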
Add indexes to your database. Rule of thumb: a field you're filtering on in a `WHERE` clause is a good candidate for an index. For example, with `SELECT * FROM users WHERE email = '?'`, the `email` column should be indexed.
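The matching migration (assuming a `users` table) would be:

```ruby
class AddIndexToUsersEmail < ActiveRecord::Migration[6.0]
  def change
    # unique: true also guards against duplicate emails at the database level
    add_index :users, :email, unique: true
  end
end
```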
Take work out of the request-response cycle. For example, sending an email or calling a third-party API should be done in the background. I highly recommend offloading most, if not all, of your I/O-bound tasks to Sidekiq (e.g. sending an email, fetching an RSS feed, receiving data from Stripe).
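A Sidekiq worker sketch (`perform_async` and `sidekiq_options` are Sidekiq's real API; `UserMailer` is a hypothetical mailer):

```ruby
class WelcomeEmailWorker
  include Sidekiq::Worker
  sidekiq_options queue: :mailers, retry: 5

  def perform(user_id)
    # Pass IDs, not objects: the job arguments are serialized to Redis.
    user = User.find(user_id)
    UserMailer.welcome(user).deliver_now
  end
end

# In the controller, enqueue and return the response immediately:
#   WelcomeEmailWorker.perform_async(user.id)
```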
Keep your Ruby and Rails versions up to date. Ruby's creator Matz wants Ruby 3 to be three times as fast as Ruby 2 (the "Ruby 3x3" goal), and the core team has been making steady progress toward it.
If you're using a managed database provider (e.g. AWS RDS or Heroku Postgres), make sure that your app server is in the same region/availability zone.
When in doubt, try to do less work.