How We Fell Out of Love with Next.js and Back in Love with Ruby on Rails & Inertia.js


This is part 1 of a series documenting Hardcover’s Alexandria release. We recently migrated our codebase from Next.js to Ruby on Rails, and it’s been amazing so far! It was a learning experience, and I’m excited to share some of our takeaways. I’ll link each article here as it’s written.

  • Introducing Alexandria: Faster, Smoother, Smarter
  • Part 1: How we fell out of love with Next.js and back in love with Ruby on Rails & Inertia.js
  • Part 2: Moving from the Cloud to the Server – Google & AWS to Digital Ocean and Kamal
  • Part 3: How we use Puppeteer to generate OpenGraph images
  • Part 4: Speeding up Ruby on Rails with Solid Cache, server side rendering, Sidekiq, and Brick
  • Part 5: Securing and Speeding up our API Server

Today’s focus is on the main reason for the move from Next.js to Ruby on Rails. This is the first question anyone asks, and the most important one. So let’s dive into it.

Just a side note: this is going to be a software development related post, not a book related post. If you’re here for the Book Vibes, I’d encourage you to read about the release first.

How We Got Here

When Hardcover started, I was primarily a Ruby on Rails developer. I had experience with JS frontends, but Rails was my jam. I’ve been building things in it since before Rails 1.0, worked at multiple startups that use it, built courses to teach it, spoken at meetups about it, and been to conferences.

I’ve been all in with Rails for a while.

In the late 2010s, single page applications broke out as an alternative way to create applications. It was the JavaScript framework Cambrian Explosion – with jQuery as the common ancestor (in spirit at least).

We had Backbone.js, Ember.js, Angular.js, Vue.js, React.js and many more. I’ve shipped code in each of these frameworks that has been seen by millions of people (which is easier than it sounds! I’ve heard you can just do stuff – especially on the Internet).

The biggest point of friction with these frameworks was always how they would integrate with Ruby on Rails. Rails has gone down a different route for its sanctioned front-end path – using Islands Architecture, Stimulus Controllers and other solutions (which are all preferred to Rails’ previous .rjs syntax).

I’ll be honest: I haven’t spent the time to fully learn this architecture. I can’t criticize it, other than that the code looks weird to me. It does have advantages over JS frameworks, but you’re still writing JS either way.

For most of the past decade, using these popular frameworks in Rails has meant giving up Server Side Rendering. In 2021 when I started Hardcover, that one fact alone meant it was out. We knew we’d need to rely on SEO to find new readers and I didn’t want to sacrifice that.

Enter Next.js

Right as I was deciding on what framework to use that would allow for SEO + JavaScript, Next.js announced version 9.5 – adding incremental static regeneration and other features that would allow us to render pages with SSR and expire the cache. It sounded perfect.

I set up the initial architecture of Hardcover using Next.js with the Pages Router, hitting a GraphQL API (Hasura) for getting data, and caching as much as possible using Incremental Static Regeneration. The first load was often a bit slow, but caching helped.

One decision I made during this phase would come back to haunt me: cache facts on the server, fetch user data in the browser.

For instance, if you’re viewing a reader’s profile, everything we show there would be fetched client-side from our GraphQL API. Our API returns different results depending on your relationship with the reader you’re viewing. Someone can mark a book, list or goal as public, private or visible only to people who follow them – which all determine what other readers can see.

This approach left that logic to the API, allowing the front-end to be dumb and just show whatever was returned. It worked.
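Those visibility rules can be sketched in a few lines. This is a hypothetical illustration of the logic described above, not Hardcover’s actual code – the `Reader` and `UserBook` classes and their methods are made up for the example:

```ruby
# Hypothetical sketch of the privacy rules described above (not Hardcover's code).
class Reader
  attr_reader :name, :followers

  def initialize(name)
    @name = name
    @followers = []
  end

  def follow(other)
    other.followers << self
  end

  def followed_by?(viewer)
    followers.include?(viewer)
  end
end

class UserBook
  attr_reader :owner, :privacy

  # privacy is :public, :followers_only or :private
  def initialize(owner:, privacy:)
    @owner = owner
    @privacy = privacy
  end

  def visible_to?(viewer)
    return true if viewer == owner        # you always see your own books
    case privacy
    when :public         then true
    when :followers_only then owner.followed_by?(viewer)
    else false                            # :private
    end
  end
end
```

The point is that the check depends on the *viewer*, which is why responses couldn’t be cached per-page: the same URL returns different data for different people.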

Behind the scenes we were still using Ruby on Rails for the entire backend. This is what the architecture looked like at the time.

From 2021 to 2022 we continued building like this. We were able to ship fast, but the app kept getting slower. As more readers joined, we were hitting our API servers hard, and we couldn’t cache anything at that level. If we wanted server-side caching, we needed to move data fetching to the server.

A Wild App Router Appears

In mid-2022, Next.js launched the App Router – a new way to server side render Next.js apps. I loved the idea. It felt more like the Rails I was used to. We immediately started migrating Hardcover to this approach.

We switched to fetching all data – facts and user data alike – on the server using the app directory and React Server Components. This went surprisingly smoothly! There were some rough days getting next-auth and Apollo working together, but we made it through.

At this point, we used the user’s API token on the server side to make requests to the API – the same token used on the client side. This ensured they would only see data they had access to. In my misunderstanding of Next.js’s caching, I thought those requests would be cached. Since Apollo’s GraphQL requests are POST requests, they weren’t (😱).

Next.js had (has?) no clear tools for understanding what was being cached. I could use log statements to debug this, but since Next.js was overriding fetch for its caching, all of my code would run whether or not the response was cached.

When the application was migrated from the Pages Router to the App Router in April 2024, it wasn’t the massive improvement I was hoping for. Now I know that’s because the cache wasn’t being used – back then I didn’t.

Two other things happened around this time that increased our frustration with Next.js and Vercel.

When we moved to the App Router, our bill increased – which was anticipated. What we didn’t expect was a pricing change the month we launched, after months of building.

Our hosting bill grew from $30 in April to $142 by June, then $354 in August. Hardcover was growing, but a 10x cost increase in a few months was too much.

We tried using the @neshca/cache-handler Redis cache handler for Next.js – which gave us the most insight into what was being cached that we’d seen up until that point. However, our bill continued to rise.

We migrated to Google Cloud Run to see if it would be cheaper. For the first month or two it was! Our Google Cloud bills dropped to $311, then $286, but then continued to climb to $524 in February 2025.

Side note: the big orange charge was when someone attempted to download every image from Google Cloud Storage, which resulted in us moving from Google Cloud Storage to CloudFlare R2.

This chart wasn’t looking great if we wanted to reach profitability as a team. Alongside this, our application had gotten significantly slower – both in development and in production. We’d started using code splitting for larger JS scripts, but our bundle size wasn’t budging without major changes.

On the local development side things were worse. As new team members joined, I’d have to apologize for how long it would take to load a single page – often up to a minute. That was all compilation time on the Next.js server for our test page.

Next.js must’ve been hearing this same feedback from other developers because they started working on Turbopack, which speeds up compilation. I tried this every month or so, but I was never able to get it to work with Hardcover. I hear it’s stable as of the latest version, but we were already deep into the migration by that point.

At this point, we had the following problems with Next.js:

  • Unclear caching, which would need a large rewrite to change.
  • Growing and unpredictable bills due to serverless architecture.
  • Slow development speeds, making even small changes take a long time.

One additional problem wasn’t Next.js specific, but architecture related: I wanted to switch to getting server side data directly from a database connection rather than through GraphQL. I looked into Prisma, TypeORM and Sequelize to switch.

I had concerns about this approach with a serverless architecture. Our database has a connection limit, which meant we’d need to cap serverless concurrency anyway to stay within our database connection pool. Either that, or move to a cloud database like Neon, but at our DB size that would be $700/month minimum.

This was a realistic option to speed things up, but the slow development times and higher costs would have continued.
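The arithmetic behind that connection-limit concern is simple but unforgiving: each concurrently running serverless instance holds its own database connections, so instance count times connections-per-instance has to stay under the database’s limit. A toy check – the helper and the numbers below are illustrative, not Hardcover’s actual limits:

```ruby
# Illustrative only -- the numbers and helper are made up, not Hardcover's.
# Reserve a few connections for migrations, consoles and background jobs.
def connection_budget_ok?(instances:, connections_per_instance:, db_limit:, reserved: 5)
  instances * connections_per_instance + reserved <= db_limit
end
```

With a hard cap of 100 connections, 10 instances with a pool of 5 each fits comfortably; 50 instances blows the budget. That’s the tension with serverless: the thing you’re paying for (elastic scale-out) is exactly the thing your database can’t absorb.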

If Not Next.js, What?

By August 2024, after 3 years of development with Next.js, I wasn’t optimistic about the issues I’d encountered being addressed. I started looking into alternatives.

My goals for a “better” version were ambitious:

  • Continue to render everything on the server – we need SSR for SEO.
  • Switch to direct database connection for fetching all data.
  • Continue to use React.js for the front-end.

This led to two real options: Remix or Ruby on Rails. I looked into Remix for about one day before I realized the learning curve was more than I’d be comfortable with for this type of migration.

When it came to running React with Rails, I found three options:

I gave each of these a try, building a proof of concept with each. There were parts of each I loved. I think react_on_rails has a chance of being the fastest of these for the user – but it may need the pro version and some support.

Enter Inertia.js

Inertia.js landed in the sweet spot of performance, SSR and just getting out of the way. If you follow me on BlueSky, you’ve no doubt heard me talking about Inertia over the last few months. 😂

I wasn’t expecting a Laravel project – a framework itself inspired by Rails – to end up developing something I loved more than The Rails Way. DHH’s software choices have usually been in step with my own (emphasis on software), but we differ when it comes to TypeScript and the front-end. The direction Laravel went with Inertia.js is such a great choice, and I really enjoy building with this paradigm.

Before digging into the individual parts, let’s look at Inertia.js at a high level to get a better idea of what it even does. Here’s how a request works in practice – in this example, I’ll use the Hardcover homepage.

This page shows a bunch of static data, trending books, a prompt, a Hardcover Live and a few blog posts. All of this data can change whenever, but it makes sense to cache it. Here’s what that entire request looks like:

A Rails + Inertia.js Request

At the controller level we do exactly what you’d expect from a Rails application. We have a route which handles this endpoint:

config/routes.rb

Rails.application.routes.draw do
  namespace :clientverse, path: "/" do
    namespace :pages, path: "/" do
      get :home
    end
  end
end

We also scope everything that’s shown to the user under a “clientverse” namespace. That allows us to have a pages_controller.rb which extends from a base_controller.rb for all Inertia.js generated pages.

That base_controller.rb is relatively small.

app/controllers/clientverse/base_controller.rb

module Clientverse
  class BaseController < ApplicationController
    include ApplicationHelper
    include ReduxHelper
    include UserHelper
    include ErrorHelper

    before_action :confirm_user
    before_action :confirm_onboarding
    before_bugsnag_notify :add_user_info_to_bugsnag

    helper_method :global_variables
    helper_method :metadata
    helper_method :html_attribute
    helper_method :theme

    inertia_share do
      {
        generatedAt: Time.now.to_i,
        pathName: request.path,
        metadata: default_metadata,
        rootState: root_store,
        userBookStatusMap: InertiaRails.optional { user_books_status_map },
        flash: flash.to_h || {}
      }
    end
  end
end

Most of these helpers aren’t important for this example, but they’ve come in handy. The important one here is the inertia_share call. That allows us to add that data to every Inertia request that comes in. The userBookStatusMap includes the logged-in reader’s status on every book, which we only load when requested.
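The idea behind InertiaRails.optional is lazy evaluation: the value is stored as a block and only computed when the client explicitly asks for that prop (e.g. in a partial reload). Here’s a simplified sketch of that pattern – the `SharedProps` class is my own illustration, not the gem’s actual implementation:

```ruby
# Simplified sketch of the lazy-prop idea behind InertiaRails.optional
# (not the gem's real implementation). Optional props are stored as blocks
# and only evaluated when requested by name.
class SharedProps
  def initialize
    @eager = {}
    @optional = {}
  end

  def set(key, value)
    @eager[key] = value
  end

  def optional(key, &block)
    @optional[key] = block
  end

  # `only:` mimics an Inertia partial reload asking for specific props.
  def resolve(only: nil)
    result = @eager.dup
    @optional.each do |key, block|
      result[key] = block.call if only&.include?(key)
    end
    result
  end
end
```

This is why an expensive prop like userBookStatusMap costs nothing on a normal page load: its block simply never runs.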

The pages_controller.rb extends from this.

app/controllers/clientverse/pages_controller.rb

module Clientverse
  class PagesController < BaseController
    def home
      render inertia: 'clientverse/pages/home', props: home_props
    end

    private

    def home_props
      Rails.cache.fetch("pages/home", expires_in: 1.hour) do 
        lives = WordPressService.new.lives(limit: 1)
        {
          featuredPrompt: PromptSerializers::PromptSerializer.one(Prompt.find_by(slug: "what-are-your-favorite-books-of-all-time")),
          trendingBooks: BookSerializers::BookBylineSerializer.many(TrendingBookService.for(start_date: 1.month.ago, limit: 12)),
          live: lives.empty? ? nil : WordPressSerializers::LiveSerializer.one(lives.first),
          posts: WordPressSerializers::PostSerializer.many(WordPressService.new.posts(limit: 3)),
          metadata: default_metadata
        }
      end
    end
  end
end

When the root / route is accessed, this home method will be called. That will render the React.js component at clientverse/pages/home.tsx passing in the props returned from home_props.

The entire props hash is wrapped in a Rails.cache.fetch block. This decade-plus-old feature of Rails still feels like magic. It checks the cache (in our case Solid Cache, stored in Postgres) for the key “pages/home”. If a fresh entry exists, it returns that and never runs the code in the block.

If that cache key doesn’t exist, or it’s past the expiration date, it runs the code, saves the result to the cache and returns it.
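The fetch-or-compute contract can be sketched in a few lines. This toy `TinyCache` class just mimics the semantics – the real thing is ActiveSupport::Cache with pluggable stores like Solid Cache:

```ruby
# Toy in-memory cache mimicking the Rails.cache.fetch contract (not Rails code):
# return the cached value if present and unexpired, otherwise run the block,
# store its result and return it.
class TinyCache
  def initialize
    @store = {}
  end

  def fetch(key, expires_in:)
    entry = @store[key]
    return entry[:value] if entry && Time.now < entry[:expires_at]

    value = yield
    @store[key] = { value: value, expires_at: Time.now + expires_in }
    value
  end
end
```

Call it twice with the same key and the block runs once – which is exactly why the homepage props below cost one computation per hour, not one per request.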

The end result is that loading the entire homepage only takes one query (if you’re logged out). This makes it super quick.

Ok, we know what to render for this specific page, but we need a place to render it! For that we use what you’d expect from any Rails app: the app/views/application_layout.html.erb file.

app/views/application_layout.html.erb


<!DOCTYPE html>
<html lang="en" class="h-full antialiased <%= theme === "dark" ? "dark" : "" %>">
  <head>
    <title><%= metadata[:title] ? "#{metadata[:title]}#{metadata[:title_template]}" : metadata[:title_default] %></title>

    <%= csp_meta_tag %>
    <%= csrf_meta_tags %>
    <%= inertia_ssr_head %>
    <%= vite_client_tag %>
    <%= vite_javascript_tag "application.tsx", async: true %>
    <%= vite_stylesheet_tag "application" %>
    <%= vite_react_refresh_tag %>
  </head>
  <body>
    <%= yield %>
  </body>
</html>

This file still feels a bit magical to me. There’s CSRF protection, hot module reloading, SSR hacks, and other custom scripts to make it all work with React.

I expected to need a root element here, but that’s not the case. The entire page you render will be filled into the <%= yield %> section.

For what shows up there, that’s the file we rendered from the controller, in this case app/javascript/pages/clientverse/pages/home.tsx. Here’s what that looks like for us:

app/javascript/pages/clientverse/pages/home.tsx

import Hero from "components/marketing/home/Hero";
import SubHeaderFeatures from "components/marketing/home/SubHeaderFeatures";
import TrendingBooks from "components/marketing/home/TrendingBooks";
import TrackBooks from "components/marketing/home/TrackBooks";
import { DefaultLayoutWrapper } from "layouts/DefaultLayout";
import AdditionalSections from "components/marketing/home/AdditionalSections";

function HomeIndex() {
  return (
    <main>
      <Hero />
      <div className="h-4" />
      <SubHeaderFeatures />
      <div className="h-2" />
      <TrendingBooks />
      <div className="h-12" />
      <TrackBooks />
      <div className="h-6" />
      <AdditionalSections />
    </main>
  );
}

HomeIndex.layout = DefaultLayoutWrapper;

export default HomeIndex;

That DefaultLayoutWrapper is doing a lot of work wrapping the entire application. We have that code in almost every root page shown. This is one area of Inertia.js I haven’t found a good solution for. According to the docs, it’s supposed to be possible to set a default layout, but I haven’t gotten that to work (yet).

We’ve found it’s helpful to create a Type for everything that’s passed into React for each request as well. This could use Oj Serializers (my preferred serialization library), or the types can be written by hand. I prefer writing them by hand, so the home.props.ts file can live right next to the home.tsx file.

app/javascript/pages/home.props.ts

import BlogPostType, { BlogLiveType } from "types/BlogPostType";
import { BookSerializersBookByline, PromptSerializersPrompt } from "types/serializers";

type HomeProps = {
  featuredPrompt: PromptSerializersPrompt;
  live: BlogLiveType;
  posts: BlogPostType[];
  trendingBooks: BookSerializersBookByline[];
}
export default HomeProps;

Side note: We use the Ruby types_from_serializers gem to generate TypeScript types for all serializers. That allows us to set a type in a serializer and see it across the entire stack. 🤯 I owe Máximo Mussini big for how much time this has saved (and he wrote vite_ruby too! A real Ruby Hero).

In the home component we’re not actually accepting any of these props! We passed them down, but we don’t consume them there – we can grab them wherever we need them. For example, in the TrendingBooks component:

app/javascript/components/marketing/home/TrendingBooks

import { usePage } from "@inertiajs/react";
import BookTrendingGroup from "components/BookGroup/groups/BookTrendingGroup";
import { BookTrendingContextType } from "components/BookGroup/types";
import Container from "hardcover-ui/components/Container";
import HomeProps from "pages/clientverse/pages/home.props";

const context: BookTrendingContextType = {
  link: "/trending/recent",
  duration: "90day",
};

export default function TrendingBooks() {
  const { trendingBooks } = usePage<HomeProps>().props;
  if (trendingBooks.length === 0) {
    return false;
  }

  return (
    <Container size="md" variant="layout" className="mt-12 overflow-hidden">
      <BookTrendingGroup size="lg" books={trendingBooks} context={context} />
    </Container>
  );
}

The usePage hook grabs those values from the top-level page props, no matter where the component sits in the tree. The same can be done in our layout to get the user’s data, like showing the current user’s avatar when logged in.

This covers most of the render cycle. The one missing piece is the entry point – the application.tsx script included in the layout. This script caused me the most headaches, so I’m going to include our exact one here in case it helps someone else.

app/javascript/entrypoints/application.tsx

// Add this polyfill to fix a warning with Redux
import 'symbol-observable';
import { createInertiaApp } from '@inertiajs/react';
import { createElement } from 'react'
import { createRoot, hydrateRoot } from 'react-dom/client'

createInertiaApp({
  resolve: (name) => {
    const pages = import.meta.glob('../pages/**/*.tsx')
    return pages[`../pages/${name}.tsx`]();
  },

  setup({ el, App, props }) {
    if (import.meta.env.VITE_SSR) {
      hydrateRoot(el as unknown as Element, createElement(App, props) as unknown as any)
    } else {
      const root = createRoot(el)
      root.render(createElement(App, props) as unknown as any)
    }
  },
});

This will make every page available and render it. In production, we set the VITE_SSR variable, which switches this to hydrate mode. Just ignore the as unknown as any casts. 😂

If you’ve had success with other solutions here, I’d love to chat.

The Vite Server

Inertia works extremely well with Vite. In our case, we have a separate Vite process that runs a Vite server locally. Here’s our local docker-compose.yml file for this.

docker-compose.yml

services:
  rails:
    build:
      context: ./rails
    depends_on:
      - postgres
      - redis
      - typesense
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:3000/up || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 5
    container_name: hardcover-rails
    command: bash -c "bundle install && yarn install && rm -f tmp/pids/server.pid && bundle exec rake db:migrate && bin/rails s -p 3000 -b '0.0.0.0'"
    stdin_open: true
    tty: true
    environment:
      RAILS_PRIMARY_KEY: ${RAILS_PRIMARY_KEY}
      VITE_RUBY_HOST: "vite"
    restart: always
    ports:
      - 3000:3000
    volumes:
      - ./rails:/app

  vite:
    build:
      context: ./rails
    container_name: hardcover-vite
    command: bash -c "yarn install && bin/vite dev"
    environment:
      VITE_ENV: development
      VITE_RUBY_HOST: 0.0.0.0
    restart: always
    ports:
      - 3036:3036
    volumes:
      - ./rails:/app

On production, the Vite server runs alongside Rails as an accessory deployed through Kamal.

config/deploy.production.yml

# Name of the container image.
image: registry.digitalocean.com/hardcover/rails-production

# Configure builder setup.
builder:
  arch: amd64
  dockerfile: Dockerfile.production

# Deploy to these servers (production servers).
servers:
  web:
    hosts:
      - 1.2.3.1 # app-1
      - 1.2.3.2 # app-2
      - 1.2.3.3 # app-3
      - 1.2.3.4 # app-4
    cmd: ./bin/rails server -b 0.0.0.0 -p 80
    options:
      memory: 2g
  vite:
    hosts:
      - 1.2.3.1 # app-1
      - 1.2.3.2 # app-2
      - 1.2.3.3 # app-3
      - 1.2.3.4 # app-4
    cmd: bin/vite ssr
    options:
      network-alias: vite_ssr
      memory: 1g
  worker:
    hosts:
      - 4.3.2.1 # production-worker-1
    cmd: bundle exec sidekiq -e production -C config/sidekiq.yml
    proxy: false
    options:
      memory: 4g
    env:
      clear:
        RUN_MIGRATIONS: true
        SIDEKIQ_CONCURRENCY: 50
  workers:
    hosts:
      - 4.3.2.2 # production-worker-2
    cmd: bundle exec sidekiq -e production -C config/sidekiq.yml
    proxy: false
    options:
      memory: 4g
    env:
      clear:
        RUN_MIGRATIONS: false
        SIDEKIQ_CONCURRENCY: 25

proxy: 
  ssl: false
  app_port: 80
  healthcheck:
    path: /up
    interval: 3
    timeout: 120

# Environment variables specific to production.
env:
  clear:
    RAILS_MAX_THREADS: 6
    RAILS_ENV: production
    PORT: 80
    VITE_RUBY_HOST: vite_ssr
    VITE_ENV: production
    VITE_SSR: true
  secret:
    - RAILS_PRIMARY_KEY

Combine that with a production Docker image that runs VITE_SSR="true" ./bin/vite build --ssr on deploy, and we only need to generate the main assets once.

This setup allows the Rails server to talk to the Vite server to get what it needs. Sometimes that’s generating a page using SSR; other times it’s just providing the CSS and JS needed to run the page.

Similar to any Rails application, we set an asset host that all JS and CSS goes through ( config.asset_host = "https://static.hardcover.app" ). This host is cached at the CloudFlare level with long expiration dates.

You might also notice the RUN_MIGRATIONS part there. That’s used within our bin/docker-entrypoint script to determine which server runs migrations – which only needs to happen on one server.

bin/docker-entrypoint

#!/bin/bash -e

# Enable jemalloc for reduced memory usage and latency
if [ -z "${LD_PRELOAD+x}" ]; then
    LD_PRELOAD=$(find /usr/lib -name libjemalloc.so.2 -print -quit)
    export LD_PRELOAD
fi

# Only run database migrations on a specific server
if [ "$RUN_MIGRATIONS" = "true" ]; then
  echo "Preparing database..."
  ./bin/rails db:prepare
else
  echo "Skipping database preparation on this server."
fi

exec "$@"

Lastly, we have our Dockerfile that connects everything.

config/Dockerfile.production

# syntax=docker/dockerfile:1
# check=error=true

# This Dockerfile is designed for production, not development. Use with Kamal or build'n'run by hand:
# docker build -t hardcover .
# docker run -d -p 80:80 -e RAILS_PRIMARY_KEY= --name hardcover-rails-production hardcover-rails-production

# For a containerized dev environment, see Dev Containers: https://guides.rubyonrails.org/getting_started_with_devcontainer.html

# Make sure RUBY_VERSION matches the Ruby version in .ruby-version
ARG RUBY_VERSION=3.3.5
FROM docker.io/library/ruby:$RUBY_VERSION AS base

# Rails app lives here
WORKDIR /rails

# Install base packages
RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y curl libjemalloc2 libvips sqlite3 libpq-dev && \
    rm -rf /var/lib/apt/lists /var/cache/apt/archives

# Set production environment
ENV RAILS_ENV="production" \
    BUNDLE_DEPLOYMENT="1" \
    BUNDLE_PATH="/usr/local/bundle" \
    BUNDLE_WITHOUT="development:test"

# Install JavaScript dependencies
ARG NODE_VERSION=22.11.0
ARG YARN_VERSION=1.22.22
ENV PATH=/usr/local/node/bin:$PATH
RUN curl -sL https://github.com/nodenv/node-build/archive/master.tar.gz | tar xz -C /tmp/ && \
    /tmp/node-build-master/bin/node-build "${NODE_VERSION}" /usr/local/node && \
    npm install -g yarn@$YARN_VERSION && \
    rm -rf /tmp/node-build-master

# Throw-away build stage to reduce size of final image
FROM base AS build

# Install packages needed to build gems
RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y build-essential git libpq-dev node-gyp pkg-config && \
    rm -rf /var/lib/apt/lists /var/cache/apt/archives

# Install application gems
COPY Gemfile Gemfile.lock ./
COPY gems/ ./gems
RUN bundle install && \
    rm -rf ~/.bundle/ "${BUNDLE_PATH}"/ruby/*/cache "${BUNDLE_PATH}"/ruby/*/bundler/gems/*/.git && \
    bundle exec bootsnap precompile --gemfile


# Install node modules
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# Copy application code
COPY . .

# Precompile bootsnap code for faster boot times
RUN bundle exec bootsnap precompile app/ lib/

# Precompiling assets for production without requiring secret RAILS_PRIMARY_KEY
RUN SECRET_KEY_BASE_DUMMY=1 VITE_RUBY_HOST="vite_ssr" VITE_ASSET_URL="https://storage.googleapis.com/hardcover" VITE_BUGSNAG_KEY="2667973d1eae42fd4fa3049d0abc7274" VITE_CDN_URL="https://assets.hardcover.app" VITE_ENV="production" VITE_GRAPHQL_URL="https://api.hardcover.app/v1/graphql" VITE_RESIZE_URL="https://img.hardcover.app" VITE_SSR="true" VITE_TYPESENSE_KEY="7JRcb63AvYIo2WJvE3IzH4f8j1z9fHcC" VITE_TYPESENSE_URL="https://search.hardcover.app" ./bin/rails assets:precompile
RUN SECRET_KEY_BASE_DUMMY=1 VITE_RUBY_HOST="vite_ssr" VITE_ASSET_URL="https://storage.googleapis.com/hardcover" VITE_BUGSNAG_KEY="2667973d1eae42fd4fa3049d0abc7274" VITE_CDN_URL="https://assets.hardcover.app" VITE_ENV="production" VITE_GRAPHQL_URL="https://api.hardcover.app/v1/graphql" VITE_RESIZE_URL="https://img.hardcover.app" VITE_SSR="true" VITE_TYPESENSE_KEY="7JRcb63AvYIo2WJvE3IzH4f8j1z9fHcC" VITE_TYPESENSE_URL="https://search.hardcover.app" ./bin/vite build --ssr

# Disable deleting node modules to test puppeteer
# RUN rm -rf node_modules

# Final stage for app image
FROM base

# Copy built artifacts: gems, application
COPY --from=build "${BUNDLE_PATH}" "${BUNDLE_PATH}"
COPY --from=build /rails /rails
COPY public/robots.production.txt public/robots.txt

# Run and own only the runtime files as a non-root user for security
RUN groupadd --system --gid 1000 rails && \
    useradd rails --uid 1000 --gid 1000 --create-home --shell /bin/bash && \
    chown -R rails:rails db log storage tmp
USER 1000:1000

# Entrypoint prepares the database.
ENTRYPOINT ["/rails/bin/docker-entrypoint"]

# Start server
EXPOSE 80

I don’t love this, but it works. We have a staging one that’s pretty similar. I suspect there are better ways to send over the VITE_ public variables. I compile the Docker image and upload it as part of a Kamal release using the command:

kamal deploy build --push -d production

I switched to a Makefile command and added deployment to it.

Makefile

# .PHONY: deploy

deploy:
	kamal deploy build --push -d production && kamal deploy -d production
deploy-staging:
	kamal deploy build --push -d staging && kamal deploy -d staging

With this, I can run make or make deploy to deploy the app to production, or make deploy-staging to send my local workspace there.

Since this upload happens from my computer rather than a post commit hook, it does need to be run locally. Eventually I’ll move this to a GitHub action.

Still Room For Improvement

This setup isn’t perfect. There are a few parts that still are a little rough around the edges.

As I mentioned earlier, I haven’t successfully gotten shared layouts to work. That means each request re-renders the entire page – not just the subparts relevant to the current route. That was OK for us, since most route changes are a full page update anyway, and the header/footer re-render is just a client-side React component re-render, not a full page reload.

SSR mode has been tough to debug. I haven’t found a way to easily get it running without doing a full compile, setting a bunch of variables and reproducing production. This makes debugging SSR hydration errors in production tricky.

Documentation is limited for Inertia.js, and especially for Ruby on Rails with Inertia – though the Inertia Rails gem docs are AMAZING and handled just about all of my questions. Sometimes it’s tricky to tell whether a problem is in Rails, Inertia Rails, Inertia.js, React.js or Vite. The Inertia.js Discord has been GREAT: each time I’ve asked a question, I’ve had an answer within minutes (and it was always my problem, not a framework issue).

Switching from using Promises to control Suspense layers to InertiaRails.optional with import { Deferred } from '@inertiajs/react' has felt a bit weird. It’s effectively the same, but it’s not the React Way. I guess I’m already abandoning the Rails Way, so I can’t be dogmatic about the React Way either. 😅

Railsy React

What I love about this setup is that I’m able to generate everything using familiar Ruby and Rails tools, and then use React.js for the entire front-end. There’s a lot I didn’t have to do. There’s no React Router, since it uses the Ruby on Rails router. In our case, we just needed to change from next/link to import { Link } from '@inertiajs/react' and it just worked.

I’m excited to use the InertiaRails.optional feature more. For example, on a Book Page we could send down all information about the book immediately (from a cache), then generate everything user-specific and send that down later. This is the Inertia.js equivalent of a streaming response. It’s managed by JS, so it isn’t quite Streaming SSR (which is handled by the server), but it’s close. If you’re only using it for non-SEO data (as we are), it’s effectively the same.

One thing we’re not doing is hitting our Rails API directly from the front-end (except in a few Devise-related cases). We have a Hasura GraphQL API that handles most requests. This means we’re not leveraging many other wonderful Inertia.js options – forms, flashes, file uploads and many other things that begin with F.

The architecture we’re building toward. We’re here now, aside from the second API server and follower database.

This is where we’re headed at an infrastructure level. We’re not quite there yet, but we’re close!

How’d This Change Impact Hardcover?

We deployed this migration from Next.js to Rails on March 18, 2025. I’d already set up all the servers, making for an easier migration than I expected. We’ve had a bunch of bugs I’ve been working through, but those will be fixed in time.

Almost immediately, Google started showing Hardcover to more readers. That was a sudden, and welcome, surprise.

That was likely because of our increased Google Pagespeed score.

This was considerably faster than Next.js (I should’ve taken a screenshot before!). The Total Blocking Time was usually over one second – one part we could never seem to improve. When we moved from Vercel to Google Cloud Run, Total Blocking Time even went up a bit – likely because of Vercel’s distributed edge network (I think?). Seeing this go down has been amazing. 🤯

We’re still figuring out how these changes impact readers. As we fix bugs, the site is becoming more stable and just fun to use. Over the last few days we’ve seen visit duration spike to almost 6 minutes – up from 3 minutes. It’s too soon to see if this is a long-term trend, but it’s nice to see this move in that direction.

The number of readers signing up for Hardcover has been stable throughout this migration – which is good news. Considering traffic has been similar, I’d expect signups to be similar as well.

Next steps will be fixing more bugs, cleaning up a few slow pages, and a lot more marketing so more readers can find us. If you can share Hardcover or this post, that would be a huge help. 📚

Next Article

In the next article in this series, I’ll talk about Moving from the Cloud to the Server – Google & AWS to Digital Ocean (affiliate link) and Kamal.

If you’re using Inertia.js or Ruby on Rails and would be interested in contributing to Hardcover, you should join our Discord! We’re preparing to open source, and looking for some developers to be part of shaping how we collaborate with volunteers and the community.
