Ruby on Rails (19 blogmarks)

How to Audit a Rails Codebase: Legacy App Playbook

https://piechowski.io/post/how-i-audit-a-legacy-rails-codebase/

There is a lot more going on in this article, but this is the line that stood out, perhaps because I've been discussing "shipping" with colleagues quite a bit recently:

Deploy frequency is a proxy for codebase health — teams with fragile apps stop shipping.

When you don't feel like you can deploy at certain times or on certain days, that is a big signal that there is a gap in trust somewhere in the socio-technical tapestry of your engineering org. Maybe you don't trust something about your deploy infrastructure or your test suite or even some aspect of your team.

real-world-rails: 200+ production open source Rails apps & engines in one repo

https://github.com/steveclarke/real-world-rails

This is an awesome resource. I've typically had a handful of open-source Rails apps in the back of my mind that I sometimes remember to go check out, but I've never seen this many in one place. What a great place to learn from, observe patterns, and borrow ideas.

I heard about this from this tweet:

I have 200+ production Rails codebases on my local disk. Discourse, GitLab, Mastodon, and a ton of others — all as git submodules in one repo. I've been referencing it for years.

For most of that time it meant a lot of manual grepping and reading file after file. Valuable but tedious. You had to be really motivated to sit there and read through that much source code.

This past year, with agentic coding, everything changed. Now I just ask questions and the agent searches all 200+ apps for me. "What are the different approaches to PDF generation?" "Compare background job patterns across these codebases." What used to take hours of reading takes a single prompt.

The original repo hadn't been updated in two years and I was using it enough that I figured I should fork it and bring it forward. So I did:

  • Updated all 200+ submodules to latest
  • Added Gumroad, @dhh's Upright, Fizzy, and Campfire
  • Stripped out old Ruby tooling (agents do this better now)
  • Added an installable agent skill
  • Weekly automated updates

If you're building with Rails, clone this and point your agent at it. If you know of apps that should be in here, open an issue or PR.

< github link >

PS: Hat tip to Eliot Sykes for the original repo.

Rails’ Partial Features You (Didn’t) Know

https://railsdesigner.com/rails-partial-features/

I'm a big fan of the way this article started with the basics and then layered in feature after feature in an approachable way, with examples that made me feel like I could apply these tips directly to my current codebase.

Though I knew most of this, it made for a useful refresher. I also liked how it connected the verbose versions (what I usually write) with the minimal, full-convention versions (e.g. rendering a partial collection with <%= render @users %>).

Both the layout and spacer_template options for a partial collection were new to me. TIL!
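
For my own reference, here's roughly what the verbose and full-convention versions look like, plus the two options that were new to me. Partial and model names are illustrative; see the article or the Rails guides for exact semantics.

```erb
<%# Verbose version: iterate and render the partial yourself. %>
<% @users.each do |user| %>
  <%= render "users/user", user: user %>
<% end %>

<%# Full-convention version: Rails infers the users/_user partial
    and the local variable name from the collection. %>
<%= render @users %>

<%# spacer_template renders users/_divider between items. %>
<%= render partial: "users/user", collection: @users,
           spacer_template: "users/divider" %>

<%# layout wraps the collection's output in another partial. %>
<%= render partial: "users/user", collection: @users, layout: "users/table" %>
```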

h/t Garrett Dimon

ActiveSupport::CurrentAttributes

https://api.rubyonrails.org/classes/ActiveSupport/CurrentAttributes.html

This can be used to house request-specific attributes needed across your application. A prime example of this is Current.user as well as authorization details like Current.role or Current.abilities.

A word of caution (from the API docs): It’s easy to overdo a global singleton like Current and tangle your model as a result. Current should only be used for a few, top-level globals, like account, user, and request details. The attributes stuck in Current should be used by more or less all actions on all requests. If you start sticking controller-specific attributes in there, you’re going to create a mess.
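
To make the mechanics concrete, here's a plain-Ruby sketch of the idea: per-thread attributes that get reset between requests. The real implementation lives in ActiveSupport (and Rails resets it automatically around each request and job); the attribute names here are just examples.

```ruby
# A toy stand-in for ActiveSupport::CurrentAttributes: attributes are
# stored per-thread, so concurrent requests never see each other's data.
class Current
  ATTRS = [:user, :request_id].freeze

  class << self
    ATTRS.each do |name|
      define_method(name) { store[name] }
      define_method("#{name}=") { |value| store[name] = value }
    end

    # Rails calls the real reset automatically around each request/job.
    def reset
      store.clear
    end

    private

    def store
      Thread.current[:toy_current_attributes] ||= {}
    end
  end
end

Current.user = "alice"
Current.user   # => "alice"
Current.reset
Current.user   # => nil
```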

Making Tables Work with Turbo

https://www.guillermoaguirre.dev/articles/making-tables-work-with-turbo

This article shows how to work around some gotchas that come up when using Turbo with HTML tables. The issue is that regardless of how you structure your markup, the HTML parser will relocate or un-nest certain elements, such as a <form> placed directly inside a <table>.

Some of the key takeaways:
- take advantage of the dom_id helper
- stream to an ID on a standard tag, without needing a turbo_frame_tag
- submit a form remotely by pointing the submit button's form attribute at the form's ID
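
Here's a rough sketch of how those three pieces can fit together; every name below (invoice, the partial paths) is illustrative rather than taken from the article.

```erb
<%# 1. Give each row a stable, reproducible id with dom_id: %>
<tr id="<%= dom_id(invoice) %>">
  <td><%= invoice.number %></td>
  <td>
    <%# 3. The button submits a form that lives elsewhere on the page,
        linked via the standard HTML form attribute, so the parser
        never sees a <form> nested directly inside the table. %>
    <button type="submit" form="<%= dom_id(invoice, :edit) %>">Save</button>
  </td>
</tr>

<%# The form itself sits outside the <table>: %>
<%= form_with model: invoice, id: dom_id(invoice, :edit) do |form| %>
  <%# hidden fields, etc.; visible inputs inside table cells can point
      here with the same form="..." attribute %>
<% end %>

<%# 2. A Turbo Stream can target the row's plain id directly,
    no turbo_frame_tag required: %>
<%= turbo_stream.replace dom_id(invoice) do %>
  <%= render "invoices/row", invoice: invoice %>
<% end %>
```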

Instantiate a custom Rails FormBuilder without using form_with

https://justin.searls.co/posts/instantiate-a-custom-rails-formbuilder-without-using-form_with/

I was working with a partial that I wanted to stream to the page with Hotwire. The partial renders form fields that get streamed inline into a form already on the page. The form field helpers need a Rails FormBuilder instance to render properly, but in the controller action where I'm streaming the partial as the response, I don't have access to the relevant form object. So, rendering errored.

Then I found this article from Justin Searls describing a FauxFormObject which can help get around this issue. I made it available as a helper, referenced the instance in the partial, made sure the form element name followed convention, and then all was working.

GitHub - thoughtbot/hotwire-example-template: A collection of branches that transmit HTML over the wire.

https://github.com/thoughtbot/hotwire-example-template

This is a GitHub repo I've seen recommended by GoRails for learning and experimenting with different Hotwire features and concepts. Each branch is an example with working code to go through.

I learned about this repo from one of the GoRails videos.

The most underrated Rails helper: dom_id

https://boringrails.com/articles/rails-dom-id-the-most-underrated-helper/

The dom_id helper allows you to create reproducible but reasonably unique identifiers for DOM elements, which is useful for things like anchor links, Turbo Frames, and Turbo Stream targets.
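
To show the shape of what it produces, here's a plain-Ruby approximation. The real helper is ActionView::RecordIdentifier#dom_id, which derives the name from the model's model_name rather than the raw class name; User here is just a stand-in.

```ruby
# Approximation of dom_id's output: "<model>_<id>" for persisted
# records, "new_<model>" for unsaved ones, with an optional prefix.
def dom_id(record, prefix = nil)
  model = record.class.name.downcase
  base  = record.id ? "#{model}_#{record.id}" : "new_#{model}"
  prefix ? "#{prefix}_#{base}" : base
end

# A stand-in record with just an id attribute.
User = Struct.new(:id)

dom_id(User.new(42))         # => "user_42"
dom_id(User.new(nil))        # => "new_user"
dom_id(User.new(42), :edit)  # => "edit_user_42"
```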

elvinaspredkelis/signed_params: A lightweight library for encoding/decoding Rails request parameters

https://github.com/elvinaspredkelis/signed_params

This is a clever idea for a small gem: allow request query parameters to be encoded and decoded automatically in the controller, to avoid exposing certain kinds of information in URLs. Under the hood it uses Rails’ ActiveSupport::MessageVerifier.

I learned about this from Kasper Timm Hansen.

Why You Need Strong Parameters in Rails

https://www.writesoftwarewell.com/why-use-strong-parameters-in-rails/

This includes an interesting bit of history about a GitHub hack that inspired the need for strong parameters in Rails.

If you want to see a real-world example, in 2012 GitHub was compromised by this vulnerability. A GitHub user used mass assignment that gave him administrator privileges to none other than the Ruby on Rails project.

The article goes on to demonstrate the basics of using strong params. It even shows off a new-to-me expect method, added in Rails 8, that is more ergonomic than the require/permit syntax.

# require/permit
user_params = params.require(:user).permit(:name, :location)

# expect
user_params = params.expect(user: [:name, :location])

Calling private methods without losing sleep at night

https://justin.searls.co/posts/calling-private-methods-without-losing-sleep-at-night/

A little thing I tend to do whenever I make a dangerous assumption is to find a way to pull forward the risk of that assumption being violated as early as possible.

Tests are one way we do this, but tests aren’t well-suited to all the kinds of assumptions we make about our software systems.

We assume our software doesn’t have critical vulnerabilities, but we have a pre-deploy CI check (via brakeman) that alerts us when that assumption is violated and CVEs do exist.

Or as Justin describes in this post, we can have some invariants in our Rails initializer code to draw our attention to other kinds of assumptions.

The secret to perfectly calculate Rails database connection pool size

https://island94.org/2024/09/secret-to-rails-database-connection-pool-size

tl;dr: don't try to compute the perfect pool size; just set it to a big number. The pool size is only a maximum that Rails enforces. What actually matters is the number of connections available at the database, which is a separate concern.

If, rather, you're running out of connections at the database, then try things like:
- reduce the number of Puma threads
- reduce background job threads (e.g. via GoodJob, Solid Queue, etc.)
- "Configure anything else using a background thread making database queries"
- among others

Or increase the number of connections available at the database with a tool like PgBouncer.
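
In config terms, the advice boils down to something like this (values illustrative), instead of carefully matching the pool to your thread count:

```yaml
# config/database.yml
production:
  adapter: postgresql
  # The conventional, carefully-computed default looks like:
  #   pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  # The article's advice: just make the maximum comfortably large.
  # Rails only opens connections as threads actually check them out,
  # so an unused allowance costs nothing by itself.
  pool: 100
```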

This post was written by the person who created GoodJob.

Data migrations with the `maintenance_tasks` gem

https://railsatscale.com/2023-01-04-how-we-scaled-maintenance-tasks-to-shopify-s-core-monolith/article.html

The maintenance_tasks gem from Shopify is a mountable Rails engine for running one-off data migrations from a UI that is separate from the schema migration lifecycle.

In the past, I've used the after_party gem for this use case. That gem typically runs data migration tasks as part of a post-deploy process, with the same up/down distinction as schema migrations.

The big differences with maintenance_tasks seem to be that tasks are managed from a UI and that there are many more features, such as batching, pausing, and rerunning. You can observe the progress of these tasks from the UI as well.

There is a Go Rails Episode about how to use the maintenance_tasks gem.

Built-in Rails Database Rake Tasks

https://github.com/rails/rails/blob/1dd82aba340e8a86799bd97fe5ff2644c6972f9f/activerecord/lib/active_record/railties/databases.rake

It's cool to read through the internals of different rake tasks that are available for interacting with a Rails database and database migrations.

For instance, you can see how db:migrate works:

  desc "Migrate the database (options: VERSION=x, VERBOSE=false, SCOPE=blog)."
  task migrate: :load_config do
    ActiveRecord::Tasks::DatabaseTasks.migrate_all
    db_namespace["_dump"].invoke
  end

First, it runs your pending migrations. Then it invokes _dump, an internal task that regenerates your schema.rb (or structure.sql) from the latest DB schema changes.

Rails Database Migrations Best Practices

https://www.fastruby.io/blog/db-migrations-best-practices.html

Meant to be deleted

I love this idea for a custom rake task (rails db:migrate:archive) to occasionally archive past migration files.

# lib/tasks/migration_archive.rake
namespace :db do
  namespace :migrate do
    desc 'Archives old DB migration files'
    task :archive do
      sh 'mkdir -p db/migrate/archive'
      sh 'mv db/migrate/*.rb db/migrate/archive'
    end
  end
end

That way you still have access to them as development artifacts. Meanwhile you remove the migration clutter and communicate a reliance on the schema file for standing up fresh database instances (in dev/test/staging).

Data migrations

The article doesn't go into much detail about data migrations. It's hard to prescribe a one-size-fits-all approach: sometimes the easiest thing is to embed a bit of data manipulation in a standard schema migration, sometimes you want to manually run a SQL file against each database, and sometimes you want a dedicated process with a tool like the after_party gem.

Reversible migrations

For standard migrations, it is great to rely on the change method to ensure migrations are reversible. It's important to recognize which kinds of migrations are and aren't automatically reversible. Sometimes we need to write raw SQL, and for that we'll want explicit up and down methods.

Rails Controller Testing: `assigns()` and `assert_template()` removed in Rails 5

https://github.com/rails/rails/issues/18950

Testing what instance variables are set by your controller is a bad idea. That's grossly overstepping the boundaries of what the test should know about. You can test what cookies are set, what HTTP code is returned, how the view looks, or what mutations happened to the DB, but testing the innards of the controller is just not a good idea.

If you still want to be able to do this kind of thing in your controller or request specs, you can add the functionality back with rails-controller-testing.

Moving on from React, a year later

https://kellysutton.com/2025/01/18/moving-on-from-react-a-year-later.html

"One of the many ways this matters is through testing. Since switching away from React, I’ve noticed that much more of our application becomes reliably-testable. Our Capybara-powered system specs provide excellent integration coverage."

"When we view the lines of code as a liability, we arrive at the following operating model: What is the least amount of code we can write and maintain to deliver value to customers?"

Not all lines of code are equal; some cost more than others to write and maintain ("carrying cost"), and some carry a higher regression risk over time than others.

"When thinking about the carrying cost of different lines of code, maintaining different levels of robust tests reduces the maintenance fees I must pay. So, increasing my more-difficult-to-test lines of code is more expensive than increasing my easier-to-test lines of code."

Language choice, insofar as it relates to testability, is the lens here. What other facets of code increase or decrease their "carrying cost"?