Blogmarks
Software lessons from Factorio
https://www.linkedin.com/posts/hillel-wayne_factorio-activity-7282805593428402176-xB8j/
"Scalability and efficiency are fundamentally at odds. Software that maximizes the value of the provided resources will be much harder to scale up, and software that scales up well will necessarily be wasteful."
Good, Fast, Cheap: Pick 3 or Get None
https://loup-vaillant.fr/articles/good-fast-cheap
"To get closer to the simplest solution, John Ousterhout recommends, you should design it twice. Try a couple different solutions, see which is simplest. Which by the way may help you think of another, even simpler solution."
Moving on from React, a year later
https://kellysutton.com/2025/01/18/moving-on-from-react-a-year-later.html
"One of the many ways this matters is through testing. Since switching away from React, I’ve noticed that much more of our application becomes reliably-testable. Our Capybara-powered system specs provide excellent integration coverage."
"When we view the lines of code as a liability, we arrive at the following operating model: What is the least amount of code we can write and maintain to deliver value to customers?"
Not all lines of code are equal: some cost more than others to write and to maintain (their "carrying cost"), and some have a higher regression risk over time than others.
"When thinking about the carrying cost of different lines of code, maintaining different levels of robust tests reduces the maintenance fees I must pay. So, increasing my more-difficult-to-test lines of code is more expensive than increasing my easier-to-test lines of code."
Language, inasmuch as it relates to testability, is the metric of focus here. What other facets of code increase or decrease its "carrying cost"?
Complexity Has to Live Somewhere
https://ferd.ca/complexity-has-to-live-somewhere.html
"Complexity has to live somewhere. If you are lucky, it lives in well-defined places... You give it a place without trying to hide all of it. You create ways to manage it. You know where to go to meet it when you need it."
It is useful for complexity to be abstracted away when it would otherwise detract from or complicate the task at hand. Then there are times when you need to interact with the complexity directly. Those tasks are best served by having a well-defined place where you know you can meet the complexity.
Regarding abstractions, I once heard something along the lines of, "a good abstraction is one that allows you to safely make assumptions about how something will work." In other words, with a good abstraction you don't have to reconfirm a litany of details, but can, for standard scenarios, make reasonable assumptions that save you time and mental overhead.
Putting it all together, good abstractions allow for beneficial assumptions, but when those assumptions aren't going to hold up, we ought to have a well-defined place to go wrangle with the complexity.
I came across this article while reading The Essence of Successful Abstractions — Sympolymathesy, by Chris Krycho.
Speed matters: Why working quickly is more important than it seems
https://jsomers.net/blog/speed-matters
This blog post is the best thing I've read "in defense of getting good at Vim". Sure, the learning curve is steep and it can require a lot of configuration and memorization, but that is all in exchange for shrinking the time between having a thought and executing it on the computer.
The obvious benefit to working quickly is that you'll finish more stuff per unit time. But there's more to it than that. If you work quickly, the cost of doing something new will seem lower in your mind. So you'll be inclined to do more.
The general rule seems to be: systems which eat items quickly are fed more items. Slow systems starve.
When you're fast, you can quickly play with new ideas.
Part of the activation energy required to start any task comes from the picture you get in your head when you imagine doing it.
Best place to learn to use PostgreSQL
https://www.reddit.com/r/PostgreSQL/comments/1i84wtv/best_place_to_learn_to_use_postgresql/
A summary of the resources mentioned:
The Cost of Going it Alone
https://blogs.gnome.org/bolsh/2011/09/01/the-cost-of-going-it-alone/
A couple historical lessons learned about using, building on, and contributing to free (open source) software.
- The longer your changes are off the main branch, the harder and more expensive they are going to be to integrate. This is true of a feature branch on your own software project and of "out of tree" changes to a large open-source project like Linux.
- If a major dependency of your business is an open-source software project (e.g. Postgres, Ruby, Linux, etc.), you should probably be employing a contributor who has a strong relationship with that project. E.g. the various companies that have employed Aaron Patterson to work on Ruby.
Regarding the second bullet point, another common pattern nowadays is for maintainers of important (but much smaller than, say, Linux-sized) open-source projects to crowd-fund their work from companies and individuals via GitHub Sponsors.
Shared by Jeremy Schneider on LinkedIn.
Tampopo | IMDB
https://www.imdb.com/title/tt0092048/
Recommended to me as a good, post-Fordist Japanese movie.
You Probably Don't Need Query Builders
https://mattrighetti.com/2025/01/20/you-dont-need-sql-builders
The tl;dr of this article is that you can avoid a bunch of ORM/query-building and extraneous app logic by leaning on the expressiveness and capability of SQL.
The query-building example in this post is a good illustration of why `where 1 = 1` shows up in some SQL queries, usually in the logs from an ORM.
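To make the trick concrete, here is a minimal sketch of the kind of SQL a builder ends up generating; the `users` table and its columns are made up for illustration:
-- Without a base predicate, the first filter needs WHERE and later ones need AND,
-- so the builder has to special-case whichever condition comes first.
-- Starting from "where 1 = 1", every optional filter is appended the same way:
select * from users where 1 = 1;
select * from users where 1 = 1 and status = 'active';
select * from users where 1 = 1 and status = 'active' and age >= 21;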
Interesting: one example uses Postgres' `cardinality(some_array) = 0` to check whether an array is empty. For a one-dimensional array, `cardinality` is a bit more straightforward than `array_length` and takes only one argument. Of further note, `cardinality` counts the total number of elements in an array regardless of how many dimensions it has.
> select cardinality(array[1,2]);
+-------------+
| cardinality |
|-------------|
| 2           |
+-------------+
> select cardinality(array[[1,2], [3,4]]);
+-------------+
| cardinality |
|-------------|
| 4           |
+-------------+
> select cardinality(array[[[1,2,3], [4,5,6], [7,8,9]]]);
+-------------+
| cardinality |
|-------------|
| 9           |
+-------------+
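For comparison (this bit is mine, not from the article): `array_length` needs a dimension argument, and for an empty array it returns null rather than 0, which is part of why `cardinality(some_array) = 0` is the simpler emptiness check. Same client-style output as above, with null displayed as <null>:
> select array_length(array[1,2], 1);
+--------------+
| array_length |
|--------------|
| 2            |
+--------------+
> select array_length(array[]::int[], 1);
+--------------+
| array_length |
|--------------|
| <null>       |
+--------------+
> select cardinality(array[]::int[]);
+-------------+
| cardinality |
|-------------|
| 0           |
+-------------+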
Rails Controller Testing: `assigns()` and `assert_template()` removed in Rails 5
https://github.com/rails/rails/issues/18950
Issue #18950 · rails/rails: Deprecate assigns() and assert_template in controller testing
"Testing what instance variables are set by your controller is a bad idea. That's grossly overstepping the boundaries of what the test should know about. You can test what cookies are set, what HTTP code is returned, how the view looks, or what mutations happened to the DB, but testing the innards of the controller is just not a good idea."
If you still want to be able to do this kind of thing in your controller or request specs, you can add the functionality back with `rails-controller-testing`.
Email Regexp is 23k
https://code.iamcal.com/php/rfc822/full_regexp.txt
I prefer something dumber like `/\S+@\S+\.\S+/`, but I guess someone has to be thorough.
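For fun, the dumb version can even be tried out right in Postgres (keeping with the SQL theme above), since its POSIX regex operator `~` understands the `\S` class-shorthand escape; the addresses below are made up:
-- ~ is Postgres' case-sensitive POSIX regex match operator; it returns a boolean.
select 'jane@example.com' ~ '\S+@\S+\.\S+';  -- true
select 'not an email'     ~ '\S+@\S+\.\S+';  -- false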
Shared by Sam Rose