Blogmarks
Random PostgreSQL Scripts
https://github.com/lesovsky/uber-scripts
- uber-scripts - https://github.com/lesovsky/uber-scripts
- postgres_dba - https://github.com/NikolayS/postgres_dba
- HandySQL - https://github.com/davestokes/HandySQL
Wait a minute! — PostgreSQL extension pg_wait_sampling
https://andyatkinson.com/blog/2024/07/23/postgresql-extension-pg_wait_sampling
The pg_wait_sampling extension is a handy companion to pg_stat_statements and pg_locks, providing historical samplings of wait events. This helps with tracking down queries that block one another, causing contention in your database.
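A minimal sketch of querying it, assuming the extension is installed on your server (view names per the extension's README):
CREATE EXTENSION pg_wait_sampling;
-- Instantaneous wait events across all backends
SELECT * FROM pg_wait_sampling_current;
-- Which wait events have accumulated the most samples
SELECT event_type, event, sum(count) AS samples
FROM pg_wait_sampling_profile
GROUP BY event_type, event
ORDER BY samples DESC
LIMIT 10;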
Things That Aren’t Doing The Thing
https://strangestloop.io/essays/things-that-arent-doing-the-thing
The only thing that is doing the thing is doing the thing.
Horseless intelligence
https://nedbatchelder.com/blog/202503/horseless_intelligence.html
My advice about using AI is simple: use AI as an assistant, not an expert, and use it judiciously. Some people will object, “but AI can be wrong!” Yes, and so can the internet in general, but no one now recommends avoiding online resources because they can be wrong. They recommend taking it all with a grain of salt and being careful. That’s what you should do with AI help as well.
Skeptics of the uses of LLMs typically point to strawman arguments and gotchas to wholesale discredit LLMs*. They're either clinging too tightly to their bias against these tools or completely missing the point. These tools are immensely useful. They aren't magic boxes though, despite what the hypemen might want you to believe. If you take a little extra effort to use them well, they are a versatile swiss army knife to add to your software tool belt.
"Use [these LLMs] as an assistant, not an expert."
*There are soooo many criticisms and concerns we can and should raise about LLMs. Let's have those conversations. But if all you're doing is saying, "Look, I asked it an obvious question that I know the answer to and it got it WRONG! Ha, see, this AI stuff is a joke," then the only thing you're adding to the dialogue is concern for your critical thinking skills.
If you approach AI thinking that it will hallucinate and be wrong, and then discard it as soon as it does, you are falling victim to confirmation bias. Yes, AI will be wrong sometimes. That doesn’t mean it is useless. It means you have to use it carefully.
Use it carefully, and use it for what it is good at.
For instance, it can scaffold a 98% solution to a bash script that does something really useful for me -- something that might have taken me an hour to write myself, or that I would have spent just as much time doing manually.
Another instance: I'm about to write a couple lines of code to do X. I've done something like X before, and I have an idea of how to approach it. A habit I've developed that enriches my programming life is to prompt Claude or Cursor for a couple approaches to X. Then I see how those compare to what I was about to do.
There are typically a few valuable things I get from doing this:
- The effort of putting thoughts into words to make the prompt clarifies my thinking about the problem itself. Maybe I notice an aspect of it I hadn't before. Maybe I have a few sentences that I can repurpose as part of a PR description later.
- The LLM suggests approaches I hadn't considered. Maybe it suggests a command, function, or language feature I don't know much about. I go look those things up and learn something I wouldn't have otherwise encountered.
- The LLM often draws my attention to edge cases and other considerations I hadn't thought of when initially thinking through a solution. This leads me to develop more complete and robust solutions.
I’ve used AI to help me write code when I didn’t know how to get started because it needed more research than I could afford at the moment. The AI didn’t produce finished code, but it got me going in the right direction, and iterating with it got me to working code.
Sometimes you're staring at a blank page. The LLM can be the first one to toss out an idea to get things moving. It can get you to that first prototype that you throw away once you've wrapped your head around the problem space.
Your workflow probably has steps where AI can help you. It’s not a magic bullet, it’s a tool that you have to learn how to use.
This reiterates points I made above. Approach LLMs with an open and critical mind, and give them a chance to see where they can fit into your workflow.
Git Pulled What???
https://frankwiles.com/posts/two-handy-git-aliases/
Usually when I'm doing a git command that involves a ref of the form @{1}, it is because I'm targeting a specific entry in the reflog that I want to restore my state to.
Frank points out another use for these right after pulling down the latest changes from a remote branch.
This will display all the commits that were pulled in from the latest pull (e.g. git pull --rebase):
$ git log @{1}..
And this will show a diff of all the changes between where you just were and what was just pulled in:
$ git diff @{1}..
One thing to keep in mind is to be sure that you are using @{1} immediately after you pull. Any other things you might do, such as changing branches, will put another entry on the reflog, making @{1} no longer reference your pre-pull state.
In that case, you'd need to do git reflog and find the entry that corresponds to right before you pulled.
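For example (the hashes and entries here are illustrative), if you created a branch right after pulling:
$ git reflog
a1b2c3d HEAD@{0}: checkout: moving from main to feature
a1b2c3d HEAD@{1}: pull: Fast-forward
5c4b3a2 HEAD@{2}: commit: wrap up local work
HEAD@{2} is the pre-pull state in this case, so:
$ git log HEAD@{2}..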
There is no Vibe Engineering
https://serce.me/posts/2025-31-03-there-is-no-vibe-engineering
Software engineering is programming integrated over time. The integrated over time part is crucial. It highlights that software engineering isn't simply writing a functioning program but building a system that successfully serves the needs, can scale to the demand, and is able to evolve over its complete lifespan.
LLMs generate point-in-time code artifacts. This is only a small part of engineering software systems over time.
Vibe Coding as a practice is here to stay. It works, and it solves real-world problems – getting you from zero to a working prototype in hours. Yet, at the moment, it isn’t suitable for building production-grade software.
List of all ShellCheck Rules
https://www.shellcheck.net/wiki/
This is the root wiki page for ShellCheck, which lists all of the rules that it enforces when checking your scripts.
Each rule links to a page that describes the issue, shows you an example of the problematic code, and a corrected version.
E.g. SC1003 (Want to escape a single quote? echo 'This is how it'\''s done.') shows:
Problematic code:
echo 'this is not how it\'s done'
Corrected code:
echo 'this is how it'\''s done'
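Running ShellCheck against a script containing the problematic line points you straight at the rule (the output below is approximate):
$ shellcheck myscript.sh

In myscript.sh line 3:
echo 'this is not how it\'s done'
     ^-- SC1003: Want to escape a single quote? echo 'This is how it'\''s done'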
Programming is about mental stack management
https://justin.searls.co/posts/programming-is-about-mental-stack-management/
Humans, like LLMs, have a context window too.
Fun fact: humans are basically the same. Harder problems demand higher mental capacity. When you can't hold everything in your head at once, you can't consider every what-if and, in turn, won't be able to preempt would-be issues. Multi-faceted tasks also require clear focus to doggedly pursue the problem that needs to be solved, as distractions will deplete one's cognitive ability.
What usually happens when we get too much context for the problem at hand is that we start to lose track of details; we get distracted, maybe even irritable. The task and the time it takes to do it balloon. We might even forget why we started down this path in the first place.
As I was reading Justin’s example of the mental task stack he ended up deep in, I thought it sounded farfetched. “No one is going to do all these unnecessary things when they are just trying to add a route.” But then I remembered ALL the times that I get six steps removed from what I set out to do because of a series of unexpected mishaps and several “I can’t help myself”s.
Chesterton's Fence: Understanding past decisions
https://thoughtbot.com/blog/chestertons-fence
There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
I’ve often seen this concept used to talk about rashly throwing things away in a software development context. Anything from a misunderstood conditional check to an entire piece of infrastructure. “This doesn’t make sense, let’s refactor it.” Make it make sense first!
This is part of why I’m very cautious when it comes to refactoring legacy code. I’d much rather resist the temptation of a few cathartic refactors while fixing some bug than risk the pain of later finding out I broke a narrow use case.
It’s just as applicable to any setting/field where a (over-)confident person sees a policy, practice, or even something physical like a fence and is ready to get rid of it without understanding why it is there.
Hermitage (GitHub) — Test Suite for DB Isolation Levels
https://github.com/ept/hermitage
Isolation looks a little different in every database system. This is a test suite for many popular databases to demonstrate what they mean by each of their isolation levels.
Isolation is the I in ACID, and it describes how a database protects an application from concurrency problems (race conditions). If you read a traditional database theory textbook, it will tell you that isolation is supposed to mean serializability, i.e. you can pretend that transactions are executed one after another, and concurrency problems do not happen. However, if you look at the implementations of isolation in practice, you see that serializability is rarely used, and some popular databases (such as Oracle) don't even implement it.
Conway's Law
https://martinfowler.com/bliki/ConwaysLaw.html
Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.
Or put another way as I’ve also seen this concept described:
Organizations design systems that mirror their own communication structures.
Fowler gives a concise example of what this can look like.
if a single team writes a compiler, it will be a one-pass compiler, but if the team is divided into two, then it will be a two-pass compiler.
It’s not that there is a wrong or right way for an organization to communicate, but rather that an organization should ensure there isn’t dissonance between communication structures and system design.
We often see how inattention to the law can twist system architectures. If an architecture is designed at odds with the development organization's structure, then tensions appear in the software structure. Module interactions that were designed to be straightforward become complicated, because the teams responsible for them don't work together well.
I was reminded of Conway’s Law because it was mentioned in Pierre’s new landing page.
Nuclear Daiquiri 🍹
https://www.reddit.com/r/cocktails/s/DqkqxTJnmg
A riff on the classic daiquiri that incorporates Falernum and Green Chartreuse. I’ve made this at home once but we subbed (Faccia Brutto) Centerbe for the Green Chartreuse.
The chartreuse gives it a green hue, hence the nuclear part.
- 1oz Wray and Nephew Overproof Rum
- 1oz lime juice
- .75oz green chartreuse
- .25oz falernum
Shake with ice, double strain into a coupe.
Semantic Diffusion
https://martinfowler.com/bliki/SemanticDiffusion.html
I learned this term just now from Simon Willison who quoted the following from Martin Fowler:
Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition. This weakening risks losing the definition entirely - and with it any usefulness to the term.
While Simon is lamenting the diffusion of the meaning of Vibe Coding, Martin, back in 2006, was seeing this happen with terms like Agile and Web2.0.
Semantic diffusion is essentially a succession of the telephone game where a different group of people to the originators of a term start talking about it without being careful about following the original definition. These people are listened to by a further group which then goes on to add their own distortions. After a few of these hand-offs it's easy to lose a lot of the key meaning of the term unless you make the point of going back to the originators. It's ironic that it's popular terms that tend to suffer from this the most. That's inevitable, of course, since unpopular terms have less people to create the telephone chains.
Hope is not lost though.
So terms do recover their semantic integrity and the current diffusion doesn't inevitably mean the terms will lose their meaning… A final comforting thought is that once the equally inevitable backlash comes we get a refocusing on the original meaning.
So once the hype dies down, the broader understanding of vibe coding may settle back on the kind of coding:
where you fully give in to the vibes, embrace exponentials, and forget that the code even exists… It's not too bad for throwaway weekend projects.
Next.js vs TanStack
https://www.kylegill.com/essays/next-vs-tanstack/
I’ve only heard good things about TanStack (Start) and have been wanting to try it. This post may be the encouragement I need.
With Next.js, a combination of the push for the App router and the move to RSC shattered the original simplicity of React.
The app router is riddled with footguns and new APIs, unrelated to React, but sometimes blurring the line with it. It’s hard to know when Next.js begins, and React ends.
Writing Beyond the Academy
https://www.youtube.com/watch?v=aFwVf5a3pZM
You should strive for your writing to be four things:
- Clear
- Organized
- Persuasive
- Valuable
And it is valuable that is the most important one.
Anytime you want to apply any sort of rule to your writing, you should be asking, "for which readers, and what purpose?"
- "Don't use jargon" ... maybe (probably not), but if so, be able to answer, "for which readers and what purpose?"
- "Use short sentences." ... well, for who and why?
No advice about writing makes any sense unless you've clarified who is reading and for what function.
Claim: the function of the text is to change the way the readers think about the world. You are a person with expertise, who spends time thinking and writing nuanced, complex, interesting, deep things in your area and how it relates to the world. What you write, the text, is the reader's experience of that. Whether it is valuable to them depends on whether what you have written valuably changes what they think about the world (or what they do out in the world/in their job/in their life, or how they make decisions out in the ..., etc.).
Or to change the perspective on that: good writing is something that people will seek out (pay for even) because they want to read something that will change the way they think about the world.
Readers need you to make your text valuable to them. Instead of "does your writing check these boxes and follow these rules?" it is "does your writing provide value that readers are seeking?".
Larry strongly emphasizes the point that teachers throughout all of your years of education read what you had to write not because the writing was valuable to them, but because they were paid to read what you wrote and evaluate you for it. He goes on to say that this will never happen again after school; no one will be paid to read what you write. This isn't quite right though. There are a lot of jobs where a person doesn't have to be good at writing because everyone on their team (or on the project or whatever) is expected (and paid) to read it -- a report, documentation, a plan for a project, an email, etc. These things can be as well-written or poorly-written as you can imagine and we have to read them. Well-run and poorly-run meetings function the same way.
"the language game"
Why do people read something like the NYT? To be informed...maybe. But also, perhaps primarily, it is to be entertained. The editors at the NYT understand that.
You have to think about this holistically, which means you have to think about the reader, how they are encountering the text, and what their experience of it is. Once you are looking at it through that lens, you can notice the ways in which the medium gets in the way of the reader or enables the reader... to get the value they are seeking.
In what ways can the medium (text, content, whatever) interfere with the process of consumption (reading, viewing, whatever)?
Your job, as a writer, is to make sure the process of reading is valuable for the reader... all the time.
The importance of knowing your audience is knowing what they'll endure to get to the valuable part. If, in the case of the average NYT reader, it is "not much" that they will endure, then you need to provide constant value (whether that is quick-hitting sentences OR long sentences with value laced within). What you cannot do in that case is make them wait until the end to get the value, because they won't get there. You'll lose them before that. They quit, they read something else. That might work in an academic paper where the reader is willing to wait until the end of a dense paragraph, but that is a different language game.
did u ever read so hard u accidentally wrote?
https://blog.danslimmon.com/2025/03/14/did-u-ever-read-so-hard-u-accidentally-wrote/
Reads (selects) won’t modify rows, but because of the internal bookkeeping that Postgres does, they can alter page metadata that eventually has to be written out to the data files, and even to the WAL. So in that sense, reads can result in writes.
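One way to see this for yourself (a sketch; the exact numbers will vary): bulk-load a table, then watch the very first read dirty the pages it touches as it sets hint bits.
CREATE TABLE t AS SELECT generate_series(1, 100000) AS id;
-- First read after the load: look for "dirtied" in the buffers line.
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM t;
-- Buffers: shared hit=... dirtied=...  <- the SELECT wrote to pages it only read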
This is such a relatable conclusion that echoes many of my experiences chasing down odd production bugs:
Ops is like this a lot of the time. Once you get a working fix, you move on to whatever’s the next biggest source of anxiety. Sometimes you never get a fully satisfying “why.” But you can still love the chase.
Also, yes, Cybertec is always on top of it. One of the best in-depth Postgres blogs out there.
So I resort to Googling around for a while. I eventually land on this Cybertec blog post (there’s always a Cybertec post. God bless ’em), which demystifies shared buffers for me.
Claude Code - Anthropic
https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview
I'm giving Claude Code a try. It is a terminal-based LLM agent that can iterate on software tasks with a human in the loop to prompt a task description, confirm or abort individual actions, and guide the process.
Claude Code does usage-based metering. As I'm currently looking at the "Buy Credits" page, there is an initial billing limit of $100 per month:
All new accounts have a monthly limit of $100 credits/month. This limit increases with usage over time.
After purchasing credits, I'm presented with a hero section in large font that says:
Build something great
Once logged in to Claude Code in the terminal, I am first shown the following security notes:
Security notes:
1. Claude Code is currently in research preview
This beta version may have limitations or unexpected behaviors.
Run /bug at any time to report issues.
2. Claude can make mistakes
You should always review Claude's responses, especially when
running code.
3. Due to prompt injection risks, only use it with code you trust
For more details see:
https://docs.anthropic.com/s/claude-code-security
Why You Need Strong Parameters in Rails
https://www.writesoftwarewell.com/why-use-strong-parameters-in-rails/
This includes an interesting bit of history about a GitHub hack that inspired the need for strong parameters in Rails.
If you want to see a real-world example, in 2012 GitHub was compromised by this vulnerability. A GitHub user used mass assignment that gave him administrator privileges to none other than the Ruby on Rails project.
The article goes on to demonstrate the basics of using strong params. It even shows off a new-to-me expect method, added in Rails 8, that is more ergonomic than the require/permit syntax.
# require/permit
user_params = params.require(:user).permit(:name, :location)
# expect
user_params = params.expect(user: [:name, :location])
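In a controller, that might look something like this (a minimal sketch; expect raises ActionController::ParameterMissing, rendered as a 400, when the user key is absent):
class UsersController < ApplicationController
  def create
    user = User.create!(user_params)
    redirect_to user
  end

  private

  # Filters out everything except :name and :location under :user.
  def user_params
    params.expect(user: [:name, :location])
  end
end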
Quoting Dave Rupert
https://daverupert.com/2025/03/enshittification-has-a-flavor/
Going forward, I think taste and style are more valuable than ever before. In an era where we’re able to rapidly generate cheap low quality content or software at a scale we’ve never seen before, we will need people with taste in the mix.
Release and EoL calendars for Amazon RDS for PostgreSQL
https://docs.aws.amazon.com/AmazonRDS/latest/PostgreSQLReleaseNotes/postgresql-release-calendar.html
AWS maintains a nice table of both the Community End of Life and RDS End of Standard Support dates for major versions of PostgreSQL. I had trouble finding this info all in a single place elsewhere.
PostgreSQL has an official page documenting their versioning policy and when the first and last release dates of each major version were. This doesn't include community end of life dates though.
The 70% problem: Hard truths about AI-assisted coding
https://addyo.substack.com/p/the-70-problem-hard-truths-about
LLMs are no substitute for the hard-won expertise of years of building software, working within software teams, and evolving systems. You can squeeze the most out of iterations with a coding LLM by bringing that experience to every step of the conversation.
In other words, they're applying years of hard-won engineering wisdom to shape and constrain the AI's output. The AI is accelerating their implementation, but their expertise is what keeps the code maintainable.
The 70% problem
A tweet that recently caught my eye perfectly captures what I've been observing in the field: Non-engineers using AI for coding find themselves hitting a frustrating wall. They can get 70% of the way there surprisingly quickly, but that final 30% becomes an exercise in diminishing returns.
Addy goes on to describe this "two steps back pattern" where a developer using an LLM encounters an error, they ask the LLM to suggest a fix, the fix sorta works but two other issues crop up, and repeat.
This cycle is particularly painful for non-engineers because they lack the mental models to understand what's actually going wrong. When an experienced developer encounters a bug, they can reason about potential causes and solutions based on years of pattern recognition.
Beyond having the general programming and debugging experience to expedite this cycle, there is also an LLM intuition to be developed. I remember John Lindquist describing that he notices certain "smells" when working with LLMs. For instance, often when you're a couple steps into a debugging cycle with an LLM and it starts wanting to go make changes to config files, that is a smell. It's a "smell" because it should catch your attention and scrutiny. A lot of times this means the LLM is way off course and it is now throwing generative spaghetti at the wall. I learned two useful things from John through this:
- You have to spend a lot of time using different models and LLM tools to build up your intuition for these "smells".
- When you notice one of these smells, it's likely that the LLM doesn't have enough or the right context. Abort the conversation, refine the context and prompt, and try again. Or feed what you've tried into another model (perhaps a more powerful reasoning one) and see where that gets you.
Being able to do any of that generally hinges on having already spent many, many years debugging software and having already developed some intuitions for what is a good next step and what is likely heading toward a dead end.
These LLM tools have shown to be super impressive at specific tasks, so it is tempting to generalize their utility to all of software engineering. However, at least for now, we should recognize the specific things they are good at and use them for that:
This "70% problem" suggests that current AI coding tools are best viewed as:
- Prototyping accelerators for experienced developers
- Learning aids for those committed to understanding development
- MVP generators for validating ideas quickly
I'd add to this list:
- Apply context-aware boilerplate autocomplete — establish a pattern in a file/codebase or rely on existing library conventions and a tool like Cursor will often suggest an autocompletion that saves a bunch of tedious typing.
- Scaffold narrow feature slices in a high-convention framework or library — Rails codebases are a great example of this where the ecosystem has developed strong conventions that span files and directories. The LLM can generate 90% of what is needed, following those conventions. By providing specific rules about how you develop in that ecosystem and a tightly defined feature prompt, the LLM will produce a small diff of changes that you can quickly assess and test for correctness. To me this is distinct from the prototyping item suggested by Addy because it is a pattern for working in an existing codebase.
Now you don’t even need code to be a programmer. But you do still need expertise
https://www.theguardian.com/technology/2025/mar/16/ai-software-coding-programmer-expertise-jobs-threat
This quote about Simon is spot on and it is why I recommend his blog whenever I talk to another developer who is worried about LLM/AI advancement.
A leading light in this area is Simon Willison, an uber-geek who has been thinking and experimenting with LLMs ever since their appearance, and has become an indispensable guide for informed analysis of the technology. He has been working with AI co-pilots for ever, and his website is a mine of insights on what he has learned on the way. His detailed guide to how he uses LLMs to help him write code should be required reading for anyone seeking to use the technology as a way of augmenting their own capabilities. And he regularly comes up with fresh perspectives on some of the tired tropes that litter the discourse about AI at the moment.
It is tough to wade through both the hype and the doom while trying to keep tabs on "the latest in AI". Simon has an excitement for this stuff, but it is always balanced, realistic, and thoughtful.
The author then goes on to quote Tim O'Reilly on the subject of "what does this mean for programming jobs?"
As Tim O’Reilly, the veteran observer of the technology industry, puts it, AI will not replace programmers, but it will transform their jobs.
Which complements the sentiment from Laurie Voss' latest post AI's effects on programming jobs, which expects we will see a lot more programming jobs in the wake of an LLM transformation of the industry.
And as my friend Eric suggested, the Jevons Paradox may come into play, where programmers are the "resource" being more efficiently consumed, which will, paradoxically, increase the demand for programmers.
AI's effects on programming jobs
https://seldo.com/posts/ai-effect-on-programming-jobs
I would like to advance a third option, which is that AI will create many, many more programmers, and new programming jobs will look different.
What do we call the emerging type of programming job where a person is instructing or orchestrating AIs and LLMs to do work while not necessarily knowing the lower level details (code)?
Including "AI" in the name feels wrong though, it's got a horseless carriage feel. All programming will involve AI, so including it in the name will be redundant.
The statement “All programming will include AI” caught my attention. It seems like an optional, even niche tool at the moment. The prediction here is that it will become ubiquitous, perhaps to the same degree as using an IDE or auto code formatters.
I find that I, and lots of others pondering the impact of LLMs on software development, want to generalize with one big brush stroke, but I think the reality is going to be closer to what is described here.
I think we will see all three at the same time. Some AI-assisted software development will raise the bar for quality to previously cost-ineffective heights. Some AI-driven software will be the bare minimum, put together by people who could never have successfully written software before. And a great deal will be software of roughly the quality we see already, produced in less time, and in correspondingly greater quantity.
The general sentiment of this post is that there will be more jobs, not fewer. And that the impact on salaries (existing ones at least) won’t be much.
Some words of caution though:
But the adjustment won't be without pain: some shitty software will get shipped before we figure out how to put guardrails around AI-driven development. Some programmers who are currently shipping mediocre software will find themselves replaced by newer, faster, AI-assisted developers before they manage to learn AI tools themselves. Everyone will have a lot of learning to do. But what else is new? Software development has always evolved rapidly. Embrace change, and you'll be fine.
Via Seldo on Bluesky
LLMs amplify existing technical decisions
https://bsky.app/profile/nateberkopec.bsky.social/post/3lkj4kp53gt2p
If you’ve made sustainable decisions and developed good patterns, LLMs can amplify those. If you’ve made poor technical decisions, LLMs will propagate that technical debt.
Technical debt accumulates when people just "glob on" one more thing to an existing bad technical decision.
We chicken out and ship 1 story point, not the 10 it would take to tidy up.
LLMs encourage this even more. See thing, make more of thing. Early choices get copied.
xkcd: Nerd Sniping
https://xkcd.com/356/
I’m not sure that the term “nerd sniping” was coined by XKCD, but this seems like as good a reference as any.
“Mountain” Drill 🎱
https://www.instagram.com/reel/DHOT7GSO9aS/?igsh=emFmZ2I1c3hhajRk
This is a challenging drill for practicing moving the ball across the width of the table using the long rails with small adjustments in english on somewhat steep cut shots.
A pitch for jujutsu
https://lobste.rs/s/ozgd5s/can_we_communally_deprecate_git_checkout#c_icfohk
jujutsu -- a version control system
I think what’s remarkable about Jujutsu is that it makes all those slightly exotic git workflows both normal and easy.
They go on to describe all these common git mechanics that jj makes more accessible.
Jujutsu simplifies the UX around those operations significantly, and makes some changes to the git model to make them more natural. Squash has its own command, splitting a commit into two is a first-class operation, rebase only does what the name suggests it should, and you can go back to any previous commit and edit it directly without any ceremony. Jujutsu also highlights the unique prefix of commit/change IDs in its UI, which is a small UI change but it makes it much easier to directly address change IDs because you only have to type a few characters instead of copying and pasting 40 characters. If you run into a conflict during any of these operations you don’t have to fix it right away like git: they sit in the history and you can deal with them however you want.
And a comment right below this points out that because of jj's interoperability with git, you can colocate to start using jj in an existing git project right away.
My only advice to someone truly curious about jj is to drop into any code base you’re familiar with, jj git init --colocate, and work for week only using jj. It won’t be painful. You’ll occasionally be unfamiliar with how to achieve some workflow with jj, but the docs are good and you’ll figure it out.
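A rough sketch of the day-to-day (these are real jj commands, though the sequence is just my illustration):
$ cd existing-repo
$ jj git init --colocate             # jj and git now share the same repo
$ jj st                              # status of the working-copy change
$ jj describe -m "refactor parser"   # describe the current change
$ jj squash                          # fold the working copy into its parent
$ jj undo                            # undo the last operation if you change your mind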
The Japanese Grammar Index
https://www.tofugu.com/japanese-grammar/
Someone on bluesky recommended this as a good supplement to other Japanese language learning resources.
These hubs connect grammar concepts to give you a deeper understanding of how Japanese works. Learn the ins and outs of Japanese word types, conjugations and forms, and how culture affects communication.
In the same thread, Tae Kim's Guide to Japanese was mentioned as well.
Dan Abramov, the OP, also recommended using Kana on iOS for practicing Hiragana and Katakana.
Understanding the bin, sbin, usr/bin, usr/sbin split
https://lists.busybox.net/pipermail/busybox/2010-December/074114.html
I guess there is a discussion that floats around the internet every once in a while that attempts to backronym the /usr directory as the "unix system resources" directory, when really it is just what Ken Thompson and Dennis Ritchie called the bucket for all user space directories when they mounted a second drive.
The /bin vs /usr/bin split (and all the others) is an artifact of this, a
1970's implementation detail that got carried forward for decades by
bureaucrats who never question why they're doing things.
The whole listserv message is a good history of how these directories came to be and how bureaucracy has propagated that forward.
Squeeze the hell out of the system you have, by Dan Slimmon
https://blog.danslimmon.com/2023/08/11/squeeze-the-hell-out-of-the-system-you-have/
This article is great because it gets at the higher-level thinking that engineering leads and CTOs need to bring to the table when your team is making high-impact technical decisions.
Anyone who has been in the industry a bit can throw around the pithy phrases we use to sway approval toward the decision we're pitching, e.g. "micro-services allow us to use the right tool for the job".
That can be a compelling argument alone if the stakes are low or we're not paying attention.
The higher-level thinking that needs to come in looks beyond the lists of Pros that we can make for any reasonable item that is put forward.
We have to have an understanding of tradeoffs and a more holistic sense of the costs.
But don’t just consider the implementation cost. The real cost of increased complexity – often the much larger cost – is attention.
The attention cost is an ongoing cost.
[Clever solution that adds complexity] complicates every subsequent technical decision.
Squeeze what you can out of the system, buying time, until you have to make a concession to complexity.
When complexity leaps are on the table, there’s usually also an opportunity to squeeze some extra juice out of the system you have.
because we squeezed first, we get to keep working with the most boring system possible.
Here’s how I use LLMs to help me write code
https://simonwillison.net/2025/Mar/11/using-llms-for-code/
There are a bunch of great tips in here for getting better use out of LLMs for code generation and debugging.
The best way to learn LLMs is to play with them. Throwing absurd ideas at them and vibe-coding until they almost sort-of work is a genuinely useful way to accelerate the rate at which you build intuition for what works and what doesn't.
LLMs are no replacement for human intuition and experience. I've spent enough time with GitHub Actions that I know what kind of things to look for, and in this case it was faster for me to step in and finish the project rather than keep on trying to get there with prompts.
GitHub - yamadashy/repomix: 📦 Repomix
https://github.com/yamadashy/repomix
📦 Repomix (formerly Repopack) is a powerful tool that packs your entire repository into a single, AI-friendly file. Perfect for when you need to feed your codebase to Large Language Models (LLMs) or other AI tools like Claude, ChatGPT, DeepSeek, Perplexity, Gemini, Gemma, Llama, Grok, and more.
I used this for the first time to quickly bundle up a Python program into a single file that I could hand to Claude for help with a setup issue.
Assuming I'm already in the directory of the project, I can run:
$ npx repomix
I've been experimenting with mise lately for managing tool versions like node, so I'll use that. Here I ask mise to run npx repomix in the context of Node.js v23:
$ mise exec node@23 -- npx repomix
It spit out a file called repomix-output.txt.
I wanted to drag that file from Finder into the Claude app, so I then ran:
$ open . -a Finder.app
Claude 3.7 Sonnet and Claude Code \ Anthropic
https://www.anthropic.com/news/claude-3-7-sonnet
An AI coding tool that I use directly from the terminal?! 👀
Claude Code is available as a limited research preview, and enables developers to delegate substantial engineering tasks to Claude directly from their terminal.
"thinking tokens"? Does that mean the input and output tokens that are used as part of intermediate, step-by-step "reasoning"?
In both standard and extended thinking modes, Claude 3.7 Sonnet has the same price as its predecessors: $3 per million input tokens and $15 per million output tokens—which includes thinking tokens.
The Claude Code Overview shows how to get started installing and using Claude Code in the terminal.
They all use it, by Thorsten Ball
https://registerspill.thorstenball.com/p/they-all-use-it
Mostly online, but in an occasional real-world conversation someone will be expressing their disinterest and dissatisfaction with LLMs in the realm of software development and they'll say, "I tried it and it just made stuff up. I don't trust it. It will take me less time to build it myself than fix all its mistakes."
My immediate follow-up question is usually "what model / LLM tool did you use and when?" because the answer is often GitHub copilot or some free-tier model from years ago.
But what I want to do is step back here like Thorsten and ask, "Aren't you curious? Don't you want to know how these tools fit into what we do and how they might start to reshape our work?"
What I don’t get it is how you can be a programmer in the year twenty twenty-four and not be the tiniest bit curious about a technology that’s said to be fundamentally changing how we’ll program in the future. Absolutely, yes, that claim sounds ridiculous — but don’t you want to see for yourself?
The job requires constant curiosity, relearning, trying new techniques, adjusting mental models, and so on.
What I’m saying is that ever since I got into programming I’ve assumed that one shared trait between programmers was curiosity, a willingness to learn, and that our maxim is that we can’t ever stop learning, because what we’re doing is constantly changing beneath our fingers and if we don’t pay attention it might slip aways from us, leaving us with knowledge that’s no longer useful.
I suspect much of the disinterest is a reaction to the (toxic) hype around all things AI. There is too much to learn and try for me to let the grifters dissuade me from the entire umbrella of AI and LLMs. I make an effort to try out models from all the major companies in the space, to see how they can integrate into the work I do, how things like Cursor can augment my day-to-day, discussing with others what workflows, techniques, prompts, strategies, etc. can lead to better, exciting, interesting, mind-blowing results.
I certainly don't think the writing is on the wall for all this GenAI stuff, but it feels oddly incurious if not negligent to simply write it off.
It’s none of their business
https://jose.omg.lol/posts/its-none-of-their-business/
I like this framing. The code we usually start with when our software system is relatively simple probably has a pretty high degree of changeability. So, the work is to preserve that changeability so the code remains easy to change as the system grows in complexity.
To borrow from Sandi Metz's excellent book: "Design is the art of preserving changeability". My argument is that moving the logic into a domain object like Notification, which the Job simply calls, makes the code more changeable than dumping everything into the job itself.
An Analogy for Software Development
https://www.codesimplicity.com/post/an-analogy-for-software-development/
Try this analogy the next time you’re trying to explain to someone who isn’t a software developer what it is that a software developer does.
I do want to think about how to condense it down to an elevator pitch so that if it comes up in casual conversation, I can relate the idea quickly.
MarkDownload: browser extension to clip websites and download them into a readable markdown file
https://github.com/deathau/markdownload
I learned about this one from Taylor Bell. It's a browser (Chrome) extension that can grab the main content for a page as markdown which you can then download. The reason this is useful is because you can drag and drop a file like this into a tool like Claude, Cursor, ChatGPT, etc.
Let's say you are writing some code that uses a specific, niche library. Instead of relying on the LLM to know the ins and outs of the library and knowing about aspects of the latest version of that library, you can grab a markdown version of their latest docs and pass that in as context.
Based on the documentation and code examples in the given file, can you rework the previous script to make sure it is using the latest features and best practices of this library?
Or a similar thing I did recently was: after reading a blog post on a new-ish PostgreSQL feature, I grabbed the markdown for the blog post, and asked Claude for some examples of using the feature described in the blog post, but tailored to a situation I went on to describe.
When we can provide highly-specific, up-to-date context like this to an LLM, we are going to get much better results than if we toss up a one-sentence request.
I installed the extension directly from the Chrome Web Store, but I primarily linked to the GitHub project because that gives you the option to tweak and manually install the extension if that's preferred.
Validating Data Types from Semi-Structured Data Loads in Postgres with pg_input_is_valid
https://www.crunchydata.com/blog/validating-data-types-from-semi-structured-data-loads-in-postgres-with-pg_input_is_valid
The pg_input_is_valid function is a handy way to check whether rows of data conform to (are castable to) a specific data type. Let's say you have a ton of records with a payload text column. Those fields mostly represent valid JSON, but some are non-JSON error messages. You could write an UPDATE statement like this:
UPDATE my_table
SET json_payload = CASE
WHEN pg_input_is_valid(payload, 'jsonb')
THEN payload::jsonb
ELSE '{}'::jsonb
END;
The post also shows off a nice technique for loading a ton of CSV data when you may not be sure that every single row conforms to the various data types. It would be a shame to run a COPY statement that loads 95% of the data and then suddenly fails and rolls back because of an errant field.
Instead, load everything in as text:
CREATE TEMP TABLE staging_customers (
customer_id TEXT,
name TEXT,
email TEXT,
age TEXT,
signup_date TEXT
);
-- copy in the data to the temp table
COPY staging_customers FROM '/path/to/customers.csv' CSV HEADER;
Then use an approach similar to what I described in the first code block to migrate the valid values over to other rows in the same or a different table.
In Elizabeth's example, the invalid records are ignored while the rest are moved to the new table:
INSERT INTO customers (name, email, age, signup_date)
SELECT name, email, age::integer, signup_date::date
FROM staging_customers
WHERE pg_input_is_valid(age, 'integer')
AND pg_input_is_valid(signup_date, 'date');
PostgreSQL Mistakes and How to Avoid Them
https://www.manning.com/books/postgresql-mistakes-and-how-to-avoid-them
PostgreSQL Mistakes and How To Avoid Them reveals dozens of configuration and operational mistakes you’re likely to make with PostgreSQL. The book covers common problems across all key PostgreSQL areas, from data types, to features, security, and high availability. For each mistake you’ll find a real-world narrative that lays out context and recommendations for improvement.
I might have expected a book like this to be all about PostgreSQL-specific SQL and data modeling concepts. Even better, it also covers things like Performance, Administration features, Security, and High-Availability concepts.
Via LinkedIn
The Empty Promise of AI-Generated Creativity
https://hey.paris/posts/genai/
This isn’t merely a technical limitation to be overcome with more data or better algorithms. It’s a fundamental issue: AI systems lack lived experience, cultural understanding, and authentic purpose—all essential elements of meaningful creative work. When humans craft stories, they draw upon personal struggles, cultural tensions, and genuine emotions. AI simply cannot access these wellsprings of authentic creation.
Avoid the nightmare bicycle
https://www.geoffreylitt.com/2025/03/03/the-nightmare-bicycle
Empower users to understand and use a product in whatever situation they might encounter.
Good designs expose systematic structure; they lean on their users’ ability to understand this structure and apply it to new situations. We were born for this.
Bad designs paper over the structure with superficial labels that hide the underlying system, inhibiting their users’ ability to actually build a clear model in their heads.
Alien’s ‘Standard Semiotic’, Pictograms and Icons
https://crewsproject.wordpress.com/2017/05/12/aliens-standard-semiotic-pictograms-and-icons/
I just love the aesthetics of these pictographs and I love when movies go the extra mile on details like this.
Encountered this via this post.
AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
https://arstechnica.com/tech-policy/2025/01/ai-haters-build-tarpits-to-trap-and-trick-ai-scrapers-that-ignore-robots-txt/
Fascinating!
Aaron clearly warns users that Nepenthes is aggressive malware. It's not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck" and "thrash around" for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models.
I imagine the basic idea is to have a page that a crawler would find if it ignored your robots.txt. It would be served with nonsense content and tons of internal dynamic links which go to more pages full of nonsense content and tons more internal dynamic links. Less of a maze and more of a never-ending tree.
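Here's a toy sketch of that never-ending tree (my own illustration in Ruby, not how Nepenthes actually works):
require "webrick"        # gem install webrick on Ruby 3+
require "securerandom"

# Every page is gibberish plus links to more dynamically generated
# pages, so a crawler that ignores robots.txt never runs out of URLs.
server = WEBrick::HTTPServer.new(Port: 8000)
server.mount_proc "/" do |_req, res|
  babble = Array.new(50) { SecureRandom.alphanumeric(8) }.join(" ")
  links = Array.new(10) { %(<a href="/#{SecureRandom.hex(8)}">#{SecureRandom.alphanumeric(6)}</a>) }
  res.content_type = "text/html"
  res.body = "<html><body><p>#{babble}</p>#{links.join(" ")}</body></html>"
end
trap("INT") { server.shutdown }
server.start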
But efforts to poison AI or waste AI resources don't just mess with the tech industry. Governments globally are seeking to leverage AI to solve societal problems, and attacks on AI's resilience seemingly threaten to disrupt that progress.
Weird to make this unqualified claim with no examples of how governments are trying to solve societal problems with AI -- I'm certainly having a hard time thinking of any.
IAM Identity Center SSO credentials vs. IAM user access keys
https://www.reddit.com/r/aws/comments/1abszvz/comment/kjufbl4/
I've been working with AWS and am noticing that the docs and AWS Console consistently recommend the use of short-term SSO-based credentials with an IAM Identity Center user over long-term/permanent IAM user access keys.
I liked how the person in this reddit thread put it:
Typically, you would log in via your identity provider, which then generates short lived, role based credentials. This removes the need for IAM user access keys living permanently on your workstation.
Another person posts this comprehensive set of steps which closely mirrors what I had to do setting up IAM Identity Center access for a project:
Quick start:
1. Enable Organizations (even if you have 1 personal account)
2. Enable IAM Identity Center (its own service, confusingly not part of IAM). Note the URL listed under "AWS access portal URL", you'll need that in a minute.
3. Create a User for yourself in IAM Identity Center (this is different than IAM users)
4. Go back to Organizations. Select Accounts, your account, and add your new user to the account with the permissions you want.
5. Get a cup of coffee, AWS takes a hot minute to sync up what you just did.
6. Browse to the Start URL you copied in step 2. Make sure you can log in and that you see the Account and the Role you setup in step 4. While you're logged in, do yourself a favor and add MFA to your new user.
7. Go to your terminal and type: aws configure sso You'll be asked to name the SSO session name, call it whatever you'd like. Next it wants the "Start URL", this is the URL from step 2 above.
8. Point your profile at it: export AWS_PROFILE=my-new-sso-profile
9. Finally we get to actually log in: aws sso login This will open a browser window and log you in through the Console. Once complete your aws cli will be logged in with a temporary, role based session, no long lived credentials on your machine at all.
It's a long road to get here, but once you've got this setup it's a breeze to start your day with "aws sso login" and you've setup your account in a proper way that gives you a lot more options going forward. It's certainly more work than signing up for a TikTok account, but this is also a much more serious, professional product.
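For reference, the profile that aws configure sso writes to ~/.aws/config looks something like this (every value below is a placeholder):
[profile my-new-sso-profile]
sso_session = my-sso
sso_account_id = 123456789012
sso_role_name = PowerUserAccess
region = us-east-1

[sso-session my-sso]
sso_start_url = https://my-org.awsapps.com/start
sso_region = us-east-1
sso_registration_scopes = sso:account:access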
My LLM codegen workflow atm
https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
Flip the script by getting the Conversational LLM to ask you questions instead of you asking it questions.
Ask me one question at a time so we can develop a thorough, step-by-step
spec for this idea. Each question should build on my previous answers, and
our end goal is to have a detailed specification I can hand off to a
developer. Let’s do this iteratively and dig into every relevant detail.
Remember, only one question at a time.
Here’s the idea:
...
This is a good way to hone an idea, rubberduck, and think through a problem space.
I tried a session of this with Claude Sonnet 3.7. It asked a lot of good questions and got me thinking. After maybe 8 or so questions I ran into the warning from Claude about the conversation getting too long and running into usage limits (not sure what to do about that speed bump yet).
Artichoke Hold Cocktail Recipe 🍹
https://www.diffordsguide.com/cocktails/recipe/6586/artichoke-hold
I was at Warehouse Liquors in downtown Chicago a couple months ago. Talking to one of the employees about Cynar, they mentioned they used to bartend and one of their favorite things to make with it is a drink called Artichoke Hold.
They gave me the following recipe which varies a little from the Diffords one.
- 5 drops saline
- 0.5oz orgeat
- 0.75oz Lime juice (not lemon like diffords)
- 0.5oz Elderflower liqueur
- 0.75oz Cynar
- 0.75oz Smith & Cross (one of my favorite rums)
Shake with ice, pour over fresh pebble ice in a Mai Tai glass, garnish with a mint sprig.
The 10 Rules of Vibe Coding, quoting John Lindquist
https://x.com/johnlindquist/status/1894772728588284306
- Always start with deep research
- Always create a plan from the research
- Break the plan down into small, testable tasks
- AI Agents tackle one task at a time
- Agents must be able to verify their work
- Agents need context of the full project (file-tree, goals, etc), even for small tasks
- git is your friend
- Capture everything at the EOD, then deep research how to do better tomorrow
- When agents fail, go AFK and clear your head. Brute force never works and burns money.
- Dictate anything longer than a sentence
“git is your friend” resonates with me for any AI (or non-AI) coding because you can quickly overwrite a bunch of in-progress changes. Small, incremental diffs are the way.
What is vibe coding?
It may have been originally coined in this tweet by Andrej Karpathy:
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
"And it mostly works." 🥴
Build Your Own Database From Scratch in Go
https://build-your-own.org/database/
Someone shared their Go-based database implementation on Reddit and mentioned that they used this book as a guide.
Understand databases from the bottom up by building your own, in small steps, and with simple Golang code.
- Start with a B+tree, the data structure for querying and manipulating the data.
- Make it durable, that’s what makes a DB different from a file.
- Relational DB with concurrent transactions on top of the copy-on-write B+tree KV.
- A SQL-like query language, the finishing touch.
What My Morning Journal Looks Like
https://tim.blog/2015/01/15/morning-pages/
I was grabbing coffee with someone the other day. I mentioned that I sometimes go the whole day feeling like I have dozens of scattered things I need to get to and that it destroys my focus and sense of calm. They recommended starting the day with morning pages.
Morning pages don’t need to solve your problems. They simply need to get them out of your head, where they’ll otherwise bounce around all day like a bullet ricocheting inside your skull.
Comparison of the transaction systems of Oracle and PostgreSQL
https://www.cybertec-postgresql.com/en/comparison-of-the-transaction-systems-of-oracle-and-postgresql/
Nice example of when deferrable constraints matter in PostgreSQL.
CREATE TABLE tab (id numeric PRIMARY KEY);
INSERT INTO tab (id) VALUES (1);
INSERT INTO tab (id) VALUES (2);
UPDATE tab SET id = id + 1;
ERROR: duplicate key value violates unique constraint "tab_pkey"
DETAIL: Key (id)=(2) already exists.
The reason is that PostgreSQL (in violation of the SQL standard) checks the constraint after each row, while Oracle checks it at the end of the statement. To make PostgreSQL behave the same as Oracle, create the constraint as DEFERRABLE. Then PostgreSQL will check it at the end of the statement.
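Concretely, the deferrable version of the same example would be (a sketch based on the quoted advice):
DROP TABLE tab;
CREATE TABLE tab (id numeric PRIMARY KEY DEFERRABLE INITIALLY IMMEDIATE);
INSERT INTO tab (id) VALUES (1);
INSERT INTO tab (id) VALUES (2);
UPDATE tab SET id = id + 1;  -- succeeds: the unique check now runs at the end of the statement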
Dr. Dave's Runout Drill System 🎱
https://drdavepoolinfo.com//bd_articles/2020/oct20.pdf
This is a level-based system from Novice to Pro that allows you to evaluate where you are currently at in your ability to run out a table. You can use it to check your ability now, and then later to measure progress over time.
The Go Gopher
https://go.dev/blog/gopher
This article shares some of the history of how the Go gopher came about. It was designed by Renee French, originally for a radio station promotion in ~1999, and then adapted for Go in ~2009.
I also found a gopher drawing on French's blogspot from 2011 -- https://reneefrench.blogspot.com/2011/07/blog-post_31.html
Calling private methods without losing sleep at night
https://justin.searls.co/posts/calling-private-methods-without-losing-sleep-at-night/
A little thing I tend to do whenever I make a dangerous assumption is to find a way to pull forward the risk of that assumption being violated as early as possible.
Tests are one way we do this, but tests aren’t well-suited to all the kinds of assumptions we make about our software systems.
We assume our software doesn’t have critical vulnerabilities, but we have a pre-deploy CI check (via brakeman) that alerts us when that assumption is violated and CVEs do exist.
Or as Justin describes in this post, we can have some invariants in our Rails initializer code to draw our attention to other kinds of assumptions.
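A minimal sketch of that idea (the method name and file paths here are hypothetical): fail loudly at boot if a private method we depend on disappears in an upgrade.
# config/initializers/private_api_check.rb
# Hypothetical example: somewhere we call a private ActiveRecord
# method, so verify at boot that it still exists.
Rails.application.config.after_initialize do
  unless ActiveRecord::Base.private_method_defined?(:some_private_hook)
    raise "ActiveRecord::Base#some_private_hook is gone; " \
          "revisit lib/our_monkey_patch.rb before deploying"
  end
end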
From web developer to database developer in 10 years
http://notes.eatonphil.com/2025-02-15-from-web-developer-to-database-developer-in-10-years.html
A one-year retrospective of Phil Eaton’s time at EnterpriseDB and the way he made his own path into database development to get there.
Advice for newsletter-ers
https://www.robinsloan.com/notes/newsletter-seasons/
A personal email newsletter ought to be divided into seasons, just like a TV show.
The benefits being:
- a sense of progress: of going and getting somewhere.
- an opportunity for breaks: to pause and reflect, reconfigure.
- an opportunity, furthermore, to make big changes: in terms of subject, structure, style.
- an opportunity to stop: gracefully.
I like this as a way of setting expectations for yourself and your audience. I think it also opens up an opportunity to be playful and experiment. Having a finish line rather than a vanishing point on the horizon is energizing.
I see podcasts that go both directions on this. Some podcasters start recording with no timeline parameters in mind. Others approach it with a definite set of episodes they want to do. I'd never thought about translating that to a newsletter.
Shared by Matt Webb.
Git - Plumbing and Porcelain
https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-PorcelainI like this concept of Plumbing versus Porcelain CLI commands. I recently referenced it in Connect To Production Rails Console on AWS / Flightcontrol - Notes from VisualMode.
because Git was initially a toolkit for a version control system rather than a full user-friendly VCS, it has a number of subcommands that do low-level work and were designed to be chained together UNIX-style or called from scripts. These commands are generally referred to as Git’s “plumbing” commands, while the more user-friendly commands are called “porcelain” commands.
Reflections on 25 years of Interconnected
https://interconnected.org/home/2025/02/19/reflectionsjust an incredible quote from someone who has now been blogging for 25 years
I felt when I started in February 2000 that I was coming to blogging late. I procrastinated about setting it up. People I knew were already doing it.
And so much more interesting reflection and history of the emergence of blogging.
Everything starts with awareness. So be noisy about the precise things that I’m interested in, and see what happens. That means product design but also means nonsense about weird history or whatever.
End of the road for PostgreSQL streaming replication?
https://www.cybertec-postgresql.com/en/end-of-the-road-for-postgresql-streaming-replication/These kinds of performance / stress tests of PostgreSQL are always fascinating to me. There are many factors to consider about the machine's stats, you have to set up multiple databases (maybe multiple clusters), and then, after running a tool like pgbench, you have to make sure you've observed and captured useful data that you can draw conclusions from.
To try to quantify the crossover point, I ran a small test. I initialized a 8GB pgbench database that fits into shared buffers, then I set up WAL archiving and took a backup. Next, I ran 30min of pgbench with synchronous_commit=off to generate some WAL. On a 24 thread workstation, this generated 70GB of WAL containing 66M pgbench transactions, with an average speed of 36.6k tps. Finally, I configured Postgres to run recovery on the backup. This recovery was able to complete in 372 seconds, or 177k tps.
Here are some notes from AWS on benchmarking Postgres with pgbench: https://docs.aws.amazon.com/whitepapers/latest/optimizing-postgresql-on-ec2-using-ebs/postgresql-benchmark-observations-and-considerations.html
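For a sense of the mechanics, the pgbench portion of a test like the quoted one looks roughly like this (the scale factor is my approximation for an ~8GB dataset):
$ pgbench -i -s 550 bench             # initialize a pgbench database (~8GB at this scale)
$ pgbench -c 24 -j 24 -T 1800 bench   # 24 clients / 24 threads for 30 minutes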
Learn Postgres at the Playground
https://www.crunchydata.com/blog/learn-postgres-at-the-playgroundThe folks at CrunchyData created a playground for playing around with and learning PostgreSQL... right in the browser.
SQL Tricks for More Effective CRUD is one of dozens of tutorials they have since published.
This is made possible via WASM. Crazy Idea to Postgres in the Browser goes into more details about how they pulled it off.
It seems like there is a lot of energy moving in this direction. When I search "postgres wasm", several of the results are about PGlite from ElectricSQL (which is building a sync engine, a play in the local-first space).
supabase released what they are calling database.build that connects you to a Postgres database in the browser and gives you AI tools for interacting with that database.
the in-browser Postgres sandbox with AI assistance. With database.build, you can instantly spin up an unlimited number of Postgres databases that run directly in your browser (and soon, deploy them to S3). Each database is paired with a large language model (LLM)...
The secret to perfectly calculate Rails database connection pool size
https://island94.org/2024/09/secret-to-rails-database-connection-pool-sizetl;dr: don't compute the pool size; just set it to a big number. The pool size is a max that Rails enforces, but the thing that matters is the number of connections available at the database (which is a separate issue).
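In database.yml terms, that advice amounts to something like this (a sketch; the number is arbitrary):
# config/database.yml
production:
  pool: 100  # a generous ceiling; actual usage is bounded by your thread count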
If, rather, you're running out of connections at the database, then try things like:
- reduce the number of Puma threads
- reduce background job threads (e.g. via GoodJob, Solid Queue, etc.)
- "Configure anything else using a background thread making database queries"
- among others
Or increase the number of connections available at the database with a tool like PgBouncer.
This post was written by the person who created GoodJob.
Solving Logic Puzzles with Microsoft's z3 SAT Solver
https://www.wdj-consulting.com/blog/logicpuzzle-z3/This post walks through how to formulate a logic puzzle as the declarative constraints that a SAT solver like z3 knows how to take in. The solver can then determine whether those constraints are satisfiable (SAT) or not (UNSAT). It can also produce the set of values that makes them SAT, which is then a solution to the puzzle.
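Here is a tiny sketch of that workflow using z3's Python bindings (toy constraints, not the puzzle from the post):
from z3 import Int, Solver, sat

x, y = Int('x'), Int('y')
s = Solver()
s.add(x + y == 10, x - y == 2)  # declare the constraints

if s.check() == sat:            # satisfiable?
    print(s.model())            # a satisfying assignment: [x = 6, y = 4]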
Here is the GitHub project for z3: https://github.com/Z3Prover/z3
Erich's Puzzle Palace
https://erich-friedman.github.io/puzzle/index.htmlThis is the kind of site that the internet was created for. A little HTML, a little CSS, a repeating puzzle piece background. Beautiful.
Here is another cool puzzle site that I found linked from the Interactive links: https://pedros.works/kudamono/pages/full-house.html
Target Pool Drills 🎱
https://billiards.colostate.edu/faq/drill/target/A couple different methods, drills, and tools for working on sinking a ball and moving the cue to a specific target on the table.
Here is what one redditor suggests as a better alternative to Target Pool.
Skyscrapers Puzzle
https://www.conceptispuzzles.com/index.aspx?uri=puzzle/skyscrapers/rulesI was browsing through this list of puzzles for one that caught my eye. Skyscrapers jumped out and I was curious how to play.
Based on the size of the grid (let's say it is 5), the rules are:
- every row needs to contain the numbers 1 through 5
- every column needs to contain the numbers 1 through 5
- thinking of the numbers in the grid as building heights, the arrowed numbers indicate exactly how many buildings are in your sightline for that row or column. If the first number is a 5, then the sightline is 1. If there is a sequence of 3 | 2 | 4 | 5 | 1, then the sightline is 3 because you can see 3, then 4, and then 5, while the 2 is obscured by 3, and the 1 is obscured by everything. Viewed from the other side, the sightline is 2 because you can see 1, then 5, and then nothing else.
A starting puzzle looks like this:
2 1
↓ ↓
+–––+–––+–––+–––+–––+
4→ | | | | | |
+–––+–––+–––+–––+–––+
4→ | | | | | |
+–––+–––+–––+–––+–––+
| | | | | | ←5
+–––+–––+–––+–––+–––+
| | | | | |
+–––+–––+–––+–––+–––+
| | | | | |
+–––+–––+–––+–––+–––+
↑ ↑
2 1
And the solution to that one looks like this:
2 1
↓ ↓
+–––+–––+–––+–––+–––+
4→ | 1 | 2 | 4 | 5 | 3 |
+–––+–––+–––+–––+–––+
4→ | 2 | 3 | 1 | 4 | 5 |
+–––+–––+–––+–––+–––+
| 5 | 4 | 3 | 2 | 1 | ←5
+–––+–––+–––+–––+–––+
| 3 | 5 | 2 | 1 | 4 |
+–––+–––+–––+–––+–––+
| 4 | 1 | 5 | 3 | 2 |
+–––+–––+–––+–––+–––+
↑ ↑
2 1
Other places to find these puzzles:
TIOBE Index - Popularity Index for Programming Languages
https://www.tiobe.com/tiobe-index/This index was completely new to me. I was interested to see Python, C++, and Java in the top three spots; SQL and Go holding strong at 7 and 8 respectively; Ruby at the 19th spot; Prolog, of all things, in the 20th spot; and lastly, the omission of TypeScript.
Heard about this via Nate Berkopec.
Modern Front-End Development for Rails, Second Edition: Hotwire, Stimulus, Turbo, and React by Noel Rappin
https://pragprog.com/titles/nrclient2/modern-front-end-development-for-rails-second-edition/I learned about this book via this post from rosa.codes.
In the replies to that post are several other resources recommended by the community:
- The Rails and Hotwire Codex: Build an app for web, iOS, and Android
- Hotrails - Learn modern Ruby on Rails with Hotwire
- Master Hotwire: Master Hotwire to Build Modern Web Apps with Rails Simplicity
puzz.link puzzle index
https://puzz.link/db/I was trying to track down a generalized name for the type of puzzle that LinkedIn's Queens puzzle falls into and I came across a reddit post that linked to this growing database of contributor-submitted puzzles.
I guess it is a variation of the Eight Queens Puzzle.
Here are the names of all the puzzles that you can search by in the database:
- aho
- akari
- akichi
- amibo
- angleloop
- anglers
- antmill
- aqre
- aquapelago
- aquarium
- araf
- armyants
- arukone
- ayeheya
- balance
- barns
- battleship
- bdblock
- bdwalk
- bonsan
- bosanowa
- box
- brownies
- canal
- castle
- cave
- cbanana
- cbblock
- chainedb
- chocona
- circlesquare
- cocktail
- coffeemilk
- cojun
- compass
- context
- coral
- country
- creek
- crossstitch
- cts
- curvedata
- dbchoco
- detour
- disloop
- dominion
- doppelblock
- dosufuwa
- dotchi
- doubleback
- easyasabc
- evolmino
- factors
- familyphoto
- fillmat
- fillomino
- firefly
- firewalk
- fivecells
- fourcells
- fracdiv
- geradeweg
- goishi
- gokigen
- guidearrow
- haisu
- hakoiri
- hanare
- hashi
- hebi
- herugolf
- heteromino
- heyablock
- heyabon
- heyawake
- hinge
- hitori
- icebarn
- icelom
- icelom2
- icewalk
- ichimaga
- ichimagam
- ichimagax
- interbd
- juosan
- kaero
- kaidan
- kaisu
- kakuro
- kakuru
- kazunori
- kinkonkan
- koburin
- kouchoku
- kramma
- kramman
- kropki
- kurochute
- kuroclone
- kurodoko
- kuromenbun
- kurotto
- kusabi
- ladders
- lapaz
- lightshadow
- lither
- lits
- lohkous
- lollipops
- lookair
- loopsp
- loute
- magnets
- makaro
- martini
- masyu
- maxi
- meander
- mejilink
- midloop
- minarism
- mines
- mintonette
- mirrorbk
- mochikoro
- mochinyoro
- moonsun
- mukkonn
- myopia
- nagare
- nagenawa
- nanameguri
- nanro
- nawabari
- news
- nikoji
- nondango
- nonogram
- norinori
- norinuri
- nothing
- nothree
- numlin
- numrope
- nuribou
- nurikabe
- nurimaze
- nurimisaki
- nuriuzu
- oneroom
- onsen
- ovotovata
- oyakodori
- paintarea
- parquet
- patchwork
- pencils
- pentatouch
- pentominous
- pentopia
- pipelink
- pipelinkr
- putteria
- ququ
- railpool
- rassi
- rectslider
- reflect
- remlen
- renban
- ringring
- ripple
- roma
- roundtrip
- sananko
- sashigane
- sashikazune
- sato
- scrin
- shakashaka
- shikaku
- shimaguni
- shugaku
- shwolf
- simplegako
- simpleloop
- skyscrapers (rules)
- slalom
- slashpack
- slitherlink
- snake
- snakeegg
- snakepit
- squarejam
- starbattle
- statuepark
- stostone
- sudoku
- sukoro
- sukororoom
- swslither
- symmarea
- tajmahal
- takoyaki
- tapa
- tapaloop
- tasquare
- tatamibari
- tateyoko
- tawa
- tentaisho
- tents
- teri
- tetrochain
- tetrominous
- tilepaint
- toichika
- toichika2
- tontonbeya
- tontti
- trainstations
- tren
- triplace
- tslither
- turnaround
- uramashu
- usoone
- usotatami
- view
- voxas
- vslither
- wafusuma
- wagiri
- walllogic
- waterwalk
- wblink
- wittgen
- yajikazu
- yajilin
- yajilin-regions
- yajisoko
- yajitatami
- yinyang
- yosenabe
New microblog with TILs
https://jvns.ca/blog/2024/11/09/new-microblog/There is something really cool about seeing other people adopt a practice of writing TIL-style posts and coming to the same realizations and conclusions that I did with mine.
TILs are great learning resources and reference resources:
I think this new section of the blog might be more for myself than anything, now when I forget the link to Cryptographic Right Answers I can hopefully look it up on the TIL page.
In Simon Willison's 1 Year TIL Retrospective, he points to how they reframe writing and publishing something:
The thing I like most about TILs is that they drop the barrier to publishing something online to almost nothing... The bar for a TIL is literally “did I just learn something?”—they effectively act as a public notebook.
and on the value of always having a learning mindset:
They also reflect my values as a software engineer. The thing I love most about this career is that the opportunities to learn new things never reduce—there will always be new sub-disciplines to explore, and I aspire to learn something new every single working day.
It was fun to read through both of these posts having just myself reflected on A Decade of TILs.
CONVENTIONS.md file for AI Rails 8 development
https://gist.github.com/peterc/214aab5c6d783563acbc2a9425e5e866Peter Cooper put together this file of conventions that a tool like Cursor or Aider should follow when doing Rails 8 development.
One issue that I constantly run into with Cursor is that it creates a migration file that is literally named something like db/migrate/[timestamp]_add_users_table.rb, instead of suggesting/running the migration generator command provided by Rails. I'm curious if there is a way to effectively get these tools to follow that workflow -- generate the file with rails g ... and then inject that file with the migration code.
Use Rails' built-in generators for models, controllers, and migrations to enforce Rails standards.
Maybe that rule is enough to convince Cursor to use the generator.
The Best Self-Hosted RSS Feed Readers
https://lukesingham.com/rss-feed-reader/I'm trying to avoid going down a self-hosted RSS feed reader rabbit hole, but it's still interesting to browse through this and see what the options are.
The winner was miniflux, which appears to still be actively maintained.
The author linked at the bottom to ArchiveBox, a self-hosted tool for archiving web pages, videos, etc., to avoid link rot for things that you want to always be able to access.
See also Best free RSS reader apps for Mac.
Where the Rails Devs Are
https://bsky.app/starter-pack/did:plc:qkn6arplsv2pmw2zredwzqkm/3kw3olx5gf72mSome lists and communities where you can find Ruby on Rails developers and what they are posting:
- Ruby on Rails Bluesky Starter Pack
- Ruby on Rails Community on Twitter
- Rails Subreddit
- Ruby on Rails group on LinkedIn
- GoRails Discord
See also Free Ruby on Rails Communities.
Build complex CLIs with type safety and no dependencies
https://bloomberg.github.io/stricli/I heard about this from Matt Pocock who says he is moving a tool he is building from Commander to StriCli.
CrossPoster from LlamaIndex
https://bsky.app/profile/llamaindex.bsky.social/post/3lhtokiscv22xI generally prefer the effort and authenticity of manually tailoring each cross-post to the social network it is going to. However, some things can easily be shared in the same way/format across Bluesky, Twitter, and Linkedin. For those, try using CrossPoster.
From hours to 360ms: over-engineering a puzzle solution
https://blog.danielh.cc/blog/puzzleI thought this was going to be a sudoku thing at first, but it turns out to be a problem with a much bigger search space that you have to explore exhaustively, because you need to find every solution in order to guarantee that you've found the largest GCD.
The code/solutions presented by the author are in Rust.
The Brutal Drill 🎱
https://www.reddit.com/r/billiards/comments/1ij3no5/comment/mbbnjeb/a good drill for practicing frozen-on-the-rail shots and good position / speed control
Five coding hats
https://dubroy.com/blog/five-coding-hats/It doesn't make sense to write code with the exact same approach regardless of the circumstances. Depending on the situation, it may be more appropriate to take a more fast and loose approach than a careful and rigorous one.
Scrappy and MacGyver Hats vs. Captain's Hat
a real-world illustration of this that jumps to mind is "setting up payment processing for my app"
common approach: fully wire up stripe webhooks with your backend logic and database, etc.
scrappy approach: stripe payment alerts me, I go into Rails console and add user record for their email
Mistakes are part of the process
https://bsky.app/profile/pketh.org/post/3lhgqaedgyb2tNo one wants to make ‘ugly’ art, or write a ‘bad’ document. But struggling and making mistakes is an important part of developing skills and – increasingly important – finding your own voice.
Postfix Setup for Action Mailbox
https://stackoverflow.com/a/61629911/535590This stackoverflow answer provides a ton of good detail on how to get Postfix set up with Rails' Action Mailbox.
I also wanted to document the steps I took with the added detail that this is for a Hatchbox and Hetzner configuration:
Find the email that Hatchbox sent ([Hatchbox] Your new server password) with the password that lets the deploy user run commands as root with sudo. This is needed for a couple of things.
Hatchbox / Hetzner do not have postfix installed, so it needs to be done manually.
$ sudo apt-get update
$ sudo apt-get install postfix
Create the virtual mailbox mapping file, which tells Postfix what our catch-all email recipient will be (sudo vi /etc/postfix/virtual):
mydomain.tld anything
@mydomain.tld catch-all@mydomain.tld
The first line tells Postfix that we accept email for our domain. The second line is the catch-all line, which takes any mail that doesn't match a specific address and sends it to catch-all@mydomain.tld, assuming the catch-all user exists.
Create the catch-all user.
$ sudo useradd --create-home --shell /sbin/nologin catch-all
Add the Postfix transport file, which indicates the name of the thing that is going to forward emails to Rails (sudo vi /etc/postfix/transport):
mydomain.tld forward_to_rails:
Compile virtual and transport into Berkeley DB files with postmap. This can be done from whatever directory you're already in.
$ sudo postmap /etc/postfix/virtual
$ sudo postmap /etc/postfix/transport
Notice in ls /etc/postfix that there is now a virtual.db and a transport.db file.
Add this email forwarding script (with the command mentioned in the Rails Action Mailbox docs for Postfix) to /usr/local/bin in a file like email_forwarder.sh:
#!/bin/sh
cd /home/deploy/visualmode-dev-rails-app/current && bin/rails action_mailbox:ingress:postfix URL='https://mydomain.tld/rails/action_mailbox/relay/inbound_emails' INGRESS_PASSWORD='ingress_password'
Note that this command needs a URL, which is your fully-qualified URL followed by /rails/action_mailbox/relay/inbound_emails. It also needs the INGRESS_PASSWORD that Rails will be configured with, such as with the RAILS_INBOUND_EMAIL_PASSWORD env var.
Then at the end of the master.cf file, I added the following lines, ensuring not to modify anything else in there (sudo vi /etc/postfix/master.cf):
forward_to_rails unix - n n - - pipe
  flags=Xhq user=deploy:deploy argv=/usr/local/bin/email_forwarder.sh ${nexthop} ${user}
Since my user is already called deploy, I can leave the user=deploy:deploy as is.
Update the main.cf file with the mapping and transport files (sudo vi /etc/postfix/main.cf):
transport_maps = hash:/etc/postfix/transport
virtual_alias_maps = hash:/etc/postfix/virtual
Then, to make sure that Postfix is aware of all the latest changes, I reload it.
$ sudo postfix reload
$ sudo systemctl reload postfix
UltraHook - Receive webhooks on localhost
https://www.ultrahook.com/UltraHook makes it super easy to connect public webhook endpoints with development environments.
Free tool and Ruby gem for testing webhooks in local development by exposing your app to the public web.
Reminds me of webhook.site, another tool for this kind of local dev testing.
Ultrahook was recommended as part of this Postmark tutorial on setting up inbound email handling in Rails.
From pain to productivity: How I learned to code with my voice
https://whitep4nth3r.com/blog/how-i-learned-to-code-with-my-voice/Primary tools used:
- Talon
- Cursorless
- Apple Voice Control
- Rango
The Attention Trap
https://www.commonwealmagazine.org/attention-trapattention has of course always mattered... But it is because we have today found a way of treating something fundamentally intangible—the stream of consciousness—as fungible into other kinds of goods (like money or data) that it has become an object charged with public concern.
smartphone :: rosary
One might say that the monastery was the first attention platform, the first setting in which attention and its disciplines were a central and explicit issue—and not only in the day’s large blocks of activity, but in the cracks and breaks between them as well. For this reason, the philosopher Byung-Chul Han has compared the smartphone to the rosary.
Increase profits by finding the most inviting hue of blue
Google (to take one well-known example) claims to have made an additional $200 million dollars in 2014 solely by having tuned its advertising links to precisely the right shade of blue.
Severance vibes
Our fragmented sensory experience during these packets of time is reified into data. Meanwhile, our experience of time is severed from the past and future and exiled into a kind of eternal present.
This is how I run. I don't listen to music or audiobooks. I don't hit the pavement with specific things to think through. I just run, and my mind (and attention) ebbs and flows to the rhythm of my route.
Instead, we might try to reconceive attention not as a moment or as a product of a momentary decision but as a rhythm inhering within a pattern of life.
Article shared by David Crespo.
Pika – Perfect Code Screenshots
https://pika.style/templates/code-imageThere are a number of tools out there for pasting in a snippet of code to get a nice looking screenshot for social media posts. This one is the tool I currently prefer.
I used to use carbon quite a bit in the past, but the last several times I used it, the interface was janky.
Brag Book with Automation
https://bsky.app/profile/minamarkh.am/post/3lhbzikqkj22eTaylor posted a great reminder about maintaining a brag book:
Yearly reminder to create your Brag Book to increase your chances of getting promoted, having more meaningful review cycles, or just lifting your spirits any time you feel "I haven't done anything!".
When you have an impactful win, add it to your Brag Book.
And then Mina shared the idea of setting up an automation in something like Slack as a low-friction way of aggregating these kinds of wins and affirmations in one place.
The BLACKHOLE Storage Engine :: MySQL
https://dev.mysql.com/doc/refman/8.4/en/blackhole-storage-engine.htmlReading through the ActiveRecord Migrations docs I came across an example demonstrating how to specify database-specific options like ENGINE=BLACKHOLE.
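The docs example is roughly this shape (my paraphrase; the table and column are from memory):
create_table :products, options: "ENGINE=BLACKHOLE" do |t|
  t.string :name, null: false
end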
What is ENGINE=BLACKHOLE, I wondered.
The BLACKHOLE storage engine acts as a “black hole” that accepts data but throws it away and does not store it. Retrievals always return an empty result
They provide the following code block to demonstrate the above:
mysql> CREATE TABLE test(i INT, c CHAR(10)) ENGINE = BLACKHOLE;
Query OK, 0 rows affected (0.03 sec)
mysql> INSERT INTO test VALUES(1,'record one'),(2,'record two');
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM test;
Empty set (0.00 sec)
Reading further into the MySQL docs on this, there are all kinds of interesting behaviors and use cases:
Suppose that your application requires replica-side filtering rules, but transferring all binary log data to the replica first results in too much traffic. In such a case, it is possible to set up on the replication source server a “dummy” replica process whose default storage engine is BLACKHOLE
Data migrations with the `maintenance_tasks` gem
https://railsatscale.com/2023-01-04-how-we-scaled-maintenance-tasks-to-shopify-s-core-monolith/article.htmlThe maintenance_tasks gem from Shopify is a mountable Rails engine for running one-off data migrations from a UI that is separate from the schema migration lifecycle.
In the past, I've used the after_party gem for this use case. That gem runs data migration tasks, typically as part of a post-deploy process, with the same up/down distinction as schema migrations.
The big difference with maintenance_tasks seems to be that tasks are managed from a UI and that there are many more features, such as batching, pausing, and rerunning. You can observe the progress of these tasks from the UI as well.
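A task in the gem looks roughly like this (a sketch based on the gem's README; the model and backfill are made up):
module Maintenance
  class BackfillPostSlugsTask < MaintenanceTasks::Task
    # The records to iterate over; the gem batches these and tracks progress
    def collection
      Post.where(slug: nil)
    end

    # Called once per record; should be idempotent so the task can pause/rerun safely
    def process(post)
      post.update!(slug: post.title.parameterize)
    end
  end
end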
There is a Go Rails Episode about how to use the maintenance_tasks gem.
DigitalOcean Spaces | S3-Compatible Object Storage
https://www.digitalocean.com/products/spacesDigitalOcean has S3-compatible object storage that is super affordable with transparent pricing.
$5 per month for:
- 250 GiB storage
- 1 TiB outbound transfer
- $0.02/GiB additional storage
- $0.01/GiB additional transfer
Is it worth writing about? | notes.eatonphil.com
https://notes.eatonphil.com/is-it-worth-writing-about.htmlThis article and the following quote came up in Chapter 1 of Writing for Developers.
Even if you're writing about a popular topic, there's still a chance your post gets through to someone in a way other posts do not.
Why write? To practice writing so that you can write better in all sorts of venues and formats. To cement understanding because putting it in words makes you go a step deeper and wrestle with the hazy parts. To provide your perspective in case that is more accessible to someone than whatever else is out there. To demonstrate your expertise.
I usually try to wrap my head around the aspects I don’t understand before hitting publish. But for when I don’t have time to do that, I can call out that it is a gap in understanding.
Write to explain and teach. When you don't understand something, call out that you don't understand it. That's not a bad thing, and the internet is normally happy to help.
The Slotted Counter Pattern — PlanetScale
https://planetscale.com/blog/the-slotted-counter-patternTo avoid contention on a single high-write counter column, you can create a generic slotted counter table with polymorphic associations to anything that needs counting. Then to update the count, you increment the count in one of one hundred random slots. You can either sum the counts for a specific record and type to get the count, or you can have a process to roll up the count periodically and store it nearer to the original record.
I wonder what heuristics you could use to scale the number of slots you use for a given entity. That way for a relatively low-update entity, you spread the counts over, say, 3 counter slots. And with a very high-update entity, you spread it across, say, 50 or 100 slots.
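For concreteness, here's a minimal sketch of the base pattern in MySQL terms (schema and names are made up, not the post's exact code):
-- One row per (record, slot); writes scatter across slots instead of hammering one hot row
CREATE TABLE slotted_counters (
  record_type VARCHAR(100) NOT NULL,
  record_id   BIGINT       NOT NULL,
  slot        INT          NOT NULL,
  count       BIGINT       NOT NULL DEFAULT 0,
  PRIMARY KEY (record_type, record_id, slot)
);
-- Increment one of 100 random slots
INSERT INTO slotted_counters (record_type, record_id, slot, count)
VALUES ('post', 42, FLOOR(RAND() * 100), 1)
ON DUPLICATE KEY UPDATE count = count + 1;
-- Read the total by summing the slots
SELECT SUM(count) FROM slotted_counters WHERE record_type = 'post' AND record_id = 42;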
Useful label-based GitHub Actions workflows
https://www.jessesquires.com/blog/2021/08/24/useful-label-based-github-actions-workflows/#updated-21-march-2022I was looking at how others have implemented a GitHub Action to trigger a failed check on a PR when it is tagged with certain labels, such as No Merge or WIP.
I came across this blog post, which included an updated code block on how to achieve this without any 3rd-party actions. This is also the simplest example I found, coming in at 18 lines.
My contribution was to alter the if check to look at multiple labels:
jobs:
do-not-merge:
if: ${{ contains(github.event.*.labels.*.name, 'no merge') || contains(github.event.*.labels.*.name, 'wip') }}
I also printed out the labels to add a bit more detail in the action log:
steps:
- name: Check for label
run: |
echo "Pull request label prevents merging"
echo "Labels: ${{ join(github.event.*.labels.*.name, ', ') }}"
echo "This workflow fails so that the pull request cannot be merged"
exit 1
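Stitched together, the complete workflow might look roughly like this (the filename, workflow name, and trigger types are my assumptions):
name: do-not-merge
on:
  pull_request:
    types: [opened, labeled, unlabeled, synchronize]
jobs:
  do-not-merge:
    if: ${{ contains(github.event.*.labels.*.name, 'no merge') || contains(github.event.*.labels.*.name, 'wip') }}
    runs-on: ubuntu-latest
    steps:
      - name: Check for label
        run: |
          echo "Pull request label prevents merging"
          echo "Labels: ${{ join(github.event.*.labels.*.name, ', ') }}"
          echo "This workflow fails so that the pull request cannot be merged"
          exit 1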
TIL'd: Use Labels To Block PR Merge
Tor Lowry's Stroke Drill 🎱
https://www.reddit.com/r/billiards/comments/1ie3j1h/comment/ma4icr3/Tor Lowry’s stroke drill is to stop playing pool. No shooting balls or playing games until you finish the drill. It may take a week.
The idea being that you need to force yourself to completely overhaul your stroke by fully focusing on consistency that gets baked into your muscle memory.
You [put] a ball on the headstring at the top of the table. And you shoot it like a direct scratch straight into a corner pocket on the other side of the table. You do that 1,000+ times. You can use object balls as cueballs for this and go through 16 at a time. But the important thing is that each one is treated like a real shot in terms of fundamentals.
Nothing fancy. 1000+ center ball shots right into the heart of the pocket. Nothing else to distract your focus from applying all the fundamentals to each of those 1000+ shots.
Set up the ball. Stand behind the shot (scratch direct to the pocket). Plant your back foot on the shooting line. Step your front foot forward as you get down and place your stick on the shooting line. Dress the tip up to the cueball for a center ball strike. Couple practice strokes. Pause. Few more micro strokes within an inch of the call. Pause. Easy pull back. Smooth transition. Controlled and assertive delivery. And you must follow through at least 6” past the cueball resting the tip on the table at the finish. That last sentence is the most important. Put a sticker on the table of where the tip must finish if you need to.
This is why we're just hitting scratch shots. There are a dozen aspects of the pre-shot and shot routine that we instead need to be focused on.
It does not feel intuitive to me that a center-ball shot is going to result in follow-through that leaves the cue tip resting on the table. That feels like a big adjustment. It probably helps to cement a full, smooth follow-through.
Here’s what you’re doing. You’re committing good fundamentals to muscle memory. And you’re doing it through repetition. And you can’t skip it. You can’t say you’re already good on it. You can’t know this, you have to earn it. Just like my stroke is garbage left handed. I can’t “know” my stroke to be better left handed. It takes repetition just like a pianist practicing scales slowly so that later on they can play fast and intuitively. You need to grind at repetition to make the mind-body connection of intention to brain to nerves to muscles to be well worn in to muscle memory that you can’t do it wrong.
Repetition of intentional, deliberate practice to unlearn bad practices and solidify good ones. This way you can eventually shoot well with muscle memory rather than thinking through every granular piece of a good stroke.
Creating and using events - Fathom Analytics
https://usefathom.com/docs/events/overviewFathom has an SDK for tracking various events such as link clicks, newsletter signups, and page loads. This will be displayed in the dashboard as event completions.
Smidgeons Stream | Maggie Appleton
https://maggieappleton.com/smidgeonsI'm intrigued by this form of micro-content that Maggie Appleton is calling a Smidgeon. It is similar to Simon Willison's blogmark concept. The notable difference to me is that they aren't explicitly tied to some external URL.
With a blogmark, I'm linking to some blog, resource, whatever on the internet and tying some of my own commentary to it. Whereas with a smidgeon, I don't necessarily need to lead with a URL. One might just be a small bit of freeform thought.
Built-in Rails Database Rake Tasks
https://github.com/rails/rails/blob/1dd82aba340e8a86799bd97fe5ff2644c6972f9f/activerecord/lib/active_record/railties/databases.rakeIt's cool to read through the internals of different rake tasks that are available for interacting with a Rails database and database migrations.
For instance, you can see how db:migrate works:
desc "Migrate the database (options: VERSION=x, VERBOSE=false, SCOPE=blog)."
task migrate: :load_config do
ActiveRecord::Tasks::DatabaseTasks.migrate_all
db_namespace["_dump"].invoke
end
First, it attempts to run all your migrations. Then it invokes _dump, which is an internal task for re-generating your schema.rb (or structure.sql) based on the latest DB schema changes.
Rails Database Migrations Best Practices
https://www.fastruby.io/blog/db-migrations-best-practices.htmlMeant to be deleted
I love this idea for a custom rake task (rails db:migrate:archive) to occasionally archive past migration files.
# lib/tasks/migration_archive.rake
namespace :db do
namespace :migrate do
desc 'Archives old DB migration files'
task :archive do
sh 'mkdir -p db/migrate/archive'
sh 'mv db/migrate/*.rb db/migrate/archive'
end
end
end
That way you still have access to them as development artifacts. Meanwhile you remove the migration clutter and communicate a reliance on the schema file for standing up fresh database instances (in dev/test/staging).
Data migrations
They don't go into much detail about data migrations. It's hard to prescribe a one-size-fits-all approach, because sometimes the easiest thing to do is embed a bit of data manipulation in a standard schema migration, sometimes you want to manually run a SQL file against each database, or maybe you want to set up a process for these changes with a tool like the after_party gem.
Reversible migrations
For standard migrations, it is great to rely on the change method to ensure migrations are reversible. It's important to recognize what kinds of migrations are and aren't reversible. Sometimes we need to write some raw SQL, and for that we are going to want up and down methods.
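A quick sketch of that shape (a hypothetical index migration; your Rails version tag may differ):
class AddLowercaseEmailIndex < ActiveRecord::Migration[7.1]
  # Raw SQL isn't automatically reversible, so spell out both directions
  def up
    execute "CREATE INDEX index_users_on_lower_email ON users (lower(email))"
  end

  def down
    execute "DROP INDEX index_users_on_lower_email"
  end
end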
The ideal viewport doesn’t exist
https://viewports.fyi/There is a lot of interesting data in this article about the wide variations in viewport size. The particular thing that stuck out to me is that even for a single iPhone, you can easily have at least three different viewport sizes based on viewing context — Safari vs. in-app browser vs. 3D touch preview are the ones they show.
Found on Bluesky.
Keyboard shortcuts for Gmail
https://support.google.com/mail/answer/6594?sjid=3871484880356791029-NCI've been using gmail since something like 2006, but it never occurred to me to see if they have keyboard shortcuts or learn how to use them.
As a big-time vim user and fan of tools like vimium, I figure it is long overdue that I fix that. Note: if you want to try these, make sure you have Keyboard Shortcuts enabled from Gmail Settings.
Here are the shortcuts that I'm finding most useful at this point for how I use gmail today:
- ? to open a popover display of all shortcuts
- j and k to move down and up the threadlist of emails in my inbox
- e to archive the focused message
- # to delete the focused message
- z to undo the last action (e.g. whoops, didn't mean to delete that message)
Some ones that I'm not using yet, but seem worth building some muscle memory around:
- shift+t to make a task list item from the current message/conversation
- cmd+opt+, to send focus back to the inbox
- b to snooze a conversation
Parameter Type Inference - could not determine data type of parameter $1
https://github.com/adelsz/pgtyped/issues/354Odd PostgreSQL thing related to Prepared Statement / Parameter Type Inference I'm still trying to unravel.
I had the following ActiveRecord query:
@tags =
Tag.where("? is null or normalized_value ilike ?", normalized_query, "%#{normalized_query}%")
.order(:normalized_value)
.limit(10)
which short-circuits the filter (where) if normalized_query is nil. This worked in development whether or not the normalized_query value was present.
However, as soon as I shipped this to production, it was failing. I found the following error in the logs:
Caused by: PG::IndeterminateDatatype (ERROR: could not determine data type of parameter $1)
I fixed it by rewriting the query to type cast to text, which meant Postgres was no longer unsure in production about what the type of the parameter would be:
@tags =
Tag.where("cast(? as text) is null or normalized_value ilike ?", normalized_query, "%#{normalized_query}%")
.order(:normalized_value)
.limit(10)
Yay, fixed. Buuut, I don't get why this worked in dev, but not production. My best guesses are either that there is some different level of type inference that production is configured for (seems unlikely) or that the prepared statement in production gets prepared with different type info. Perhaps different connections are getting different prepared statement versions which might lead to it being flaky?
This is weird. Any idea what could be going on here?
Interestingly, I found a typescript project that was reporting the EXACT same issue for the EXACT same type of query -- https://github.com/adelsz/pgtyped/issues/354
Email Regexp is 23k
https://code.iamcal.com/php/rfc822/full_regexp.txtI prefer something dumber like /\S+@\S+\.\S+/, but I guess someone has to be thorough.
Shared by Sam Rose
Rails Controller Testing: `assigns()` and `assert_template()` removed in Rails 5
https://github.com/rails/rails/issues/18950Issue: Deprecate assigns() and assert_template in controller testing · Issue #18950 · rails/rails · GitHub
Testing what instance variables are set by your controller is a bad idea. That's grossly overstepping the boundaries of what the test should know about. You can test what cookies are set, what HTTP code is returned, how the view looks, or what mutations happened to the DB, but testing the innards of the controller is just not a good idea.
If you still want to be able to do this kind of thing in your controller or request specs, you can add the functionality back with rails-controller-testing.
You Probably Don't Need Query Builders
https://mattrighetti.com/2025/01/20/you-dont-need-sql-buildersThe tl;dr of this article is that you can avoid a bunch of ORM/query-building and extraneous app logic by leaning on the expressiveness and capability of SQL.
The query-building example in this post is a good illustration of why where 1 = 1 shows up in some SQL queries, usually in the logs from an ORM.
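The trick, roughly (a made-up example, not the post's code): starting from WHERE 1 = 1 lets a builder append every optional filter uniformly as AND ..., without tracking whether it's the first condition.
SELECT * FROM users
WHERE 1 = 1
  AND status = 'active'            -- appended only if a status filter was given
  AND created_at > '2024-01-01';   -- appended only if a date filter was given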
Interesting: one example uses Postgres' cardinality(some_array) = 0 to check whether an array is empty. For a one-dimensional array, cardinality is a bit more straightforward than array_length and requires only one argument.
Of further note, cardinality counts the items in an array regardless of how many dimensions it has.
> select cardinality(array[1,2]);
+-------------+
| cardinality |
|-------------|
| 2 |
+-------------+
> select cardinality(array[[1,2], [3,4]]);
+-------------+
| cardinality |
|-------------|
| 4 |
+-------------+
> select cardinality(array[[[1,2,3], [4,5,6], [7,8,9]]]);
+-------------+
| cardinality |
|-------------|
| 9 |
+-------------+
Tampopo | IMDB
https://www.imdb.com/title/tt0092048/Recommended to me as a good, post-Fordism Japanese movie
The Cost of Going it Alone
https://blogs.gnome.org/bolsh/2011/09/01/the-cost-of-going-it-alone/A couple historical lessons learned about using, building on, and contributing to free (open source) software.
- The longer your changes are off the main branch, the harder and more expensive they are going to be to integrate. This is true of a feature branch on your own software project and of "out of tree" changes to a large open-source project like Linux.
- If a major dependency of your business is an open-source software project (e.g. Postgres, Ruby, Linux, etc.), you should probably be employing a contributor who has a strong relationship with that project. E.g. the various companies that have employed Aaron Patterson to work on Ruby.
Regarding the second bullet point, another common pattern nowadays is for open-source maintainers of important (but much smaller than, say, Linux-size) software projects to crowd-fund their work via GitHub Sponsors from companies and individuals.
Shared by Jeremy Schneider on Linkedin.
Best place to learn to use PostgreSQL
https://www.reddit.com/r/PostgreSQL/comments/1i84wtv/best_place_to_learn_to_use_postgresql/A summary of the resources mentioned:
Speed matters: Why working quickly is more important than it seems
https://jsomers.net/blog/speed-mattersBeing able to do something quickly lowers the cognitive barrier to doing that thing more often.
This blog post is the best thing I've read "in defense of getting good at Vim". Sure, the learning curve is high and it can require a lot of configuration and memorization, but that is all in exchange for shrinking the time between thought and executing it on the computer.
The obvious benefit to working quickly is that you'll finish more stuff per unit time. But there's more to it than that. If you work quickly, the cost of doing something new will seem lower in your mind. So you'll be inclined to do more.
The general rule seems to be: systems which eat items quickly are fed more items. Slow systems starve.
When you're fast, you can quickly play with new ideas.
Part of the activation energy required to start any task comes from the picture you get in your head when you imagine doing it.
Complexity Has to Live Somewhere
https://ferd.ca/complexity-has-to-live-somewhere.html"Complexity has to live somewhere. If you are lucky, it lives in well-defined places... You give it a place without trying to hide all of it. You create ways to manage it. You know where to go to meet it when you need it."
It is useful for complexity to be abstracted away when it would otherwise detract from or complicate the task at hand. Then there are times where you need to interact with the complexity directly. These tasks will be best served by having a well-defined place where you know you can meet the complexity.
Regarding abstractions, I once heard something along the lines of, "a good abstraction is one that allows you to safely make assumptions about how something will work." In other words, with a good abstraction you don't have to reconfirm a litany of details, but can, for standard scenarios, make reasonable assumptions that save you time and mental overhead.
Putting it all together, good abstractions allow for beneficial assumptions, but when those assumptions aren't going to hold up, we ought to have a well-defined place to go wrangle with the complexity.
I came across this article while reading The Essence of Successful Abstractions — Sympolymathesy, by Chris Krycho.
Software lessons from Factorio
https://www.linkedin.com/posts/hillel-wayne_factorio-activity-7282805593428402176-xB8j/"Scalability and efficiency are fundamentally at odds. Software that maximizes the value of the provided resources will be much harder to scale up, and software that scales up well will necessarily be wasteful."
Moving on from React, a year later
https://kellysutton.com/2025/01/18/moving-on-from-react-a-year-later.html"One of the many ways this matters is through testing. Since switching away from React, I’ve noticed that much more of our application becomes reliably-testable. Our Capybara-powered system specs provide excellent integration coverage."
"When we view the lines of code as a liability, we arrive at the following operating model: What is the least amount of code we can write and maintain to deliver value to customers?"
Not all lines of code are equal, some cost more than others to write and to maintain ("carrying cost"). Some have a higher regression risk over time than others.
"When thinking about the carrying cost of different lines of code, maintaining different levels of robust tests reduces the maintenance fees I must pay. So, increasing my more-difficult-to-test lines of code is more expensive than increasing my easier-to-test lines of code."
Language, in as much as it relates to testability, is the metric of focus here. What other facets of code increase or decrease their "carrying cost"?
Good, Fast, Cheap: Pick 3 or Get None
https://loup-vaillant.fr/articles/good-fast-cheap"To get closer to the simplest solution, John Ousterhout recommends, you should design it twice. Try a couple different solutions, see which is simplest. Which by the way may help you think of another, even simpler solution."