Software Development (32 blogmarks)
Code Review Is Not About Catching Bugs
https://www.davidpoll.com/2026/02/code-review-is-not-about-catching-bugs/
When a human writes code, the code is an artifact of their reasoning process. You can review the code and infer the thinking behind it. When AI generates code, you lose that direct connection. The code might be perfectly functional but reflect no coherent design intent – or worse, reflect a design intent that’s subtly different from what the developer actually wanted.
Nobody Gets Promoted for Simplicity
https://terriblesoftware.org/2026/03/03/nobody-gets-promoted-for-simplicity/
Incredible opening quote from Dijkstra:
Simplicity is a great virtue, but it requires hard work to achieve and education to appreciate. And to make matters worse, complexity sells better. --Edsger Dijkstra
It's easier to make a compelling narrative about a complexly architected, "robust" system that is super scalable. It's harder to have much to say about unrealized complexity avoided by a simpler solution.
Her work was better. But it’s invisible because of how simple she made it look. You can’t write a compelling narrative about the thing you didn’t build. Nobody gets promoted for the complexity they avoided.
Complexity is unavoidable at times. A frequent dichotomy I see is inherent versus accidental complexity. The author gets at a different distinction -- unearned complexity.
The issue isn’t complexity itself. It’s unearned complexity. There’s a difference between “we’re hitting database limits and need to shard” and “we might hit database limits in three years, so let’s shard now.”
Part of the solution here is to be careful about rewarding complexity institutionally as well as publicly and socially:
One more thing: pay attention to what you celebrate publicly. If every shout-out in your team channel is for the big, complex project, that’s what people will optimize for. Start recognizing the engineer who deleted code. The one who said “we don’t need this yet” and was right.
Agentic anxiety
https://jerodsanto.net/2026/02/agentic-anxiety/
Something’s different this time, and I can say confidently this is the most unsure I’ve ever been about software’s future.
As Jerod puts it. It's not FOMO (fear of missing out) so much as it is FOBLB (fear of being left behind).
One fascinating part of our conversation with Steve Ruiz from tldraw started when he confessed that he feels bad going to bed without his Claudes working on something.
I think part of what is behind this is the same thing discussed in the HBR article, AI Doesn't Reduce Work--It Intensifies It, which points to this feeling that if we can do more we should be doing more.
Falling Into The Pit of Success
https://blog.codinghorror.com/falling-into-the-pit-of-success/
I think this concept extends even farther, to applications of all kinds: big, small, web, GUIs, console applications, you name it. I’ve often said that a well-designed system makes it easy to do the right things and annoying (but not impossible) to do the wrong things. If we design our applications properly, our users should be inexorably drawn into the pit of success. Some may take longer than others, but they should all get there eventually.
I believe this also applies to our codebases. How can we design our internal systems, APIs, class interfaces, domain boundaries, abstractions, design systems, etc. to make it easier for ourselves and others on the team to do the right thing and, hopefully, to avoid doing the wrong thing?
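One classic pit-of-success shape in Ruby is the block-form API (like Ruby's own `File.open`), where the easy path does the cleanup for you. A minimal sketch with a hypothetical `Resource` class, not taken from the post:

```ruby
# A block-form API: calling `open` with a block guarantees cleanup,
# so the most convenient way to use the class is also the correct way.
class Resource
  def self.open
    resource = new
    # Without a block the caller takes on cleanup responsibility...
    return resource unless block_given?

    # ...with a block, cleanup is automatic, even if the block raises.
    begin
      yield resource
    ensure
      resource.close
    end
  end

  def initialize
    @closed = false
  end

  def close
    @closed = true
  end

  def closed?
    @closed
  end
end

# The right thing (closing the resource) happens without any effort:
Resource.open { |r| puts r.closed? } # still open inside the block
```

The design choice here is that the wrong thing (leaking an open resource) is still possible, but it requires opting out of the convenient path.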
Plan to Throw One Away
https://course.ccs.neu.edu/cs5500f14/Notes/Prototyping1/planToThrowOneAway.html
The idea of plan to throw one away comes from Fred Brooks' The Mythical Man Month. The reasoning is that your first attempt to build a system is going to be a mess because there is so much you don't know. So, you might as well plan to throw that one away.
Sometimes we try to do this. We say we are going to build a prototype to explore a space and see if an idea works. More often than not those prototypes are what make it directly into production. It's hard to argue with working software, even if it has its warts.
There is also the Second System Effect to deal with. This idea also comes from Brooks.
The general tendency is to over-design the second system, using all the ideas and frills that were cautiously sidetracked on the first one.
We've eliminated so much risk by clearing up a bunch of unknowns — why not add some back in by layering on some extra concepts?
Conway's Law
https://martinfowler.com/bliki/ConwaysLaw.html
In Martin Fowler's words:
Conway's Law is essentially the observation that the architectures of software systems look remarkably similar to the organization of the development team that built it. It was originally described to me by saying that if a single team writes a compiler, it will be a one-pass compiler, but if the team is divided into two, then it will be a two-pass compiler. Although we usually discuss it with respect to software, the observation applies broadly to systems in general.
From Melvin Conway himself:
Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.
I've read about Conway's Law before and I see it get brought up from time to time in online discourse. Something today made it pop into my brain and as I was thinking about it, I felt that I was looking at it with my head cocked to the side a little, just different enough that it helped me understand it a little better.
I tend to work on small, distributed software teams that work in an async fashion. That means minimal meetings, primarily high-agency independent work, clear and distinct streams of work, and everyone making their own schedule to get their work done.
I had been thinking about the kinds of things you need to have in place in your codebase and software system to make that way of working work well. A monolith is compatible with minimal, async communication because there aren't lots of distributed pieces that need coordinating. As another example, deploying things behind feature flags so that they can be released incrementally on a schedule separate from deployments also lends itself to this way of working.
The way these teams have decided to organize and communicate has a direct impact on how we develop the software system and what the system looks like.
Developers spend most of their time figuring the system out
https://lepiter.io/feenk/developers-spend-most-of-their-time-figuri-7aj1ocjhe765vvlln8qqbuhto/
The first thing that jumps out to me from the 2017 study is that on average 20-25% of developer time is spent navigating 😲
This anecdotally explains to me why I find vim so empowering and why I get so frustrated while trying to get where I need to be in other tools like VS Code. Vim (plus plugins) is optimized for navigation, so I can fly from place to place where I need to read code, check a test, make an edit, create a new file. Whereas in other editors I feel like I’m trying to run in a swimming pool.
Rate Limiting Using the Token Bucket Algorithm
https://en.wikipedia.org/wiki/Token_bucket
I was curious what it looked like to do metered access to a resource. Commonly when you talk about this topic, that resource is your own API that has a throughput ceiling. I was coming at this from the angle of an app with internal 3rd-party API calls that charge on a per-request basis. In that scenario I'd like to implement some level of spend control so that I don't wake up to a huge bill.
The Token Bucket Algorithm appears to be one common answer to this question.
Each consumer (maybe that is a user) of the app/API has a bucket of tokens and each token can be redeemed for one access of the limited resource. Each bucket can only fit so many tokens and you can decide how exactly you want to meter refilling the bucket. I think it is typically handled by periodically (e.g. "every X seconds") adding a token to each bucket that isn't full. The other extreme could be requiring a user to manually "refill" the bucket — i.e. recharge their account with more credits. Perhaps you may even want to mix in a heuristic that is guided by some global value like "% of max spend for the period."
I like the carnival analogy for the Token Bucket Algorithm described here: https://www.krakend.io/docs/throttling/token-bucket/#a-quick-analogy
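As a rough sketch of the idea, here is the lazy-refill variant of a token bucket in Ruby — instead of a background process adding tokens every X seconds, the bucket tops itself up based on elapsed time whenever it is consulted. The class name and interface are my own; a real implementation would also need thread safety and a bucket per consumer:

```ruby
# Lazy-refill token bucket: tokens accrue continuously at a fixed
# rate, capped at the bucket's capacity. Each allowed request costs
# one token; when the bucket is empty, requests are rejected.
class TokenBucket
  def initialize(capacity:, refill_per_second:)
    @capacity = capacity
    @refill_per_second = refill_per_second
    @tokens = capacity.to_f # start full
    @last_refill = Time.now
  end

  # Returns true and consumes a token if the request is allowed.
  def allow?
    refill
    return false if @tokens < 1

    @tokens -= 1
    true
  end

  private

  # Top up based on time elapsed since the last check, never
  # exceeding capacity (a full bucket stays full).
  def refill
    now = Time.now
    elapsed = now - @last_refill
    @tokens = [@tokens + elapsed * @refill_per_second, @capacity].min
    @last_refill = now
  end
end

# With capacity 3, a burst of 4 immediate requests lets 3 through:
bucket = TokenBucket.new(capacity: 3, refill_per_second: 1)
```

The manual-refill extreme from above would just replace `refill` with a public method the billing system calls when the user recharges their credits.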
Computers can be understood
https://blog.nelhage.com/post/computers-can-be-understood/
any question I might care to ask (about computers) has a comprehensible answer which is accessible with determined exploration and learning.
The “with determined exploration and learning” is the important part here.
There is no magic. There is no layer beyond which we leave the realm of logic and executing instructions and encounter unknowable demons making arbitrary and capricious decisions. Most behaviors in one layer are comprehensible in terms of the concepts of the next layer, and all behaviors can be understood by digging down through enough layers.
Even recognizing and sorting out the different layers can be a great starting point. Then when you are thinking about a concept or issue, you can determine what layer is relevant to help set the context.
The trickiest bugs are often those that span multiple layers, or involve leaky abstraction boundaries between layers. These bugs are often impossible to understand at a single layer of the abstraction stack, and sometimes require the ability to view a behavior from multiple levels of abstractions at once to fully understand.
This tells us a lot about what we should strive for when trying to write clear, understandable code and whether we’ve done a good job when creating an abstraction.
Adopt a mindset of curiosity:
My advice for a practical upshot from this post would be: cultivate a deep sense of curiosity about the systems you work with. Ask questions about how they work, why they work that way, and how they were built.
… build your understanding, and your confidence that you can always understand more tomorrow.
My friend Jake Worth wrote his own version of this post — https://jakeworth.com/posts/computers-can-be-understood/
I think this is a great idea for any blogger. Take an inspiring or intriguing concept and give your own take on it.
When in doubt, be consistent
https://google.github.io/styleguide/shellguide.html
Using one style consistently through our codebase lets us focus on other (more important) issues. Consistency also allows for automation. In many cases, rules that are attributed to “Be Consistent” boil down to “Just pick one and stop worrying about it”; the potential value of allowing flexibility on these points is outweighed by the cost of having people argue over them.
Lots of great tips in this style guide on writing good bash scripts.
Other good (general) advice:
When assessing the complexity of your code (e.g. to decide whether to switch languages) consider whether the code is easily maintainable by people other than its author.
How to Understand a New Codebase Quickly
https://avdi.codes/how-to-understand-a-new-codebase-quickly/
The best way I know to get acquainted with an unknown codebase is to fire it up locally (this alone may be very hard). Then approach softly, humbly, as a mere user. Start making theories about how it works, and what parts of the code are responsible for the behaviors you see.
Not only might this be tricky to do, you'll have to do it eventually anyway. You'll probably encounter several setup steps that are missing or aren't quite right. You can help by updating the docs where this setup slippage has occurred.
Then deliberately start breaking it.
This is a great idea. Make theories about what executes and how it executes. Then put it to the test with a little raise 'hell' here and there.
As you're approaching the app from the perspective of a user, think about the different kinds of users of the app and the unique workflows of each of those users. Also, are there important execution flows through the codebase that no user experiences or triggers, but instead are part of an external/automated process? Those flows can be hard to find and can be some of the most essential business logic.
The Best Programmers
https://justin.searls.co/links/2025-04-14-the-best-programmers/
Justin wrote this post in response to The Best Programmers I Know which was making the rounds last week.
Anyway, if you're asking me, the single best trait to predict whether I'm looking at a good programmer or a great one is undoubtedly perseverance. Someone that takes to each new challenge like a dog to a bone, and who struggles to sleep until the next obstacle is cleared.
Sometimes you run into a wall with a bug, you’ve searched for it a dozen different ways, you’ve talked off Claude’s ear about it, and you still don’t have a fix. How do you not give up when it feels this hopeless? That’s the perseverance he is talking about, at least in part.
For me, much of this perseverance is earned. Meaning, I pushed through to a solution when I was hopelessly stuck the last dozen times, so I know I can do it this time.
Ship Software That Does Nothing
https://kerrick.blog/articles/2025/ship-software-that-does-nothing/
Shipping nothing is nothing to sneeze at. Delivering a blank page to production means you’ve made a lot of important decisions. You must take risky actions early, which is the best time to take risks. And once you’ve shipped software that does nothing, making it do something is easy.
It’s fun to read this article because I came to a similar conclusion recently with a couple side-projects. Typically with my side-projects I never get to the point where I’m ready to ship. So then, I simply don’t ever ship anything. They gather dust in that code directory on my machine.
However, with still and another recent project, one of the first things I did before there was any meaningful functionality was get them deployed to the public internet. Suddenly that unlocked this clarity where I knew what features to work on next and which ones to ignore because I was actually using the thing.
One does not simply update one's dependencies
https://rosswintle.uk/2025/04/one-does-not-simply-update-ones-dependencies/
It’s a fun thought experiment to look at a particular slice of your day-to-day job as a software dev and spin out all the layers and tangential things you need to know to make sense of that thing and do that thing.
Some fun ones to spin out would be:
- reviewing a pull request
- tracking down a bug in production
- “shipping” a feature
Programming is about mental stack management
https://justin.searls.co/posts/programming-is-about-mental-stack-management/
Humans, like LLMs, have a context window too.
Fun fact: humans are basically the same. Harder problems demand higher mental capacity. When you can't hold everything in your head at once, you can't consider every what-if and, in turn, won't be able to preempt would-be issues. Multi-faceted tasks also require clear focus to doggedly pursue the problem that needs to be solved, as distractions will deplete one's cognitive ability.
What usually happens when we take on too much context for the problem at hand is that we start to lose track of details, we get distracted, maybe even irritable. The task and the time it takes to do it balloon. We might even forget why we started down this path in the first place.
As I was reading Justin’s example of the mental task stack he ended up in, I thought it sounded farfetched. “No one is going to do all these unnecessary things when they are just trying to add a route.” But then I remembered ALL the times that I get six steps removed from what I set out to do because of a series of unexpected mishaps and several “I can’t help myself”s.
Chesterton's Fence: Understanding past decisions
https://thoughtbot.com/blog/chestertons-fence
There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
I’ve often seen this concept used to talk about rashly throwing things away in a software development context. Anything from a misunderstood conditional check to an entire piece of infrastructure. “This doesn’t make sense, let’s refactor it.” Make it make sense first!
This is part of why I’m very cautious when it comes to refactoring legacy code. I’d much rather resist the temptation of a few cathartic refactors while fixing some bug than suffer the pain of later finding out I broke a narrow use case.
It’s just as applicable to any setting/field where a (over-)confident person sees a policy, practice, or even something physical like a fence and is ready to get rid of it without understanding why it is there.
Semantic Diffusion
https://martinfowler.com/bliki/SemanticDiffusion.html
I learned this term just now from Simon Willison who quoted the following from Martin Fowler:
Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition. This weakening risks losing the definition entirely - and with it any usefulness to the term.
While Simon is lamenting the diffusion of the meaning of Vibe Coding, Martin, back in 2006, was seeing this happen with terms like Agile and Web2.0.
Semantic diffusion is essentially a succession of the telephone game where a different group of people to the originators of a term start talking about it without being careful about following the original definition. These people are listened to by a further group which then goes on to add their own distortions. After a few of these hand-offs it's easy to lose a lot of the key meaning of the term unless you make the point of going back to the originators. It's ironic that it's popular terms that tend to suffer from this the most. That's inevitable, of course, since unpopular terms have less people to create the telephone chains.
Hope is not lost though.
So terms do recover their semantic integrity and the current diffusion doesn't inevitably mean the terms will lose their meaning… A final comforting thought is that once the equally inevitable backlash comes we get a refocusing on the original meaning.
So once the hype dies down, the broader understanding of vibe coding may settle back on the kind of coding:
where you fully give in to the vibes, embrace exponentials, and forget that the code even exists… It's not too bad for throwaway weekend projects.
The 70% problem: Hard truths about AI-assisted coding
https://addyo.substack.com/p/the-70-problem-hard-truths-about
LLMs are no substitute for the hard-won expertise of years of building software, working within software teams, and evolving systems. You can squeeze the most out of iterations with a coding LLM by bringing that experience to every step of the conversation.
In other words, they're applying years of hard-won engineering wisdom to shape and constrain the AI's output. The AI is accelerating their implementation, but their expertise is what keeps the code maintainable.
The 70% problem
A tweet that recently caught my eye perfectly captures what I've been observing in the field: Non-engineers using AI for coding find themselves hitting a frustrating wall. They can get 70% of the way there surprisingly quickly, but that final 30% becomes an exercise in diminishing returns.
Addy goes on to describe this "two steps back pattern" where a developer using an LLM encounters an error, they ask the LLM to suggest a fix, the fix sorta works but two other issues crop up, and repeat.
This cycle is particularly painful for non-engineers because they lack the mental models to understand what's actually going wrong. When an experienced developer encounters a bug, they can reason about potential causes and solutions based on years of pattern recognition.
Beyond having the general programming and debugging experience to expedite this cycle, there is also an LLM intuition to be developed. I remember John Lindquist describing that he notices certain "smells" when working with LLMs. For instance, often when you're a couple steps into a debugging cycle with an LLM and it starts wanting to go make changes to config files, that is a smell. It's a "smell" because it should catch your attention and scrutiny. A lot of times this means the LLM is way off course and it is now throwing generative spaghetti at the wall. I learned two useful things from John through this:
- You have to spend a lot of time using different models and LLM tools to build up your intuition for these "smells".
- When you notice one of these smells, it's likely that the LLM doesn't have enough or the right context. Abort the conversation, refine the context and prompt, and try again. Or feed what you've tried into another model (perhaps a more powerful reasoning one) and see where that gets you.
Being able to do any of that generally hinges on having already spent many, many years debugging software and having already developed some intuitions for what is a good next step and what is likely heading toward a dead end.
These LLM tools have proven to be super impressive at specific tasks, so it is tempting to generalize their utility to all of software engineering. However, at least for now, we should recognize the specific things they are good at and use them for those:
This "70% problem" suggests that current AI coding tools are best viewed as:
- Prototyping accelerators for experienced developers
- Learning aids for those committed to understanding development
- MVP generators for validating ideas quickly
I'd add to this list:
- Apply context-aware boilerplate autocomplete — establish a pattern in a file/codebase or rely on existing library conventions and a tool like Cursor will often suggest an autocompletion that saves a bunch of tedious typing.
- Scaffold narrow feature slices in a high-convention framework or library — Rails codebases are a great example of this where the ecosystem has developed strong conventions that span files and directories. The LLM can generate 90% of what is needed, following those conventions. By providing specific rules about how you develop in that ecosystem and a tightly defined feature prompt, the LLM will produce a small diff of changes that you can quickly assess and test for correctness. To me this is distinct from the prototyping item suggested by Addy because it is a pattern for working in an existing codebase.
LLMs amplify existing technical decisions
https://bsky.app/profile/nateberkopec.bsky.social/post/3lkj4kp53gt2p
If you’ve made sustainable decisions and developed good patterns, LLMs can amplify those. If you’ve made poor technical decisions, LLMs will propagate that technical debt.
Technical debt accumulates when people just "glob on" one more thing to an existing bad technical decision.
We chicken out and ship 1 story point, not the 10 it would take to tidy up.
LLMs encourage this even more. See thing, make more of thing. Early choices get copied.
Squeeze the hell out of the system you have, by Dan Slimmon
https://blog.danslimmon.com/2023/08/11/squeeze-the-hell-out-of-the-system-you-have/
This article is great because it gets at the higher-level thinking that engineering leads and CTOs need to bring to the table when your team is making high-impact technical decisions.
Anyone who has been in the industry a bit can throw around the pithy phrases we use to sway approval toward the decision we're pitching, e.g. "micro-services allow us to use the right tool for the job".
That can be a compelling argument alone if the stakes are low or we're not paying attention.
The higher-level thinking that needs to come in looks beyond the lists of Pros that we can make for any reasonable item that is put forward.
We have to have an understanding of tradeoffs and a more holistic sense of the costs.
But don’t just consider the implementation cost. The real cost of increased complexity – often the much larger cost – is attention.
The attention cost is an ongoing cost.
[Clever solution that adds complexity] complicates every subsequent technical decision.
Squeeze what you can out of the system, buying time, until you have to make a concession to complexity.
When complexity leaps are on the table, there’s usually also an opportunity to squeeze some extra juice out of the system you have.
because we squeezed first, we get to keep working with the most boring system possible.
Here’s how I use LLMs to help me write code
https://simonwillison.net/2025/Mar/11/using-llms-for-code/
There are a bunch of great tips in here for getting better use out of LLMs for code generation and debugging.
The best way to learn LLMs is to play with them. Throwing absurd ideas at them and vibe-coding until they almost sort-of work is a genuinely useful way to accelerate the rate at which you build intuition for what works and what doesn't.
LLMs are no replacement for human intuition and experience. I've spent enough time with GitHub Actions that I know what kind of things to look for, and in this case it was faster for me to step in and finish the project rather than keep on trying to get there with prompts.
GitHub - yamadashy/repomix: 📦 Repomix
https://github.com/yamadashy/repomix
📦 Repomix (formerly Repopack) is a powerful tool that packs your entire repository into a single, AI-friendly file. Perfect for when you need to feed your codebase to Large Language Models (LLMs) or other AI tools like Claude, ChatGPT, DeepSeek, Perplexity, Gemini, Gemma, Llama, Grok, and more.
I used this for the first time to quickly bundle up a Python program into a single file that I could hand to Claude for help with a setup issue.
Assuming I'm already in the directory of the project, I can run:
$ npx repomix
I've been experimenting with mise lately for managing tool versions like node, so I'll use that.
Here I ask mise to run npx repomix in the context of Node.js v23:
$ mise exec node@23 -- npx repomix
It spit out a file called repomix-output.txt.
I wanted to drag that file from Finder into the Claude app, so I then ran:
$ open . -a Finder.app
They all use it, by Thorsten Ball
https://registerspill.thorstenball.com/p/they-all-use-it
Mostly online, but in an occasional real-world conversation someone will be expressing their disinterest and dissatisfaction with LLMs in the realm of software development and they'll say, "I tried it and it just made stuff up. I don't trust it. It will take me less time to build it myself than fix all its mistakes."
My immediate follow-up question is usually "what model / LLM tool did you use and when?" because the answer is often GitHub copilot or some free-tier model from years ago.
But what I want to do is step back here like Thorsten and ask, "Aren't you curious? Don't you want to know how these tools fit into what we do and how they might start to reshape our work?"
What I don’t get it is how you can be a programmer in the year twenty twenty-four and not be the tiniest bit curious about a technology that’s said to be fundamentally changing how we’ll program in the future. Absolutely, yes, that claim sounds ridiculous — but don’t you want to see for yourself?
The job requires constant curiosity, relearning, trying new techniques, adjusting mental models, and so on.
What I’m saying is that ever since I got into programming I’ve assumed that one shared trait between programmers was curiosity, a willingness to learn, and that our maxim is that we can’t ever stop learning, because what we’re doing is constantly changing beneath our fingers and if we don’t pay attention it might slip away from us, leaving us with knowledge that’s no longer useful.
I suspect much of the disinterest is a reaction to the (toxic) hype around all things AI. There is too much to learn and try for me to let the grifters dissuade me from the entire umbrella of AI and LLMs. I make an effort to try out models from all the major companies in the space, to see how they can integrate into the work I do and how tools like Cursor can augment my day-to-day, and to discuss with others which workflows, techniques, prompts, and strategies can lead to better, exciting, interesting, mind-blowing results.
I certainly don't think the writing is on the wall for all this GenAI stuff, but it feels oddly incurious if not negligent to simply write it off.
It’s none of their business
https://jose.omg.lol/posts/its-none-of-their-business/
I like this framing. The code we usually start with when our software system is relatively simple probably has a pretty high degree of changeability. So, the work is to make sure we preserve that changeability so that the code remains easy to change as the system grows in complexity.
To borrow from Sandi Metz's excellent book: "Design is the art of preserving changeability". My argument is that moving the logic into a domain object like Notification, which the Job simply calls, makes the code more changeable than dumping everything into the job itself.
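A minimal sketch of the separation the quote describes — the `Notification` and job names come from the quote, while the method bodies are my own illustration, not code from the post:

```ruby
# The domain object owns the "what": all notification logic lives
# here, where it can be tested and changed in one place.
class Notification
  def initialize(user)
    @user = user
  end

  def deliver
    # Imagine formatting, channel selection, etc. happening here.
    "notified #{@user}"
  end
end

# The job stays a thin scheduling shell: it knows how to run in the
# background, not what notifying means. Swapping delivery logic
# later never requires touching the job.
class NotificationJob
  def perform(user)
    Notification.new(user).deliver
  end
end
```

The changeability win is that the job's concern (queueing, retries) and the domain's concern (what a notification is) can now evolve independently.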
An Analogy for Software Development
https://www.codesimplicity.com/post/an-analogy-for-software-development/
Try this analogy the next time you’re trying to explain to someone who isn’t a software developer what it is that a software developer does.
I do want to think about how to condense it down to an elevator pitch so that if it comes up in casual conversation, I can relate the idea quickly.
My LLM codegen workflow atm
https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
Flip the script by getting the Conversational LLM to ask you questions instead of you asking it questions.
Ask me one question at a time so we can develop a thorough, step-by-step
spec for this idea. Each question should build on my previous answers, and
our end goal is to have a detailed specification I can hand off to a
developer. Let’s do this iteratively and dig into every relevant detail.
Remember, only one question at a time.

Here’s the idea:
...
This is a good way to hone an idea, rubberduck, and think through a problem space.
I tried a session of this with Claude Sonnet 3.7. It asked a lot of good questions and got me thinking. After maybe 8 or so questions I ran into the warning from Claude about the conversation getting too long and running into usage limits (not sure what to do about that speed bump yet).
Calling private methods without losing sleep at night
https://justin.searls.co/posts/calling-private-methods-without-losing-sleep-at-night/
A little thing I tend to do whenever I make a dangerous assumption is to find a way to pull forward the risk of that assumption being violated as early as possible.
Tests are one way we do this, but tests aren’t well-suited to all the kinds of assumptions we make about our software systems.
We assume our software doesn’t have critical vulnerabilities, but we have a pre-deploy CI check (via brakeman) that alerts us when that assumption is violated and CVEs do exist.
Or as Justin describes in this post, we can have some invariants in our Rails initializer code to draw our attention to other kinds of assumptions.
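For example, here's a hedged sketch of such an invariant — not Justin's actual code, and the `Mailer` class and `render_body` method are hypothetical. If our code reaches into a private method via `send`, we can assert at boot that the method still exists, so a dependency upgrade fails loudly at startup instead of silently breaking at runtime:

```ruby
# Hypothetical class standing in for a dependency whose private
# method we (dangerously) call via `send` elsewhere in the app.
class Mailer
  private

  def render_body
    "hello"
  end
end

# In a Rails app this check would live in an initializer, e.g.
# config/initializers/invariants.rb, so it runs on every boot.
unless Mailer.private_method_defined?(:render_body)
  raise "Invariant violated: Mailer#render_body no longer exists. " \
        "We call it via send; update the call site before deploying."
end
```

The point is the same as the brakeman check: the assumption is still risky, but its violation now surfaces at the earliest, cheapest possible moment.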
TIOBE Index - Popularity Index for Programming Languages
https://www.tiobe.com/tiobe-index/
This index was completely new to me. I was interested to see Python, C++, and Java in the top three spots; SQL and Go holding strong at 7 and 8 respectively; Ruby at the 19th spot; Prolog, of all things, in the 20th spot; and lastly, the omission of TypeScript.
Heard about this via Nate Berkopec.
Five coding hats
https://dubroy.com/blog/five-coding-hats/
It doesn't make sense to write code with the exact same approach regardless of the circumstances. Depending on the situation, it may be more appropriate to take a more fast and loose approach than a careful and rigorous one.
Scrappy and MacGyver Hats vs. Captain's Hat
A real-world illustration of this that jumps to mind is "setting up payment processing for my app":
- Common approach: fully wire up Stripe webhooks with your backend logic and database, etc.
- Scrappy approach: a Stripe payment alerts me, I go into the Rails console and add a user record for their email
From pain to productivity: How I learned to code with my voice
https://whitep4nth3r.com/blog/how-i-learned-to-code-with-my-voice/
Primary tools used:
- Talon
- Cursorless
- Apple Voice Control
- Rango
Software lessons from Factorio
https://www.linkedin.com/posts/hillel-wayne_factorio-activity-7282805593428402176-xB8j/
"Scalability and efficiency are fundamentally at odds. Software that maximizes the value of the provided resources will be much harder to scale up, and software that scales up well will necessarily be wasteful."
Good, Fast, Cheap: Pick 3 or Get None
https://loup-vaillant.fr/articles/good-fast-cheap
"To get closer to the simplest solution, John Ousterhout recommends, you should design it twice. Try a couple different solutions, see which is simplest. Which by the way may help you think of another, even simpler solution."