Food, Software, and Trade-offs
Everything has trade-offs, a set of attributes optimized and balanced towards a particular outcome.
You get X, but you lose Y.
Life is full of trade-offs. Anyone who says otherwise is trying to sell you something.
Everything has trade-offs, a set of attributes optimized and balanced towards a particular outcome.
You get X, but you lose Y.
Life is full of trade-offs. Anyone who says otherwise is trying to sell you something.
Unlike technical debt, which announces itself through mounting friction - slow builds, tangled dependencies, the creeping dread every time you touch that one module - comprehension debt breeds false confidence. The codebase looks clean. The tests are green. The reckoning arrives quietly, usually at the worst possible moment.
Margaret-Anne Storey’s describes a student team that hit this wall in week seven: they could no longer make simple changes without breaking something unexpected. The real problem wasn’t messy code. It was that no one on the team could explain why design decisions had been made or how different parts of the system were supposed to work together. The theory of the system had evaporated.
Knowing the code you are introducing to your codebase is so important. As soon as you offload that blindly to AI, the timebomb starts ticking.
I read one engineer say that the bottleneck has always been a competent developer understanding the project. AI doesn’t change that constraint. It creates the illusion you’ve escaped it.
And the inversion is sharper than it looks. When code was expensive to produce, senior engineers could review faster than junior engineers could write. AI flips this: a junior engineer can now generate code faster than a senior engineer can critically audit it. The rate-limiting factor that kept review meaningful has been removed. What used to be a quality gate is now a throughput problem.
The nightmare is when the AIs create such large PRs that make it so easy to miss wrong turns in code.
There’s also a specific failure mode worth naming. When an AI changes implementation behavior and updates hundreds of test cases to match the new behavior, the question shifts from “is this code correct?” to “were all those test changes necessary, and do I have enough coverage to catch what I’m not thinking about?” Tests cannot answer that question. Only comprehension can.
The tests pass...the code must work...right? Right?
You will pay for comprehension sooner or later. The debt accrues interest rapidly.
People, the context-bearers, have experience and capabilities that machines might never understand encoded in our muscles and memory. I’m on record saying I despise nuance –and I do– but it’s more important than ever to be able to connect to our fellow humans over this nuance so our world is not paved over by contextless opinions from ill-informed robots. Empower and believe people over machines.
Let me try to define the Good Web. The Good Web is any part of the internet built in good faith, which I mean in the specific, contractual sense. The maker is not optimizing against the user. No dark patterns. No retention schemes. No bloated scripts designed to keep you scrolling past the point of nourishment into the territory of compulsion. Nobody on a bbCode forum is selling your reading habits to an insurance company. The Good Web is not a technology, not a protocol, not even a community—though it contains all of those things. It's a disposition toward the person on the other end of the connection. It's the difference between a neighbour who bakes you bread and a supermarket that puts the bread at the back of the store because they know you'll buy chips on the way. Both are offering you something. Only one of them gives a shit whether you leave full.
More of this please.
You simply cannot breathe without seeing, hearing, or engaging in any kind of technical conversation about AI. AI has dominated the Zeitgeist so catastrophically that the only way to escape is to turn off the WiFi and delete all the apps. Every single piece of fucking software has some kind of shitty AI add-on, forced into your face at regular intervals whilst you’re trying to go about your life or do your job or check your email or write an email or read an email or talk to a human support agent or read a recipe or open an issue on an open-source project or watch a YouTube video or open your IDE or do a fucking internet search. The cognitive overload of AI trying to Make You More Productive™️ whilst you’re actually trying to be productive is so shockingly absurd. And yet, we are being made to feel like we are stagnating, being left behind, not good enough, that we are luddites should we not adopt this imposing technology. We are being told we’re missing out, even though we’re probably doing just fine. The technology is gaslighting us.
It's all opt-out (if you're lucky enough to be able to turn it off) too.
What’s even more personally terrifying: what if I need to find a new job in the near future? There are seemingly no non-generative-AI-centred options left for someone like me. I’m afraid that every opportunity will either be for a company building some kind of generative AI experience, or one that mandates the use of generative AI in your daily responsibilities, or one that refuses to use AI at the expense of their financial success and the stability of my employment. At this point I cannot escape. I am at the mercy of the profession I chose. I have a family to feed and a mortgage to pay. Retraining is not an option right now. I must force myself to adapt.
This was a fantastic conversation bringing in a bunch of the nuance that is so often missed.
No one was particularly optimizing for engagement or time-on-site or conversion. People made websites because they had something to say, or something to show, or just because they could. The web was weird and slow and full of bad tiled backgrounds, bad fonts and dumb ideas.
It was also weirdly, wildly, wonderfully human.
I almost feel like this is one benefit of Musk's takeover of Twitter/X. More people seem to have started building or resurrecting their own personal sites.
There are still people building the web by hand, very much like we did it in the early days. They know all about what's possible using modern tooling, yet they choose to expend their time and attention to the craft of doing it by hand. They care about the craft, and they care about what they're making. They believe in their unique skill and vision over engagement strategies and analytics and content algorithms. They don't need a platform, or they'll build their own.
But agentic coding is about more than moving upwards in abstraction. The compiler gave us abstraction without ambiguity. You wrote C, and it became assembly, deterministically. The layers were clean, and you remained a programmer in the traditional sense of how we’ve always understood the word.
What’s happening with agentic coding might better be captured by a term coined by Venkatesh Rao: “oozification.” Oozification, as Rao describes it, is the tendency of technological systems to evolve from structures built of large, rule-heavy building blocks to ones composed of smaller, more fluid, less constrained components.
Imagine, if you will, the difference between a man-made, plantation forest and a swamp. The forest has legible structure: tidy rows, canopy, understory, floor. The swamp is murkier, richer in evolutionary possibility, but also much harder to read. Oozification is the transformation of the forest into the swamp. The number of possibilities increases, while the number of certainties decreases, and that combination tends to make people downright nervous.
A natural language prompt doesn’t compile into code. Instead, it gets interpreted, completed and sometimes second-guessed by a probabilistic system. Intent blurs into elaboration and precise control gives way to fuzzy suggestion. It’s oozy and messy programming, and the role of the programmer blurs as well into something with unclear boundaries—part orchestrator, delegator, babysitter, designer, reviewer. People have always struggled to call software development honest-to-goodness “engineering,” and with the oozification of the practice, that highly-esteemed label has only become more ill-fitting.
Complexity looks smart. Not because it is, but because our systems are set up to reward it. And the incentive problem doesn’t start at promotion time. It starts before you even get the job.
I've learned so much over the years, and while it is always helpful to think about how features might be used in the future, it's even more helpful to know when to worry about it now, and when to leave it for later...if later ever comes.
The actual path to seniority isn’t learning more tools and patterns, but learning when not to use them. Anyone can add complexity. It takes experience and confidence to leave it out.
I've heard the term cognitive debt being bandied about. Having to deal with larger PRs, especially with a good deal of AI-generated code can be taxing.
This, I believe, is why reviewing AI-generated code feels so much more exhausting than reviewing a colleague's work. When a human pair submits code after a whiteboarding session, I am reviewing implementation against a design I already understand and agreed to. When AI generates code from a single prompt, I am simultaneously evaluating scope (did it build what I needed?), architecture (are the component boundaries right?), integration (does it fit our existing infrastructure?), contracts (are the interfaces correct?), and code quality (is the implementation clean?) — all at once, all entangled.
That is too many dimensions of judgment for a single pass. The brain is not built for it. Things get missed — not because I am careless, but because I am overloaded.
There’s no mention of how these tools are causing corporations to blow past their already tepid climate goal; no mention of how the affluent, surveillance-obsessed exec dictating its trajectory enthusiastically cozied up to fascists; no mention of how Elon Musk and Mark Zuckerberg’s data centers are funneling pollution directly into black neighborhoods; zero mention of the technofascist plan to leverage AI to decimate unions; no mention of the weird and precarious financial shell games powering the sector.
I don’t like using code that I haven’t written and understood myself. Sometimes its unavoidable. I use two JavaScript libraries on The Session. One for displaying interactive maps and another for generating sheet music. As dependencies go, they’re very good but I still don’t like the feeling of being dependant on anything I don’t fully understand.
I can’t stomach the idea of using npm to install client-side JavaScript (which then installs more JavaScript, which in turn is dependant on even more JavaScript). It gives me the heebie-jeebies. I’m kind of astonished that most front-end developers have normalised doing daily trust falls with their codebases.
I don’t think I’m as adverse to dependencies as Jeremy, but I’ve definitely shifted more to his views over the years. I’ll definitely write my own code more readily than I would’ve several years ago.