jdilla.xyz

Good tokens 2025-08-21

2025-08-22

Things I learned

A group of kangaroos is called a mob. A group of jaguars is a shadow.

Worth your time

Adding an age check reduced online porn traffic in the UK by 47%. Whether or not you believe it is right, I believe it is unlikely that ten years from now people will be unable to get porn online without verifying their age in some way.

What kids say about getting off their phones. Freedom is the killer app.

Zhengdong Wang on whether or not AI is a normal technology. Zhengdong’s 2024 letter and productivity advice are some of my favorite recent pieces of internet writing.

Devon Zuegel on how to build a new town. I want to do this someday.

I’m in love with the conspiracy theory that the terra cotta warriors are fake. I don’t believe it is true, but I love going down the rabbit hole. Someone make the definitive YouTube video on this!

Disposable delivery drones are a thing.

Product / Market Experiments: “Experimentation is a skill developed via learning-by-doing, and angels have a skill advantage in that domain because of having more operational experience.” Filed under “we all experiment too little.”

Rooting for Austin Vernon.

Under the hood with Claude Code. Also, AI coding agents and IDEs ranked.

Musings

Be the person who writes things down.

Life should be much rougher.

Most of your audience never reads what you wrote. They are told about it by someone who read it. Told to me by patio11.

Some Kipling:

Ancient, effortless, ordered, cycle on cycle set,

Life so long untroubled, that ye who inherit forget

It was not made with the mountains, it is not one with the deep.

Men, not gods, devised it. Men, not gods, must keep.

How do the rhythms of work change when anyone can build a proposed product change? What software is needed to support this?

The most important skill.

The Social History of the Code Machine

2025-08-15

I’ve been reading The Social History of the Machine Gun, which tells the story of the introduction and adoption of automatic weaponry on the battlefield.1 I’m not really a gun person, but I found it fascinating because it is a real-life story of how a new technology challenges the values and assumptions of people and institutions. The life-and-death stakes add weight to the reluctance of key leaders to adapt to the implications of the new technology. It caused me to reflect on how AI is changing software development and gave me some practical ideas on how teams and people should be adapting to get the most out of the technology.

No play to the pulses

What follows greatly simplifies large periods of military history, but I believe it is a directionally correct description of John Ellis’s central argument.

Prior to the deployment of the Gatling gun, the decisive charge was the center of most military operations. The goal of an army was to time its decisive charge to overwhelm its opponents, break their lines, and take the field. This is how Napoleon fought, and it was not altogether different from how Julius Caesar fought.

As guns — muskets and cannons — were introduced to the battlefield, they were introduced in service of the decisive charge (again, radically simplifying). The purpose of lining up lots of men in well-ordered lines and firing muskets was to concentrate enough firepower to soften up the enemy for the bayonet charge to come. So central was the decisive charge to battlefield tactics that one late-19th-century British Army captain was quoted in the book as saying that “guns were not as a rule made for actual warfare, but for show.” 2

The machine gun, beginning with the Gatling gun and continuing with the Maxim and Browning guns, changed everything. Different guns had different levels of performance, but one primary source in the book notes that an early machine gun allowed a single soldier to concentrate 40x as much firepower as existing methods. Furthermore, this firing speed was reliable; it was the same for new recruits as it was for highly drilled veterans.

Over the following fifty years, in fits and starts, the ability to concentrate firepower began to change warfare. At first, machine guns were primarily used in defensive contexts. There was ample evidence from colonial conflicts that charges were useless against them, even in (previously) overwhelming numbers. Then, in the Russo-Japanese War, the Japanese pioneered the use of covering fire to execute offensive maneuvers.

Despite these examples, militaries around the world were reluctant to take the evidence in front of them to its logical conclusion and reorganize around the new weapon. As late as 1915, the British Army was placing heavy emphasis on bayonet training and telling its soldiers: “The bayonet… is the ultimate weapon in battle.” In Ellis’s view, it was the machine gun more than anything else that caused the First World War to turn into a war of attrition, and it was only after the war that a true reimagining of tactics began.

So why were militaries so slow to adopt new technology when the stakes were so high? Ellis makes a persuasive argument that adoption of machine guns and the tactics they enabled was hindered by the values of military leaders and the institutions they maintained. One quote from the book, in reaction to a demonstration of the Gatling gun: “But soldiers do not fancy it… it is so foreign to the old familiar action of battle — that sitting behind a steel blinder and turning a crank — that enthusiasm dies out; there is no play to the pulses; it does not seem like soldiers work.”

The new weaponry and the changes in tactics it required conflicted with their sense of what it meant to be a good soldier. They couldn’t let go of orderly lines and courageous charges, even under pain of death.

What is our work?

I’m not a military historian, but I am a software creator. While reading this book, I’ve been thinking about AI in general and software development in particular. For at least the last 15 years (my entire career), the assumption has been that code is expensive to create and must be written with extreme care… and that isn’t the case anymore.

It’s easy from the perspective of 2025 to look back at the military elites of the 1890s, with their uniforms and funny facial hair, and laugh at how backwards they were. I struggled at times to fully believe the stories in the book. Who has such an emotional attachment to how a victory is won?

It’s harder to realize that these were accomplished, intelligent, competent men who had these habits drilled into them and who had literal victories to their names. The values that made them successful had become second nature to them and natures are hard to change.

So how can we learn from their experience?

If I took one thing away from this book, it’s that our values bleed into our work. Timeless values like remaining disciplined under pressure are expressed in actions like marching in a straight line, and we become attached to those actions rather than the values. When technology changes those actions, it feels viscerally wrong to us. I see a lot of this in the discussion around vibe coding. We should be prepared for this feeling and seek to be curious rather than judgmental. It’s never a bad time to reflect upon your essential values!

A second takeaway was the interaction between values, tactics, organizational design, and training. Unlocking the power of the machine gun required changes in:

  • Values (e.g., the understanding of what made a good soldier)

  • Tactics (e.g., machine guns are used differently than other weapons)

  • Organizational design (e.g., increasing the number of machine gunners in a unit)

  • Training (e.g., giving units time and resources to master the new technology)

To be effective, these changes had to happen together. This should make intuitive sense. Changing your tactics will be ineffective if you aren’t trained on the tools you’re using and you’ll never invest the time in training on something you don’t value.

At the margin, all of us probably experiment too little, but this is even more true now. In the entire book, there was only one anecdote I can remember of a unit overestimating the capabilities of a machine gun, and hundreds of people who underestimated it. Often there were pockets of experimentation from outsiders or units operating in atypical circumstances, like the previously mentioned British colonial and Japanese units. Central commands were quick to discount these experiences rather than seeking to understand them.

How might the future look?

Taking my own advice, here’s a proposal for what the software team of the future looks like:

  • Using an agent, (virtually) everyone in the organization has the ability to code, proposing changes to the product. Sales, customer support, marketing operations and more are all attempting to improve the product.

  • This may even extend to people outside the formal organization — for instance, customers may be given the ability to propose product changes that first go live only on their account and then are adopted more broadly.

  • A relatively small set of people, a blend of executives and hands-in-the-code architects, designers, and PMs, is tasked with managing the scalability, design, and strategy of the product. They’re reviewing working prototypes and thinking about the second-order implications.

  • Experimentation with these prototypes becomes much, much more common. New ways of starting, assessing, and sunsetting experiments are needed.

  • All of this will be heavily mediated by AI agents that both improve the output of the “non-technical” team and give leverage to the keepers of product quality.

  • Despite heavy use of AI, attention to detail and the ability to get into the weeds to make something great will continue to be prized — if anything, it may become even more important.

All in all, it becomes more like a well-maintained and opinionated open source project than the standard “three-in-a-box” PM / Designer / Engineering lead.



  1. Shoutout to Jordan Schneider, whose essential ChinaTalk podcast brought this to my attention.

  2. Ellis does note that this was an extreme position, but the Captain in question was an advisor to Hiram Maxim, one of the early machine gun innovators. 

Good tokens 2025-08-08

2025-08-08

Worth your time

  1. The Medium CEO on the turnaround he has led over there.
  2. Yoram Hazony on Ezra Klein. I appreciated the way both Ezra and Yoram sought to understand each other. Afterward, I was sold by Yoram on the idea that we need a national narrative for why the United States should exist that most people buy into, but I couldn’t understand why he supports the people he does in creating that narrative.
  3. Balaji’s 10 AI thoughts. I disagree that AI is better for front end than back end, and I’m not sure I see where he’s going on crypto (a recurring theme for me with Balaji’s work), but I really enjoyed these.
  4. 100 years of Art Deco. Reminder that America was supposed to be Art Deco.
  5. The CEO of Resend says his sign-up data shows we have a new definition of what a developer is.

Musings

I spent this week improving my HeyRecap agent. Some of the biggest performance gains I saw came from improving the tools that my agent had access to. The intelligence of the model was almost never the limiting factor for my use case; instead, flawed design of my search / read features was leading to bad output. Building agents means building tools for agents. More to come on this.
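
As a rough sketch of what “building tools for agents” can look like in practice (the function and field names below are hypothetical, not HeyRecap’s actual code), the difference often lives entirely in what the tool returns: a few short, labeled, capped results the model can reason over, rather than a raw dump that buries the answer and burns context.

    # Hypothetical Python sketch: the same search capability exposed two ways.
    from dataclasses import dataclass

    @dataclass
    class SearchHit:
        title: str
        snippet: str
        source: str

    def search_raw(query: str, corpus: list[dict]) -> str:
        # Naive tool: dump every matching document in full.
        # The model drowns in unlabeled text and wastes its context window.
        return "\n\n".join(
            d["text"] for d in corpus if query.lower() in d["text"].lower()
        )

    def search_for_agent(query: str, corpus: list[dict]) -> list[SearchHit]:
        # Agent-friendly tool: a handful of short, labeled snippets,
        # so the model can decide what to read next instead of guessing.
        hits = [
            SearchHit(title=d["title"], snippet=d["text"][:280], source=d["url"])
            for d in corpus
            if query.lower() in d["text"].lower()
        ]
        return hits[:5]

The model on either side of these tools is identical; only the shape of the results changes, and that shape is usually what separates good output from bad.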

The optimal UI for AI collaboration is a coactive surface where both the human and the AI write to and read from the same set of materials.

I think David Shor is wrong about reading and writing. I do think video is going to rise in importance, but in the age of AI, reading and writing closely is going to be even more important.

Middlemarch Book Notes

2025-07-31

The first book in ~2.5 years to crack my booklist. This book has more character voices than any book I’ve read in a long time. I can hear so many of them in my head: Dorothea, Celia, Fred, Caleb, Mary, and especially Mr. Brooke.

A rebuke to the idea that cellphones ruin the plot of movies. Almost all of the driving conflicts in this story are about the things people cannot bring themselves to say. Dorothea reminds me of George Bailey: “The growing good of the world is partly dependent on unhistoric acts; and that things are not so ill with you and me as they might have been, is half owing to the number who lived faithfully a hidden life, and rest in unvisited tombs.”

I was blown away by the self-reliance of the main characters. They are so reluctant to blame others for their problems and/or lash out at them. I wonder if this reflects how society has changed.

I expect someday I’ll read it again. See also Mark’s reflections on it.

Good tokens 2025-07-30

2025-07-30

Inspiration

That became my yardstick: I’d ask, “Is this dish good enough to come downtown and wait in line for? If not, it’s not what we’re after.” A chef can go years before getting another dish like that. We’ve been lucky: Hits have come at the least expected time and place. I’ve spent weeks on one dish that ultimately very few people would care about. And then I’ve spent 15 minutes on something that ends up flooring people like the pork bun.

David Chang on strange loops and food. “Is this good enough to come downtown and wait in line for?” is going to be my measuring stick for all future projects.

Things I learned

  1. In Switzerland, you are never more than 16km from a lake. About Switzerland.
  2. The English monarchy didn’t formally relinquish its claim on the French throne, originating with Edward III, until 1801 — after Napoleon had become dictator. The Rest Is History.
  3. A 2019 survey of 2,000 American adults found that 79% had made at least one drunk purchase and that they averaged $444 in drunk purchases per year. The Hustle.
  4. International adoptions in the US are down 94% since the peak in 2004. Pew

Worth your time

  1. Cate Hall and Patrick McKenzie on agency. Some notes for me: be willing to go places others won’t and do things others won’t do, including looking stupid and taking hard feedback. More from Cate here.
  2. Ben Reinhardt on Fat Ideas and False Negatives.
  3. A v0 friction log. I’m increasingly convinced that all these vibe coding tools are collapsing into a single hyper competitive category.
  4. How to achieve victory in Ukraine and the future of cheap UAVs
  5. The Bitter Lesson and the Garbage Can. This has me wondering: what makes AI a research problem rather than an engineering problem?
  6. The Electric Tech Stack
  7. I’ve officially built an AI agent. And my HeyRecap build-in-public document.