software development

🪦 Project Graveyard: Papagei Terminal 🦜

2025-11-06

I’m trying to get better at building in public and at celebrating projects that end up as dead ends. Take this in that spirit.

What it was

Papagei Terminal let a user spin up virtual machines as if they were Slack channels, making it easier to run more than one Claude Code instance at the same time and to use Claude in --dangerously-skip-permissions mode.

Here is an early prototype. Future versions were way better!

What went well

I think I had a really clear idea of who this was for and the need that it was serving.

This is by far the most ambitious technical project I’ve built. I was able to use it to make meaningful code changes across several projects. I learned a bunch about working with AWS and with agents.

I actually got to the point where a tool I built was able to make code changes to other projects. That was really motivating.

Why it didn’t work / why I’m shutting it down

At the beginning of the summer, there really wasn’t a product that let you run more than one Claude Code instance at the same time without putting real effort into understanding git worktrees. Conductor was experimenting here, but it was all local.
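For anyone who hasn’t tried it, the manual workflow looks roughly like this: create one git worktree per task, point a separate Claude Code session at each directory, and clean them up afterwards. Here’s a minimal, hypothetical sketch of that setup step; the Python wrapper and the names in it are mine, but the underlying git worktree command is the real one.

    import subprocess
    from pathlib import Path

    def add_worktree(repo: Path, branch: str) -> Path:
        """Create a sibling worktree for `branch` so a second Claude Code
        session can work on the same repo without stepping on the first."""
        target = repo.parent / f"{repo.name}-{branch}"
        # `git worktree add -b <branch> <path>` creates the branch and checks
        # it out into a separate directory that shares the same repository.
        subprocess.run(
            ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(target)],
            check=True,
        )
        return target

    # Hypothetical usage: one worktree (and one Claude Code session) per task.
    # add_worktree(Path.home() / "code" / "my-project", "fix-login-bug")
    # add_worktree(Path.home() / "code" / "my-project", "add-billing-page")

That per-task setup and teardown is exactly the friction Papagei Terminal was trying to remove.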

Between May, when I started working on this seriously, and the point where I was really starting to enjoy using it, everyone launched a version of this, and I was no longer convinced that I had something unique to bring here.

What I learned / what I would do differently next time

Realistically I probably started building this too late. I also don’t think I am embedded enough in the community of software developers to get a following here.

I wasn’t active enough in recruiting early users.

Supra Podcast: How AI is changing the future of software development

2025-10-28

I got to join Marc and Ben from the Supra podcast to talk about how AI is changing how software teams operate.

Three things I took away from this conversation:

First, there are some things that AI doesn’t change. At the end of the day, you’ve still got to define the problem, define the approach, and define the details. AI changes the tools, the artifacts, and the process, but it doesn’t change the basic facts of problem solving.

Second, AI is changing how software is made at three levels simultaneously: individuals, teams, and organizations. Individuals are trying out tools (e.g., Claude Code) and folding them into their workflows. Then some teams are starting to adopt these tools en masse and reorganize their processes around them. Finally, organizations are trying to figure out what all of this means for the “standard” way of working and shipping software.

Third, to get this right, organizations need to be willing to change across four dimensions:

  • Tools - What are the tools that are available to us? What are their benefits and limitations?

  • Tactics - How do we coordinate with these tools to achieve a result? What artifacts are created? What is the size of the team, and what roles do people play?

  • Training - How do we build competence on these new tools and tactics? How do we give people space, opportunity, and resources to experiment?

  • Values - What does great work look like? What is important and celebrated?

Without all of these working together, organizations will fail to get value out of a transformative technology. And I have to be honest: right now I’d rather be at a small company experimenting with new ways of working than at a large company where I have to worry about how this works at scale.

You can find the whole episode here:

  • Spotify: https://open.spotify.com/episode/7nozDSwSk3fuAK4TQWxm5l?si=oA0qIwIJShqTCjMOLFFC0Q

  • Apple: https://podcasts.apple.com/us/podcast/81-i-spent-3-months-at-an-ai-native-startup-where/id1737704130?i=1000733726676

  • YouTube: https://youtu.be/GbOw8_JViPA

  • Substack: https://suprainsider.substack.com/p/81-i-spent-3-months-at-an-ai-native

Mike Judge asks good questions about AI shovelware

2025-10-09

Mike Judge has a great piece poking at the AI hype, in which he asks, essentially, “If these tools are so great, where is the explosion of AI-created stuff in the world?”

The whole piece is worth a read, but one of the most interesting things to me about it is the data he brings to bear on the question.

He looks at:

  1. iOS app releases
  2. Android releases
  3. Domain registrations
  4. Steam releases
  5. Public GitHub repos created

He then concludes from these that AI coding tools are “bullshit,” ending with a call for people who claim they are now 10x software engineers because of AI to show the receipts.

First, I want to concentrate on what I love about this. “If this is so great, where is it in the data?” is absolutely the right question to be asking.

And there is definitely a dog that isn’t barking here. The data he cites aren’t perfect (more on this in a second), and yet really impactful things tend to move obvious metrics. The gains in life expectancy between 1870 and 1970, for example, are easy to see.

So on one hand, I love this challenge. On the other, I think he goes too far in calling it all bullshit and saying that it doesn’t work.

I’ll hold myself out as the example that Mike asks for. While I won’t claim to be a 10x engineer, I had never completed a meaningful software project in production before GPT-4 launched. Since then, I’ve coded this blog / portfolio site myself, launched an AI-based local news site that has hundreds of weekly readers, and built a third, unreleased prototype that I think could be a real product.

Analyzing myself against Mike’s charts:

  1. I haven’t launched an iOS app
  2. I haven’t launched an Android app
  3. I have bought 3 domains
  4. I haven’t launched a Steam app
  5. I’ve created 1 public repo, unrelated to my AI coding work

Now I’m open to the idea that I’m the exception rather than the rule… but I’m also too humble to think that I’m a unicorn on this dimension.

There’s plenty of room for middle ground here. It’s totally possible that:

  1. AI tools are net negative for most software engineers
  2. AI tools are transformative for people like me
  3. People like me are a minority

Intuitively, I doubt that this is true, and yet I don’t have hard data beyond my personal experience to bring to bear on the question. It’s certainly something I’ll be thinking about over the coming months. A more likely explanation, in my view, is that we haven’t found the right combination of values, tactics, organizational design, and training to unlock AI software productivity at scale… but I can’t prove that at this point.

Worth a ponder.