jdilla.xyz

This week’s token stream

2024-10-04

How to make millions as a professional whistleblower. What a weird and interesting career path.

It’s time to talk about America’s disorder problem. One of the things that stood out most to me when moving back to the US from Switzerland was the amount of disorder that we tolerated as a society. This tolerance for disorder might not be entirely bad — America is nothing without its weirdos — but I’m not sure we realize the degree to which it is a choice.

Inventing on principle. Fantastic and thought-provoking talk about what motivates innovation. It has me wondering what principles I can commit to in this way.

How to succeed at Mr. Beast Productions. Includes a great 101 description of how YouTube’s algorithm works + a lot of tenacity.

How I failed. The CEO of O’Reilly Media talks candidly about the biggest lessons he’s learned along the way. Rare to get this much candor in one of these.

Meta smart glasses lead to real-time doxxing. I don’t see any way we can expect to go unrecognized in the future. Better to accept it.

This week's token stream

2024-09-09

How the psychiatric narrative hinders those who hear voices | Aeon Essays - an exploration of the “Targeted Individual” community, people who hear voices in their heads. Weird, wild, and a bit scary.

How to beat AI at Go - humans are able to beat the best AIs at Go by finding failure cases they aren’t prepared for. This is the future of warfare.

Palmer Luckey profile: such a great reminder that anything is possible with hard work and determination. Similarly, Casey Handmer on how entrepreneurship has changed the way he thinks.

Scaffolding

2024-08-14

I’m really coming to appreciate the value of scaffolding in product development.

What do I mean by scaffolding? The structure that allows you to build the product effectively.

Some examples:

  • A group of early customers willing to give fast, high quality feedback
  • Sample inputs and outputs that allow you to verify quality
  • Analytics or telemetry that give you an early indication of success or failure

You set up scaffolding to help you build. It doesn’t need to be pretty, but it needs to be fast, cheap, and effective. At the end of the project, you take it down. Or maybe you incorporate it into the structure of the product, improving it to make it fit for purpose.
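To make the “sample inputs and outputs” bullet concrete, here’s a minimal sketch of what that kind of scaffolding might look like in Python. It’s a throwaway golden-example check, not a real test suite; the file name, the JSON shape, and the summarize function are all assumptions for illustration.

```python
# scaffold_check.py - quick, throwaway quality check against saved golden examples.
# Assumes a JSON file of {"input": ..., "expected": ...} pairs and a hypothetical
# product function `summarize` that we're iterating on.
import json

from my_product import summarize  # placeholder for the thing being built


def run_golden_checks(path="golden_examples.json"):
    """Run the product against saved examples and report any mismatches."""
    with open(path) as f:
        examples = json.load(f)

    failures = []
    for example in examples:
        actual = summarize(example["input"])
        if actual != example["expected"]:
            failures.append((example["input"], example["expected"], actual))

    print(f"{len(examples) - len(failures)}/{len(examples)} examples passed")
    for inp, expected, actual in failures:
        print(f"- input: {inp!r}\n  expected: {expected!r}\n  got: {actual!r}")


if __name__ == "__main__":
    run_golden_checks()
```

Nothing about this is pretty, but it’s fast to write, cheap to run on every change, and easy to throw away (or grow into a real eval suite) when the project ends.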

Sometimes the scaffolding feels like a distraction. I’m going to build a whole separate structure just to help me build? Only if you want to build it well.

The best projects I’ve worked on outline the scaffolding early: these are the support structures we’ll need to do good work fast.

The AI that makes the AI

2024-08-13

One of the grand challenges of artificial general intelligence is developing agents capable of conducting scientific research and discovering new knowledge. While frontier models have already been used as aids to human scientists, e.g. for brainstorming ideas, writing code, or prediction tasks, they still conduct only a small part of the scientific process. This paper presents the first comprehensive framework for fully automatic scientific discovery, enabling frontier large language models to perform research independently and communicate their findings. We introduce The AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a full scientific paper, and then runs a simulated review process for evaluation. In principle, this process can be repeated to iteratively develop ideas in an open-ended fashion, acting like the human scientific community. We demonstrate its versatility by applying it to three distinct subfields of machine learning: diffusion modeling, transformer-based language modeling, and learning dynamics. Each idea is implemented and developed into a full paper at a cost of less than $15 per paper. To evaluate the generated papers, we design and validate an automated reviewer, which we show achieves near-human performance in evaluating paper scores. The AI Scientist can produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer. This approach signifies the beginning of a new era in scientific discovery in machine learning: bringing the transformative benefits of AI agents to the entire research process of AI itself, and taking us closer to a world where endless affordable creativity and innovation can be unleashed on the world's most challenging problems.

From The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery, full paper available here

I'm interested to read this one more closely and see the degree to which it does (or doesn't) rely upon having experiments that the LLM can execute without human intervention. Either way, it's an interesting result, but my hypothesis is that "places where the LLM can verify a result" will be the limiting factor.