Gaissmaier and Gigerenzer found that Americans flew less and drove more in the year after 9/11, which led to 1,600 more traffic deaths over that period than would otherwise have been expected.
A somewhat trivial topic: Swiss relief maps show the sunlight as coming from the northwest, when in real life it comes most often from the south. From the article:

"It pains me to see the warm vineyards and villages on the sunny side of the main Valais valley, on the north side of Lake Geneva, and on the heavily farmed sunny slopes of the north side of the Anterior Rhine Valley in the shade, while the wooded slopes on the shady side are bathed in blazing sunlight."

The culprit? Most artists draw with their right hands and write from left to right, so European maps tend to show shade on the right-hand side of the map. The article is beautifully rendered; something about it transports me to summertime in the Alps. Via The Browser.
It reminded me of #49 in my list of 52 things I learned in 2022, a fact I found via Rob Henderson: if a married woman is diagnosed with a brain tumor, there is a 21% chance that the couple will divorce; if the husband has the tumor, there is only a 3% chance they will divorce.
Based on some googling, I don't think this is the exact same study, but in the spirit of intellectual honesty, I figured I should post it.
There is some nuance, but the general relationship between illness and husbands divorcing their wives no longer holds.
Congratulations to I-Fen Lin and Susan Brown, who found the error, and Amelia Karraker who handled the correction with dignity.
New to me this week is Dean Bog, who makes 15-minute videos on various neighborhoods around the city of Pittsburgh.
If you have any interest at all in the city, I suggest you check them out. The Bloomfield video (I think his first one) is a good place to start, but they're all worthwhile. As someone who grew up in Pittsburgh, I'm amazed at how much they teach me, both about the "facts" of the city and its culture.
One thing Dean points out is that the city's neighborhoods have a distinct feel because of the geography. The hills and rivers mean that two neighborhoods that sit side by side on a map may have no actual connections between them, and so can evolve totally differently.
I found out about Dean via this City Cast Pittsburgh episode where he talks about his process. I'm paraphrasing here, but one thing he does is basically walk around and talk to people until they introduce him to the unofficial mayor of the area, who tells the story of the place.
I found Phind via Marginal Revolution and Tyler Cowen's recommendation. Overall, I found it to be close to ChatGPT 4, if not slightly better, and free!
I decided to try it out because the moment last year when it seemed like OpenAI might implode reminded me again how reliant I am on ChatGPT, especially for programming.
As I've written before, I don't really program. Instead, I scope and test. My typical workflow looks something like this:
Overall project definition: I start by asking ChatGPT: "I want to build a search feature for my blog that finds the best posts for a given query. What components would go into that?"
Based on what I get back, I ask questions or refine the scope. Frequently there are features I can remove or requirements I've forgotten.
Eventually we end up with a set of components to build: a search bar in the UI, a results page, and an API that takes the query and searches the database (I don't actually know what goes here; I haven't done this yet).
Building begins. I ask ChatGPT to get very specific: write the API that is going to query my database of blog posts for me. Often in this step I give ChatGPT context from other parts of my app (e.g., here is my database schema). There's a sketch after this list of the kind of code this step produces.
I then take the code as ChatGPT has written it and begin to test it. If ChatGPT asks me to install a library, I check that it exists and seems legit first, but beyond that, I use the code ChatGPT has given me.
It never works the first time! In the process, I do a lot of debugging with ChatGPT, copying and pasting in error messages and seeing what I get back.
Eventually it works; in the process, 95% or more of the keystrokes in the code have come from ChatGPT.
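For a sense of what that end state looks like, here's a minimal sketch of the kind of search API this workflow might produce. To be clear, this is illustrative rather than my actual code: the Flask framework, the SQLite database, and the posts table are all assumptions.

```python
# A minimal sketch of the kind of search API that comes out of this
# workflow. Everything here is illustrative: Flask, SQLite, and the
# `posts` table schema are assumptions, not details of my actual blog.
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)
DB_PATH = "blog.db"  # hypothetical database file


@app.route("/api/search")
def search_posts():
    # Pull the query string off the request, e.g. /api/search?q=phind
    query = request.args.get("q", "").strip()
    if not query:
        return jsonify({"results": []})

    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row  # rows behave like dicts
    try:
        # Naive substring match; a real version might use full-text search.
        rows = conn.execute(
            "SELECT title, url FROM posts "
            "WHERE title LIKE ? OR body LIKE ? LIMIT 10",
            (f"%{query}%", f"%{query}%"),
        ).fetchall()
    finally:
        conn.close()

    return jsonify({"results": [dict(row) for row in rows]})
```

In practice, a request like /api/search?q=phind would come back as JSON with matching titles and URLs, which the results page then renders.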
This morning, I tried doing this with Phind.
In terms of overall quality, I found Phind to be in line with ChatGPT 4. I didn't test them side by side, but in the past I've been able to feel pretty quickly when I'm accidentally working with ChatGPT 3.5. I didn't feel that difference working with Phind; if anything, it seemed to have slightly higher-quality results for coding tasks.
Here are some of the things I liked:
They have an extra place where you can add context. I found this super useful, especially when bringing Phind into a project that I've already been coding on for some time.
The model is also willing to ask you for extra context where it might be helpful, in a way that seems to improve the overall results.
Their model responses are more skimmable than the ChatGPT equivalent. Little things like giving some styling to filenames help me move quickly through what I'm getting back.
When their model searches for contextual data, it's much less intrusive than when ChatGPT does the same thing. Once ChatGPT starts searching the internet, it seems to focus only on what it finds there, and the extra time and context often don't improve what I get back; in fact, I find myself turning the feature off. With Phind, I didn't even notice at first that it was seeking out extra information from places like Stack Overflow; it just incorporated it into the results.
So why do I say they haven't quite nailed it? It's not clear to me as a user how I'm supposed to use these various fields. I can tell that they're useful and I'm fine with guessing as I go, but I wish they gave me more of a model for how to help them. It would also be helpful to be able to pin or save context (e.g., my directory structure).
So my final verdict, at least after one morning: Phind is a credible peer to ChatGPT 4 for coding tasks.