“If, as they say, AI tools are going to get better quickly, then let them do so and trust that smart people will pick them up and use them.” From @anildash
https://www.anildash.com/2025/04/19/ai-first-is-the-new-return-to-office/
“If you think your workers and colleagues are too stupid to recognize good tools that will help them do their jobs better, then... you are a bad leader and should step down. Because you've created a broken culture.”
From me: The more people lecture me about how I have to take LLMs seriously because I’m missing out if I don’t, the less convinced I am that they’re worth taking seriously.
This follow-up post by @anildash, imagining a better world, is also excellent
https://www.anildash.com/2025/05/01/what-would-good-ai-look-like/
Though I think I would prefer to frame this as what “good-enough” LLMs would look like, while acknowledging that these LLMs are clearly good enough for a lot of people and for a lot of their uses. They are not good enough for me, though, and long history suggests that I have reasonably common taste in these things.
Like, a good-enough LLM still needs to have a lying rate that is materially near-zero.
@kevinriggle proposing a new standard evaluation criteria for LLM systems, the Rate Of Flat-out Lying, or ROFL for short
From @flaki: criterion, surely…
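If you wanted to actually score ROFL, the arithmetic is just the fraction of claims judged to be flat-out lies. A minimal sketch, assuming someone has hand-labeled each claim (all names and data here are hypothetical, not any real benchmark):

```python
from dataclasses import dataclass

@dataclass
class LabeledClaim:
    text: str
    is_flat_out_lie: bool  # human verdict, not the model's self-report

def rofl(claims: list[LabeledClaim]) -> float:
    """Rate Of Flat-out Lying: fraction of claims judged flat-out lies."""
    if not claims:
        return 0.0
    return sum(c.is_flat_out_lie for c in claims) / len(claims)

claims = [
    LabeledClaim("The capital of France is Paris.", False),
    LabeledClaim("That function exists in the standard library.", True),
]
print(f"ROFL: {rofl(claims):.2%}")  # a good-enough LLM keeps this materially near zero
```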