I think Cory #Doctorow @pluralistic is missing the mark a bit. The fact that people misunderstand what an #LLM is doesn't mean "AI" is overhyped.
Those who curate best-guess timelines for when we'll reach #AGI (that's "human-like" AI) have mostly moved their predictions closer since #ChatGPT and #StableDiffusion. One reason: these models suggest that _human_ behavior might be simpler than we thought, and thus easier to reach.
ChatGPT makes up - and defends - "facts" in about the same way as my 7yo. Stable Diffusion recreates a world from tiny pieces of information in a similar way to how our brains make up (!) what we see.
Fast forward a few years and we'll be able to sit down and prompt a movie for the evening.
(If we allow slavery of human-like intelligences)