Watching people ask ChatGPT questions about technical matters on which I am something of an expert, and then present its hallucinations back to me as fact in real time, is a lot of fun. Does this happen to you?
I lifted the quote below from this bruising and well-deserved critique of GPT-5. The author of that post took it from the original tweet here.

With LLMs it’s always the same problem. They don’t know the answer; they just know how to run an input sequence through a stack of complicated functions that predicts the next word in the output sequence, one word at a time. The sketch below makes the shape of that loop concrete, and the quote after it, from this excellent article, puts the point much better than I can.
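To see why fluency and knowledge come apart, here is a deliberately crude Python sketch of that generation loop. Everything in it is invented for illustration: the toy corpus, the bigram table, and the `generate` helper are stand-ins for a real model's learned weights and decoder, not anyone's actual implementation. A real LLM also conditions on the entire sequence so far, not just the previous word; the shape of the procedure is what matters here. Score candidate next words, emit the highest-scoring one, repeat. At no point does anything check whether the output is true.

```python
from collections import Counter, defaultdict

# Toy "training": count which word follows which in a tiny corpus.
# A real LLM replaces this bigram table with billions of learned
# weights, but the generation procedure below has the same shape.
corpus = ("the model predicts the next word "
          "the next word is chosen by score "
          "the model does not know the answer").split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(prompt: str, n_tokens: int = 8) -> str:
    """Greedy autoregressive decoding: always emit the single
    most likely next word given the last word emitted."""
    tokens = prompt.split()
    for _ in range(n_tokens):
        candidates = next_counts.get(tokens[-1])
        if not candidates:  # dead end: this word never had a successor
            break
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the model"))
# -> "the model predicts the model predicts the model predicts the"
```

The output is grammatical-looking and entirely content-free; on a table this small, greedy decoding falls straight into a loop. Scaling the table up to a transformer buys enormous fluency, but the loop itself never acquires a notion of truth.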
