title: LLMs do not understand anything
description: >-
  Save this for the next time someone tells you that LLMs 'understand' things.
pubDate: 2024-06-13
content: |
  LLMs do not understand what they are talking about. They just don't. It's not how they're built. They have a statistical model of language, not a semantic one.

  Philosophical puzzles about whether silicon can be conscious _do not arise_, because LLMs are _not even close to the right shape_ for having _anything like_ 'understanding'.

  If you don't believe me, there are plenty of examples out there on the Internet, but this is as good as any. It includes ChatGPT (GPT-4o) explaining in detail why ¬¬A → ¬A (which is a classical contradiction) is trivially true in classical logic.

  It's even better given that I had explicitly asked it to explain why that sentence implies the trivial logic, not why it is trivially true. And even had the explanation not been complete garbage from beginning to end, it would only have shown that the sentence was _true_, not that it was trivial.

  In other words, the output:

  - Attempts to prove a contradiction (unprompted!)
  - Confuses the concepts 'truth' and 'triviality'
  - Is irrelevant to the prompt

  In case you want it handy, I'll put the full conversation down below. The good bit is at the end. The only edits I've made are to replace TeX syntax with Unicode.

  ---
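  An aside before the transcript, for anyone who wants the 'trivial logic' point spelled out. This is a minimal sketch of my own, not part of the conversation, and it assumes the schema reading of ¬¬A → ¬A (one instance for every proposition A): instantiate A with anything provable and you immediately get a contradiction, so a classical logic that adopts the schema proves everything, which is exactly what 'the trivial logic' means.

  ```lean
  -- Sketch (my formalisation, assuming the schema reading, not from the
  -- transcript): if ¬¬A → ¬A held for every proposition A, instantiating
  -- A := True would already prove False, so a classical logic adopting
  -- the schema proves every formula, i.e. it is the trivial logic.
  example (h : ∀ A : Prop, ¬¬A → ¬A) : False :=
    h True (fun hnT => hnT True.intro) True.intro
  ```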