2 Comments
Frances

How do you know they didn't hide it because the chain-of-thought reasoning is just another hallucination? The chain-of-thought an LLM produces in human language isn't related to what it is actually doing, because what it is actually doing is picking a statistically likely chain of tokens, not using logic or reasoning. They aren't thinking; they're regurgitating.
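[Editor's note: for readers unfamiliar with the mechanics, here is a minimal sketch of what "picking a statistically likely chain of tokens" means in practice. The function name, toy vocabulary, and scores are illustrative assumptions, not any real model's code: at each step the model assigns a score to every token, converts those scores to probabilities, and samples the next token from that distribution.]

```python
# Minimal sketch of next-token sampling (illustrative only, not a real model).
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Turn raw scores (logits) into probabilities and sample one token id."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # softmax, shifted for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy example: a 5-token vocabulary with made-up scores.
logits = np.array([2.0, 1.0, 0.5, -1.0, -3.0])
next_token = sample_next_token(logits)  # usually picks one of the high-scoring tokens
```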

Eff the Algorithm

It's true the “reasoning” isn’t reasoning in the human sense. But even if the chain-of-thought is just a plausible narrative stitched together from statistical guesses, it’s still a map of how the model got from the question to the answer. It’s useful precisely because it exposes where those guesses go off the rails. When you hide that map, you take away one of the few tools humans have for spotting errors, bias, or deliberate nudges before they’re baked into the final answer.
