Can LLMs Reason?

I’ve spent some time teaching and even training large language models. The question of whether LLMs can actually reason still comes up often. My answer remains essentially the same: no — but with a grain of salt.

A couple of years ago, my stance was straightforward. LLMs are fundamentally massive probabilistic models, designed to predict the most likely next token. Their outputs are further refined through reinforcement learning, but that doesn’t equate to genuine reasoning.
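
To make that concrete, here is a minimal sketch of what "predicting the most likely next token" looks like in practice. This is an illustration, not any particular model's implementation: `model` stands in for the real network, and the sampling loop simply shows how each chosen token is appended to the context and fed back in.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0):
    """Turn raw scores over the vocabulary into probabilities and sample one token id."""
    logits = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(logits - logits.max())  # softmax, shifted for numerical stability
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

def generate(model, prompt_tokens, max_new_tokens=20):
    """Autoregressive loop: every sampled token becomes part of the next input."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = model(tokens)  # stand-in for the network's forward pass
        tokens.append(sample_next_token(logits))
    return tokens
```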

That said, recent advances like chain-of-thought prompting have shifted the conversation. By encouraging step-by-step problem solving, LLMs now produce more accurate answers. Is this true reasoning? Strictly speaking, no — the model is still operating on probabilities. But chain-of-thought helps steer the model through intermediate steps, reducing error along the way. Each step becomes new input in the autoregressive process, which improves overall accuracy.
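
A rough sketch of the difference, assuming a hypothetical `llm.complete(prompt)` helper rather than any specific vendor API: the chain-of-thought version first asks for intermediate steps, and those steps then sit in the context that conditions the final answer.

```python
def answer_directly(llm, question):
    # One shot: the model must jump straight to the answer.
    return llm.complete(f"Question: {question}\nAnswer:")

def answer_with_chain_of_thought(llm, question):
    # First elicit intermediate steps; they become new input
    # for the autoregressive process that produces the answer.
    reasoning = llm.complete(f"Question: {question}\nLet's think step by step.")
    return llm.complete(
        f"Question: {question}\n{reasoning}\nTherefore, the final answer is:"
    )
```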

So, is the answer still a flat no? Maybe not. The “grain of salt” comes from the fact that humans, too, often reason better when we “think out loud.” In that sense, chain-of-thought bears some resemblance to how people structure their reasoning — and that’s why I keep a small reservation.
