LLMs Are Not Reasoning: They’re Just Really Good at Planning

Large language models (LLMs) like OpenAI’s o3, Google’s Gemini 2.0, and DeepSeek’s R1 have shown remarkable progress in tackling complex problems, generating human-like text, and even writing code with precision. These advanced LLMs are often referred to as “reasoning models” for their remarkable ability to analyze and solve complex problems. But do these models actually reason, or are they just exceptionally good at planning? This distinction is subtle but profound, and it has major implications for how we understand the capabilities and limitations of LLMs.

To understand this distinction, let’s compare two scenarios:

  • Reasoning: A detective investigating a crime must piece together conflicting evidence, deduce which leads are false, and arrive at a conclusion based on limited proof. This process involves inference, contradiction resolution, and abstract thinking.
  • Planning: A chess player calculating the best sequence of moves to checkmate their opponent.

While both processes involve multiple steps, the detective engages in deep reasoning to make inferences, evaluate contradictions, and apply general principles to a specific case. The chess player, by contrast, is primarily engaging in planning, selecting an optimal sequence of moves to win the game. LLMs, as we will see, function much more like the chess player than the detective.

Understanding the Distinction: Reasoning vs. Planning

To understand why LLMs are good at planning rather than reasoning, it is important to first understand the difference between the two terms. Reasoning is the process of deriving new conclusions from given premises using logic and inference. It involves identifying and resolving inconsistencies, generating novel insights rather than just retrieving information, making decisions in ambiguous situations, and engaging in causal understanding and counterfactual thinking, such as “What if?” scenarios.

Planning, on the other hand, focuses on structuring a sequence of actions to achieve a specific goal. It relies on breaking complex tasks into smaller steps, following known problem-solving strategies, adapting previously learned patterns to similar problems, and executing structured sequences rather than deriving new insights. While both reasoning and planning involve step-by-step processing, reasoning requires deeper abstraction and inference, whereas planning follows established procedures without producing fundamentally new knowledge.

How LLMs Approach “Reasoning”

Modern LLMs, such as OpenAI’s o3 and DeepSeek-R1, are equipped with a technique known as Chain-of-Thought (CoT) reasoning to improve their problem-solving abilities. This method encourages models to break problems down into intermediate steps, mimicking the way humans think through a problem logically. To see how it works, consider a simple arithmetic problem:

If a store sells apples for $2 each but offers a discount of $1 per apple if you buy more than 5 apples, how much would 7 apples cost?

A typical LLM using CoT prompting might solve it like this:

  1. Determine the regular price: 7 * $2 = $14.
  2. Identify that the discount applies (since 7 > 5).
  3. Compute the discount: 7 * $1 = $7.
  4. Subtract the discount from the total: $14 - $7 = $7.

By explicitly laying out a sequence of steps, the model minimizes the chance of errors that arise from trying to predict an answer in a single pass. While this step-by-step breakdown makes LLMs look like they are reasoning, it is essentially a form of structured problem-solving, much like following a recipe. A genuine reasoning process, by contrast, might recognize a general rule: if the discount applies beyond 5 apples, then every apple effectively costs $1. A human can infer such a rule immediately, but an LLM cannot, as it simply follows a structured sequence of calculations.
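To make this contrast concrete, here is a minimal Python sketch; the function names, prices, and discount threshold are purely illustrative and are not something the models themselves execute. One function mirrors the procedural chain-of-thought trace above, while the other applies the general rule a human might infer.

  def price_step_by_step(quantity, unit_price=2.0, discount_per_apple=1.0, threshold=5):
      # Mirror the chain-of-thought trace: total, check the discount, subtract.
      total = quantity * unit_price                  # Step 1: regular price
      if quantity > threshold:                       # Step 2: does the discount apply?
          total -= quantity * discount_per_apple     # Steps 3-4: compute and subtract the discount
      return total

  def price_by_rule(quantity):
      # The inferred general rule: past 5 apples, every apple effectively costs $1.
      return quantity * 1.0 if quantity > 5 else quantity * 2.0

  print(price_step_by_step(7))   # 7.0
  print(price_by_rule(7))        # 7.0, the same answer, reached by abstraction rather than by procedure

Both paths arrive at $7, but only the second reflects the kind of abstraction a reasoner would make; the first simply walks through the prescribed steps.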

Why Chain-of-Thought Is Planning, Not Reasoning

While Chain-of-Thought (CoT) has improved LLMs’ performance on logic-oriented tasks like math word problems and coding challenges, it does not involve genuine logical reasoning. This is because CoT follows procedural knowledge, relying on structured steps rather than generating novel insights. It lacks a true understanding of causality and abstract relationships, meaning the model does not engage in counterfactual thinking or consider hypothetical situations that require intuition beyond the data it has seen. Moreover, CoT cannot fundamentally change its approach beyond the patterns it has been trained on, limiting its ability to reason creatively or adapt to unfamiliar scenarios.

What Would It Take for LLMs to Become True Reasoning Machines?

So, what do LLMs need to truly reason like humans? Here are some key areas where they require improvement, along with potential approaches to achieve it:

  1. Symbolic Understanding: Humans reason by manipulating abstract symbols and relationships. LLMs, however, lack a genuine symbolic reasoning mechanism. Integrating symbolic AI or hybrid models that combine neural networks with formal logic systems could enhance their ability to engage in true reasoning.
  2. Causal Inference: True reasoning requires understanding cause and effect, not just statistical correlations. A model that reasons must infer underlying principles from data rather than merely predicting the next token. Research into causal AI, which explicitly models cause-and-effect relationships, could help LLMs transition from planning to reasoning.
  3. Self-Reflection and Metacognition: Humans constantly evaluate their own thought processes by asking “Does this conclusion make sense?” LLMs, by contrast, have no mechanism for self-reflection. Building models that can critically evaluate their own outputs would be a step toward true reasoning.
  4. Common Sense and Intuition: Even though LLMs have access to vast amounts of knowledge, they often struggle with basic commonsense reasoning. This happens because they lack real-world experiences to shape their intuition, and they cannot easily recognize absurdities that humans would pick up on immediately. They also lack a way to bring real-world dynamics into their decision-making. One way to improve this could be to build models with a commonsense engine, which might involve integrating real-world sensory input or using knowledge graphs to help the model understand the world the way humans do.
  5. Counterfactual Thinking: Human reasoning often involves asking, “What if things were different?” LLMs struggle with these kinds of “what if” scenarios because they are limited by the data they have been trained on. For models to think more like humans in these situations, they would need to simulate hypothetical scenarios and understand how changes in variables affect outcomes. They would also need a way to test different possibilities and arrive at new insights, rather than just predicting based on what they have already seen. Without these abilities, LLMs cannot truly imagine alternative futures; they can only work with what they have learned. (A toy illustration of this kind of counterfactual query follows this list.)
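To illustrate what a counterfactual query actually demands, here is a toy Python sketch; the rain scenario and variable names are invented for this example and say nothing about how LLMs are implemented. Answering “What if it had not rained?” means intervening on the cause and re-running the mechanism, rather than recalling similar sentences from training data.

  # A toy world model with an explicit causal chain: rain -> wet ground -> slippery road.
  def simulate(rain):
      wet_ground = rain          # rain causes the ground to be wet
      slippery = wet_ground      # a wet ground makes the road slippery
      return {"rain": rain, "wet_ground": wet_ground, "slippery": slippery}

  observed = simulate(rain=True)          # the observed world: it rained, so the road is slippery
  counterfactual = simulate(rain=False)   # the counterfactual: intervene on the cause and re-run

  print(observed)        # {'rain': True, 'wet_ground': True, 'slippery': True}
  print(counterfactual)  # {'rain': False, 'wet_ground': False, 'slippery': False}

An explicit causal model makes such interventions trivial; a system that only predicts the next token from past data has no mechanism of this kind to consult.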

Conclusion

While LLMs may appear to reason, they are actually relying on planning techniques to solve complex problems. Whether solving a math problem or engaging in logical deduction, they are mainly organizing known patterns in a structured way rather than deeply understanding the principles behind them. This distinction matters in AI research because if we mistake sophisticated planning for genuine reasoning, we risk overestimating AI’s true capabilities.

The road to truly reasoning AI will require fundamental advances beyond token prediction and probabilistic planning. It will demand breakthroughs in symbolic logic, causal understanding, and metacognition. Until then, LLMs will remain powerful tools for structured problem-solving, but they will not truly think the way humans do.