
AI that can think: The rise of reasoning models that can mimic human logic

By Sanjana B


As AI models evolve beyond mere fact recall, a new generation of reasoning models, such as OpenAI’s o1 and o3-mini, and DeepSeek-R1, is redefining how machines think. Unlike traditional knowledge models that store and retrieve information, reasoning systems analyze, infer, and connect data points logically, bringing machines a step closer to human-like problem-solving.

An AI knowledge model is primarily designed to store, organize, and retrieve vast amounts of factual or learned information, much like a sophisticated database that can surface relevant details when prompted.

Ganesh Gopalan, Co-Founder & CEO, Gnani.ai, said that a reasoning model, in contrast, goes beyond mere retrieval: it interprets, analyzes, and draws logical conclusions from available data to solve problems, make predictions, or generate new insights.

“While knowledge models answer ‘what’ questions by recalling known facts, reasoning models tackle the ‘why’ and ‘how’ by applying inference, context understanding, and cause-and-effect relationships. Reasoning in AI mimics human cognitive processes, enabling the system to connect dots and derive meaning rather than just recalling information,” he explained.

Reasoning models focus on understanding relationships, context, and logic rather than merely storing and retrieving facts. While knowledge models are typically trained on large, structured datasets like encyclopedic text, documents, or databases to capture broad factual knowledge, reasoning models are trained on datasets designed for problem-solving, like mathematical reasoning, logical deduction, causal inference, and multi-step decision-making tasks.

For example, Fractal’s Fathom-R1-14B, a 14-billion-parameter reasoning language model, was tested on the notoriously difficult mathematics questions of the IIT-JEE and solved every problem, achieving a perfect score, according to Suraj Amonkar, Chief AI Research and Platforms Officer at Fractal.

Since they use techniques like chain-of-thought prompting or fine-tuning on reasoning benchmarks to simulate human-like thinking, reasoning models are generally more expensive to build and train, requiring higher computational power, specialized datasets, and longer training cycles to achieve accurate and interpretable results.
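To make the technique concrete, here is a minimal sketch of what chain-of-thought prompting looks like in practice. The `complete` function is a hypothetical stand-in for whatever LLM endpoint one uses; only the difference in the shape of the two prompts matters.

```python
# Minimal chain-of-thought prompting sketch. `complete` is a
# hypothetical stub standing in for a real model call (a hosted or
# open-weights endpoint); substitute your provider's client here.

def complete(prompt: str) -> str:
    return "(model output)"  # placeholder response

# A direct prompt invites a single-shot, pattern-matched answer.
DIRECT = "A retailer offers 17% off a Rs 3,400 item. What is the sale price?"

# The chain-of-thought variant asks the model to externalize its
# intermediate steps before committing to a final answer.
COT = (
    "A retailer offers 17% off a Rs 3,400 item. What is the sale price?\n"
    "Think step by step: compute the discount first, then subtract it, "
    "and only then state the final answer."
)

print(complete(DIRECT))  # fluent, recall-style answer
print(complete(COT))     # multi-step reasoning trace, then the answer
```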

“Think of the primary difference as choosing between an expert with instant recall versus a strategist who methodically solves a problem. Standard LLMs are masters of instant recall and use rapid statistical pattern matching for tasks where speed and fluency are key, like generating marketing copy or summarizing text. The new frontier is ‘reasoning models,’ which are the strategists. They’re specifically trained to engage in an internal, multi-step ‘thinking’ process before answering,” Bharani Subramaniam, CTO, India and the Middle East, Thoughtworks, said.

Rise of GenAI

While these models can be seen as complementary, the relationship between them has evolved with the rise of GenAI and GraphRAG architectures. Siddhant Agarwal, Developer Relations Lead for APAC at Neo4j, stated that Knowledge Graphs have been around for quite some time and have traditionally been used in environments that rely heavily on structured data.

However, with the advent of GenAI, they are taking on a larger role. These graphs now serve as the grounding layer that connects structured and unstructured information. The reasoning model builds on top, enabling systems to interpret context, derive insights, and connect facts across diverse data types. Reasoning models, therefore, are a natural evolution of how one uses knowledge rather than a sequential step: they extend the value of structured representation into more dynamic, context-driven applications.

For example, in healthcare, reasoning over patient data can help suggest likely diagnoses by connecting symptoms, history, and treatments. In finance, it can detect fraud by tracing relationships across transactions. In enterprise settings, it can connect project data from multiple systems to uncover dependencies or resource gaps.

“Say, in a business setting, your stakeholders want to understand how trends across different projects relate rather than just finding where a project is mentioned. A reasoning-enabled system can connect those dots to trace dependencies, summarize themes, and surface strategic insights. That’s the leap from retrieval to contextual understanding, an area where reasoning models truly shine,” Agarwal noted.
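Since Neo4j is the graph database Agarwal works with, a sketch along those lines might look like the following. The project graph, node labels, and credentials are illustrative assumptions rather than a real schema; the point is that the graph supplies verified, structured facts for a reasoning model to build on.

```python
# A GraphRAG-style sketch over a hypothetical project graph in Neo4j.
# Labels, relationships, and credentials below are illustrative only.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"    # assumed local instance
AUTH = ("neo4j", "password")     # placeholder credentials

# Cypher query tracing up to three levels of project dependencies --
# the structured "grounding layer" the reasoning model builds on.
QUERY = """
MATCH (p:Project {name: $name})-[:DEPENDS_ON*1..3]->(d:Project)
RETURN DISTINCT d.name AS dependency
"""

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    records, _, _ = driver.execute_query(QUERY, name="Apollo")
    facts = [r["dependency"] for r in records]

# The retrieved facts are injected into the model's prompt, so its
# multi-step reasoning is anchored to the graph rather than to
# whatever the model happens to remember.
prompt = (
    f"Verified dependencies for project Apollo: {facts}. "
    "Trace the delivery risks these dependencies create."
)
```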

Reasoning models and hallucinations

At the same time, reasoning models can potentially reduce hallucinations by improving an AI’s ability to verify information through structured logic and multi-step reasoning rather than relying solely on pattern recognition. The experts highlighted that while reasoning alone doesn’t eliminate hallucinations entirely, it can significantly reduce them when paired with grounded knowledge. However, if not properly trained or grounded in reliable data, reasoning models can also produce hallucinations that sound more convincing, since their logical structure can make false information appear plausible.
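One simple way to picture that pairing is to check each claim against a trusted store before letting it through. The store and values below are hypothetical placeholders; a production system would retrieve from vetted enterprise data rather than a hard-coded dictionary.

```python
# Toy sketch of grounding a reasoned claim before accepting it.
# The fact store and its values are hypothetical placeholders.

FACT_STORE = {
    "q3_revenue_growth": "12%",
    "active_markets": "14",
}

def grounded(key: str, model_value: str) -> str:
    evidence = FACT_STORE.get(key)
    if evidence is None:
        # No supporting fact: flag the step rather than letting a
        # fluent but unsupported chain of logic pass as verified.
        return f"UNVERIFIED: {model_value}"
    if evidence != model_value:
        return f"CONTRADICTED (store says {evidence}): {model_value}"
    return f"VERIFIED: {model_value}"

print(grounded("q3_revenue_growth", "12%"))  # VERIFIED
print(grounded("q4_forecast", "9%"))         # UNVERIFIED: not in the store
```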

Future AI systems are likely to merge knowledge and reasoning modules, creating hybrid architectures that combine factual grounding with logical inference. With such integration, AI can access accurate information and reason dynamically, leading to more reliable and context-aware outputs. Maintaining some modular distinction, where the knowledge component ensures accuracy and the reasoning component ensures logic, could help enhance transparency, control, and interpretability in complex AI systems.
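In code, that modular split might look like the sketch below: one component that only retrieves, another that only infers, with an inspectable seam between them. Every name here is an assumption made for illustration.

```python
# Minimal sketch of a hybrid architecture with a modular split:
# a knowledge component for factual grounding and a reasoning
# component for logical inference. All names are illustrative.

class KnowledgeModule:
    """Stores and retrieves facts; does no inference of its own."""
    def __init__(self, facts: dict[str, str]):
        self._facts = facts

    def retrieve(self, key: str) -> str | None:
        return self._facts.get(key)


class ReasoningModule:
    """Draws conclusions only from the premises it is handed."""
    def infer(self, premises: list[str]) -> str:
        if not premises:
            return "insufficient evidence"  # refuse rather than guess
        # Stand-in for a real multi-step inference pass.
        return "conclusion based on: " + "; ".join(premises)


# Wiring: retrieval ensures accuracy, inference supplies the logic,
# and the boundary between the two stays easy to audit.
kb = KnowledgeModule({"q3_revenue": "up 12% YoY", "q3_costs": "flat"})
premises = [p for key in ("q3_revenue", "q3_costs")
            if (p := kb.retrieve(key)) is not None]
print(ReasoningModule().infer(premises))
```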

Published on October 8, 2025