Study explains why ChatGPT and other AI models hate to say ‘I don’t know’

Vishwam Sankaran · 2025-11-07

AI models such as ChatGPT "hallucinate", or make up facts, mainly because they are trained to make guesses rather than admit a lack of knowledge, a new study reveals. Hallucination is a major concern with generative AI models since their conversational ability means they can present false information with a fairly high degree of certainty. In spite of rapid advances in AI technology, hallucination continues to plague even the latest models. Industry experts say deeper research and action are needed to combat AI hallucination, particularly as the technology finds increasing use in medical and legal fields.

Although several factors contribute to AI hallucination, such as flawed training data and model complexity, the main reason is that algorithms operate with "wrong incentives", researchers at OpenAI, the maker of ChatGPT, note in a new study. "Most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty," they explain. This is akin to a student taking wild guesses on a multiple-choice test, since leaving a question blank guarantees no points. "In the same way," the researchers note, "when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say 'I don't know.'"

AI models learn through a process of predicting the next word in huge blocks of text. Sometimes there are consistent patterns, but in many cases the training data can be random. Hallucination is particularly prevalent when AI models are asked questions whose answers cannot be determined, for reasons such as a lack of information or ambiguity. For such questions loaded with uncertainty, AI models make strategic guesses. This may improve their accuracy over time as they obtain more data, but it also increases their error and hallucination rates.

"That is one reason why, even as models get more advanced, they can still hallucinate, confidently giving wrong answers instead of acknowledging uncertainty," the researchers say.

There may be a straightforward fix for this problem, though. The researchers say that penalising "confident errors" more heavily than expressions of uncertainty, and giving models partial credit for appropriately admitting uncertainty, can help to some extent. This is like a standardised test that applies negative marking for wrong answers and partial credit for questions left blank, in order to discourage blind guessing.

For generative AI, the researchers say "the widely used, accuracy-based evals need to be updated so that their scoring discourages guessing". "This can remove barriers to the suppression of hallucinations and open the door to future work on nuanced language models."
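To make the incentive concrete, here is a minimal sketch, not code from the study: the function, the probability p and the reward and penalty values are illustrative assumptions, chosen only to show why accuracy-only grading rewards guessing while a penalised scheme does not.

# Illustrative sketch, not from the OpenAI study: expected score of guessing
# versus abstaining under two hypothetical grading schemes.

def expected_score(p_correct, reward_right, penalty_wrong, abstain_score, guess):
    """Expected score on one question, given the chance the guess is right."""
    if guess:
        return p_correct * reward_right + (1 - p_correct) * penalty_wrong
    return abstain_score

p = 0.25  # hypothetical chance the model's best guess is correct

# Accuracy-only grading: wrong answers cost nothing and abstaining earns
# nothing, so guessing is never worse than saying "I don't know".
print(expected_score(p, 1.0, 0.0, 0.0, guess=True))   # 0.25
print(expected_score(p, 1.0, 0.0, 0.0, guess=False))  # 0.0

# Penalised grading: confident errors lose a point and abstaining keeps some
# partial credit, so a low-confidence guess now scores worse than abstaining.
print(expected_score(p, 1.0, -1.0, 0.1, guess=True))   # -0.5
print(expected_score(p, 1.0, -1.0, 0.1, guess=False))  # 0.1

Under the accuracy-only scheme, a model tuned to maximise its score learns to guess; under the penalised scheme, guessing only pays off when the model is reasonably confident, which is the behavioural change the researchers are arguing for.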
