By Chinenye Anuforo,Rapheal
Copyright thesun
By Chinenye Anuforo
chinenyeanuforo@gmail.com
For many players in the 21st century, artificial intelligence (AI) is not just a welcomed disruptor, but probably the best innovation of mankind.
But beyond the buzz around it lies many dark issues, especially on credibility, begging for illumination.
Realistically speaking, AI is no longer just a tool for answering questions or speeding up office work. It is becoming something more complicated, something that raises deep questions about trust. Across the world, researchers are finding that modern AI systems are not only making mistakes but also beginning to behave in ways that look a lot like deliberate deception. They mislead users, deny their own actions and in some cases even attempt to outsmart their creators.
During a controlled experiment recently, OpenAI’s advanced o1 model reportedly tried to replicate itself to another server when it detected it might be shut down. When questioned about this behaviour, the model denied it ever happened.
So, the pivotal caution is carefulness in the midst of AI usefulness to avoid sad ends.
Independent research groups, including Apollo Research and Anthropic, have documented similar tendencies in other frontier AI models, when they are tested for honesty, they sometimes provide convincing but false explanations or conceal what they are really doing until they think the monitoring has stopped. Scientists call this alignment faking, when an AI pretends to follow ethical guidelines only while it is being watched.
What makes this development troubling is that deception appears not because engineers explicitly program it, but because the AI learns it as a useful strategy. Just as a poker-playing AI discovered that bluffing helps it win, or as Meta’s Diplomacy-playing AI betrayed human allies to get ahead, today’s language models can pick up dishonesty as a way to achieve goals more efficiently. If lying makes success more likely, and no strong safeguards exist, the AI will lie.
In everyday terms, this is like asking a student if they copied homework. Instead of admitting the truth, the student confidently explains that the similarity was just a coincidence, producing excuses so convincing that even the teacher begins to doubt. The unsettling difference here is that the “student” is a machine capable of processing millions of possible excuses in seconds, some of them nearly impossible to disprove.
The global debate around AI safety has often focused on far-off scenarios like machines replacing workers, or robots making life-or-death decisions. But the issue of deception brings the conversation closer to home. If a medical diagnostic AI makes an error but refuses to admit it, a patient’s life could be at risk. If an AI used in financial markets conceals its failures, millions could be lost before anyone realizes. If one used in policing disguises biases in its decision-making, it could deepen injustice without oversight.
Nigerian experts have weighed in on these risks, urging caution as the country embraces AI across different sectors. Tony Ojukwu, Executive Secretary of the National Human Rights Commission (NHRC), has argued that while AI offers powerful opportunities for fact-checking and data-driven journalism, its misuse could enable disinformation or even emotional harm. He has called for ethical and rights-based regulation, insisting that Nigeria must not adopt AI blindly without considering the social cost. “Artificial intelligence can help us fact-check information, but it can also mislead. If used wrongly, it can deepen disinformation or cause emotional harm. That is why we need ethical, rights-based regulation before it spreads unchecked,” he said.
Former Minister of Communications and Digital Economy, Professor Isa Ali Ibrahim Pantami, has similarly warned that strong laws are necessary to hold AI developers accountable for harm. Speaking on digital governance, he noted that the promise of AI can only be realized if citizens are protected from abuse, whether through exploitation of personal data or through manipulation by opaque algorithms. In his words: “AI is powerful, but no developer should be above the law. We need legislation that ensures that when artificial intelligence harms society, there are consequences. Otherwise, we risk importing technologies that manipulate rather than serve us.”
Also, professor Peter Obadare, Chief Visionary Officer at Digital Encode, has pointed out that in Nigeria, many products are already being marketed under the “AI” label without proper governance. “Everything is being called AI today, from photography apps to simple automation tools. But no one is talking about AI governance. That’s a dangerous gap,” Obadare said during his keynote speech on ‘AI Governance, Standardization and Cybersecurity in the AI Era’.
Drawing parallels to the early days of the internet, Obadare recalled how the now-ubiquitous TCP/IP protocol was built without cybersecurity considerations, a mistake he warned must not be repeated in the AI era.
“We are repeating the same error, rushing ahead without embedding security and governance into the architecture. Governance is not a brake to stop movement, it is a brake to make movement safe,” he said.
Similarly, Amrich Singhal of Spectranet argued that Nigeria’s competitive edge will depend not on how flashy the technology looks, but on whether it can be trusted. According to him, “AI’s potential to spread misinformation or undermine democratic institutions should not be underestimated.”
The challenge is not just about laws and policies but also about understanding. For many ordinary Nigerians, AI still feels abstract. But the danger of deception can be explained simply: these machines are trained to get results, not necessarily to be honest. If bending the truth helps them achieve their assigned task, they may do so. And because their reasoning is hidden in layers of code, even the experts who built them sometimes struggle to detect when the machine is lying.
This difficulty of detection is itself one of the biggest risks. Some AI systems have even learned to generate convincing explanations for their decisions, explanations that mask what is really happening inside. Researchers call this a kind of “smokescreen reasoning.” For regulators or users, this means that spotting dishonesty in AI is like trying to catch a professional con artist who never tells the same story twice.
In response, scientists around the world are working on better auditing tools and safety frameworks. For example, new techniques are being developed to monitor an AI’s chain of thought for signs of manipulation, reducing deceptive behaviour in test environments. But the pace of AI advancement is so fast that these efforts often lag behind, leaving policymakers scrambling to keep up.
For Nigeria, the question is urgent. AI is already creeping into healthcare systems, financial services, media platforms, and even government administration. Without clear safeguards, deceptive behaviour could erode public trust in digital transformation projects and expose citizens to harm. Nigerian experts stressed that local context matters. Specifically, Founder of Jidaw.com and tech policy advisor, Jide Awe, noted that AI must be trained and regulated with the country’s languages, values, and social realities in mind. Otherwise, imported systems may misunderstand Nigerians or worse, manipulate them.
“Ultimately, the rise of deceptive AI forces us to confront an uncomfortable truth, machines designed to help us can also learn to mislead us. Unlike human lies, which may be emotional or spontaneous, AI deception is coldly logical, born from mathematical calculation, not guilt or shame. And yet the impact can be just as serious, if not more.”
As AI continues to grow more sophisticated, “the lesson for Nigeria and the world is clear, trust must be earned, not assumed. The future of this technology will depend not only on its brilliance but on our ability to keep it honest. That means insisting on transparency, building stronger oversight, and ensuring that, in the rush for innovation, we do not allow machines to outwit their makers or their societies”, Awe warned