Without the public’s trust, AI is doomed to fail

By Massoud Amin

Artificial intelligence (AI) is already helping to decide who gets a job interview, a loan or parole. No one voted for these systems, yet their choices shape daily life.
Governments are moving fast. The European Union has passed the AI Act. The United States issued an executive order and a federal guidance plan. Britain convened an AI Safety Summit. However, none of these efforts answer the deeper question: do people accept the authority these systems wield? An accurate tool can still fail if the public never grants it legitimacy.
Legitimacy is the quiet foundation of durable institutions. Courts, legislatures and central banks function because many citizens accept how they exercise power, even when outcomes are contested. AI companies hold comparable influence. They moderate speech on platforms used by billions of people. They are building systems that touch medicine, energy, transport and national security. Yet the public has had little say in delegating that authority or setting boundaries for it.
History shows what happens when legitimacy is ignored. Nuclear power was sold in the 1950s as cheap and abundant. Its development was at times shrouded in secrecy. After accidents like Three Mile Island and Chernobyl, public confidence withered. Projects stalled for decades. Genetically modified crops in Europe followed a similar arc of strong science, weak consent and lasting resistance.
The Boeing 737 Max tragedies offer another parallel. Flight-control software introduced without transparency contributed to two crashes and 346 deaths. The aircraft returned to service, but the reputational damage endures. Once trust is lost, rebuilding it costs far more than establishing it properly from the start.
AI shows the same warning signs. In 2018, the Cambridge Analytica scandal revealed how Facebook data had been harvested and weaponised to influence elections. The breach was not just technical; it violated the basic expectation of consent.

In US courts, the COMPAS risk-assessment tool was used to inform sentencing decisions; subsequent reporting revealed racial disparities in its outputs.
In 2020, Britain’s exam regulator used an algorithm to assign A-level grades amid Covid-19 pandemic disruptions. The model favoured historical school performance over individual potential. Students protested, universities baulked and the policy was ultimately reversed. The system’s technical logic was less important than the absence of consent and a fair path to challenge the result.
These cases highlight how guard rails alone do not establish legitimacy. Bias audits and cybersecurity measures such as red-teaming are necessary but by no means sufficient. If people believe a system is opaque and unaccountable, they will resist it and regulators will overcorrect. The question is not whether AI can be made more accurate but whether it can be governed in ways that citizens recognise as fair, transparent and accountable.
That recognition requires concrete commitments. First, when systems affect rights or critical opportunities – such as hiring, credit, health, education and public safety – people must be able to understand how decisions are made. This does not require disclosing source code. It requires clear explanations, documentation of factors and independent testing that is reported in plain language.
Second, responsibility must be tangible. When an AI system causes harm, there must be a named legal entity that is accountable for providing redress and implementing improvements. Today, that line of responsibility is too often blurred by vendor contracts and disclaimers.

Third, there must be due process. If a model denies a benefit or raises a risk score, the person affected needs a path to review, correction and human judgment. Contestability is not a courtesy; it’s what makes authority tolerable in a democracy.
Fourth, transparency must go beyond product launches. Systems need continuous monitoring, incident reporting and the equivalent of an aviation safety board: independent investigators with access and the power to recommend changes that are actually implemented.
Finally, public procurement should set the standard. When governments purchase or develop AI, they can require explainability, audit trails and contractual redress.
This moment is pivotal because AI is shifting from novelty to infrastructure. It is being woven into search, medicine, logistics, finance, education and the operation of electric grids. These are not side projects. They are systems the public must depend on in crisis and calm alike. If legitimacy is neglected, adoption will stall. The result will be wasted innovation and a deeper mistrust that bleeds into other institutions.

We don’t have to stop innovation. But we must innovate with care. Companies that build high-impact systems should publish model cards – documentation that works like a consumer safety label – and risk reports that can be independently tested.
Regulators should set clear thresholds for when stronger obligations apply and enforce them without drama. Universities should teach engineers to design for people’s rights as well as for performance. Nations should work towards interoperable rules, so legitimacy does not fracture at every border.
AI will not rise or fall solely on technical brilliance. It will succeed if people believe it is legitimate. That means visible accountability, sensible explanations and remedies when things go wrong. If we take legitimacy seriously, AI can be both innovative and trusted. If we treat it as an afterthought, we will repeat an old pattern: impressive technology stalled by public rejection.
Legitimacy is not a luxury in governance. It is the most important system of them all.