Copyright Stabroek News

There is no getting around the fact that Artificial Intelligence (AI) is here to stay. Like many other advances in human civilization, it brings with it both great potential and profound dangers. The Guyanese public, for instance, was recently shaken by reports of two secondary school students using generative AI to create and circulate pornographic material involving one of their teachers. On a more global but equally disturbing scale, one might recall the case of an Iranian nuclear scientist whose assassination in 2020 was reportedly carried out by an AI-assisted system. These are not isolated incidents—they signal a broader dilemma: the growing integration of AI into human society without adequate ethical, legal, or civic safeguards.

In 2025, UNESCO responded to this emerging crisis by publishing a report, ‘Artificial Intelligence and Democracy’, which directly addresses the encroachment of AI on public discourse and democratic life. The report specifically highlights how AI systems, particularly when left unregulated, threaten the foundational principles of democracy. From the spread of disinformation and hate speech—issues already plaguing Guyana during its recent election cycles and in its online spaces—to the exploitation of big data and the undermining of justice, AI introduces new risks to societal stability. The UNESCO report is timely, and Guyana would do well to reflect on its findings and seriously consider its policy recommendations. Democracy, after all, is intended to be a model of coexistence—a compact shaped by history, designed to balance individual rights with collective responsibilities. In its ideal form, democracy ensures that citizens are well-informed, actively engaged, and empowered to shape the institutions and decisions that affect their lives.
Current digital ecosystems, however, undermine these democratic aspirations with their capacity to manipulate attention, curate reality, and predict behaviour, subtly but powerfully reshaping how citizens engage with politics, with each other, and with truth. Once hailed as tools of democratization, digital platforms today are fostering division, confusion, and disengagement. The viral spread of fake news, the normalization of hate speech, and the rampant proliferation of conspiracy theories are direct consequences of a digital environment designed to maximize engagement. It is now commonplace to encounter algorithms that incentivize outrage and polarization, rewarding content that provokes, shocks, or confirms biases while filtering out nuanced debate and alternative viewpoints. The result is the creation of ideological echo chambers that isolate users from dissenting perspectives, undermining pluralism and corroding the very deliberative capacity on which democracy depends.

Even more troubling is the fact that civic engagement itself has been “platformized.” Increasingly, Guyanese are experiencing political discourse and mobilization through private digital platforms, whose algorithmic logic and corporate interests operate far outside the scope of democratic oversight. AI, for example, offers governments and political actors new tools—from sentiment analysis to micro-targeted ads—to engage with citizens, but it also provides powerful means of manipulation: precision propaganda, behavioural nudging, and psychographic profiling, which can be used to subtly—or blatantly—sway public opinion, suppress voter turnout, or delegitimize opposition.

This digital asymmetry is compounded by structural inequalities. Not everyone in Guyana—or globally—has equal access to the internet, digital tools, or algorithmic visibility. Digital literacy varies significantly, and marginalized communities are often the most vulnerable to online abuse or misrepresentation.
Worse, biased data can encode and amplify existing social injustices. AI systems trained on flawed or skewed datasets can produce and sustain discriminatory outcomes, thereby reinforcing rather than correcting historical inequities. One of the most dangerous characteristics of AI is its opacity. Algorithms are increasingly being used to allocate public resources, inform policy, and even influence judicial decisions, yet their internal logic often remains inaccessible or incomprehensible to those affected. This undermines core democratic values like transparency, accountability, and due process. Citizens cannot contest decisions they don’t understand—or don’t even know are being made.

To address these interconnected threats, the UNESCO report outlines a series of key policy responses. First, citizens must be equipped to critically engage with AI and digital media. This means integrating algorithmic literacy, data ethics, and critical thinking into civic education from the school level up. Public discourse around AI must be demystified, grounded in balanced information rather than hype or panic. Second, AI governance should be guided by democratic values and include robust oversight mechanisms. The Guyana government must act proactively—not reactively—with clear frameworks that enforce transparency, human rights protections, and algorithmic accountability. Systems used in public decision-making should be explainable and contestable. Third, data—the fuel that powers AI—should not be treated solely as private property. It must be governed as a shared resource. This requires policies that protect privacy, ensure equitable access, and prevent monopolistic or exploitative data practices. Fourth, AI systems must be intentionally designed to reflect and respect social diversity. This includes using representative datasets, building diverse development teams, and actively mitigating algorithmic bias.
Gender, race, culture, and local contexts should be central to design—not afterthoughts. Fifth, because AI technologies transcend borders, global frameworks are necessary. UNESCO’s Recommendation on the Ethics of AI provides a valuable foundation, proposing not just ethical principles but also practical tools like ethics officers, redress mechanisms, and public impact assessments. Sixth and finally, the governance of AI must be participatory. Government, civil society, academia, and private actors should co-develop and co-steward AI systems. Most importantly, citizens must be given real avenues to shape the technologies that shape their lives. Artificial intelligence, after all, like all other technologies, is neither inherently good nor bad—it is shaped by how it is used.