Copyright Kyiv Post

Blockbuster deals, market windfalls, warfare, and controversies involving artificial intelligence (AI) dominate newscasts as we enter a new era of business and technology. What we call “AI” today is generative artificial intelligence: a collection of powerful statistical engines that use machine-learning algorithms to process large amounts of data, identify patterns, and learn to make decisions or perform tasks without explicit programming.

The next technological leap, however – the development of Artificial General Intelligence (AGI) – could upend human existence. AGI remains hypothetical, but when it arrives it will not be about building tools; it will be about building entities with human-level cognitive abilities that can understand, learn, and apply knowledge across a vast range of intellectual tasks. AI performs tasks; AGI will be self-sufficient, potentially self-directed, and, by many estimates, only a few years away. Geoffrey Hinton, the “Godfather of AI,” worries that AGI will advance so quickly that it will slip beyond human control: “The risks of misuse, accidents, and societal harm are profound. We’re not prepared.”

Human interaction with generative AI chatbots – such as Grok or ChatGPT – is already creating concern. In Belgium, a man recently died by suicide after weeks of conversations with an AI chatbot that encouraged his despair and his decision to kill himself. In the United States, a father killed his children, and a 47-year-old tech professional killed himself and his mother, after each had relied on advice and encouragement from AI chatbots. Usage of these bots is already staggering, and their interactions go largely unimpeded: in 2025, ChatGPT alone received more than 2.5 billion daily prompts from users worldwide, Axios reported. And AI’s potential to encourage harm is not confined to mental health risks.
Another problem is that deepfakes – AI-generated photos, audio, and video – are exploding online. A fake image of Pope Francis in a designer coat went viral in 2023. That was amusing. Fake images of world leaders declaring war, or of candidates confessing to crimes they never committed, will not be. The 2024 US election was the first in which AI-generated disinformation became mainstream. By 2030, distinguishing truth from falsehood may be impossible without expensive verification tools, creating new divisions between information “haves” and “have-nots.” Democracies may not survive such an onslaught.

There is also a legal vacuum. AI companies are shielded by the same liability loopholes that absolved Facebook and Twitter of responsibility for reproducing damaging social media posts. Today, no company can be held fully liable for the damage. If such immunity continues and regulation is not put in place, the consequences will be far more serious by 2030, when AI firms cross the threshold into AGI. Their systems will reason, learn, and plan across domains like a human mind, at lightning speed and scale.

AI has already disrupted industries, education, media, and politics, but AGI could transform civilization itself. AGIs will be able to understand, reason, replicate, predict, and learn, with emotional and contextual awareness. That will be the big leap. The danger is that the line between the two may blur quickly, and within decades we may face not just disruptive machines but autonomous intelligences beyond our control.

When does AGI take hold? Experts disagree. Sam Altman (CEO of OpenAI) predicts ChatGPT will be capable of reasoning, learning, and planning across domains like a human mind by the early 2030s. “We know how to build AGI,” he said in early 2024.
Jensen Huang (CEO of Nvidia) believes AI will match human capability within five years if computing power keeps growing exponentially. Geoffrey Hinton, the deep-learning pioneer, says it is hard to predict and may occur sometime between 2028 and 2043.

But the most dangerous consequence of AI – and potentially AGI – is geopolitical. Nations see it not just as an economic tool but as a weapon; cartels and mafias will see it as a profit center and a tool. Two years ago, Eric Schmidt warned that AI could transform warfare by 2027, with autonomous drones, swarms of robotic soldiers, and predictive cyberattacks. But the “Frankenstein War” has already arrived in Ukraine, where the world’s most talented IT sector has reinvented and digitized warfare.

China is also racing ahead, integrating AI into surveillance, military logistics, and censorship. Beijing has declared that it will be the world’s AI leader by 2030, and it has the advantages of a government run by technocrats, a massive population, lax privacy standards, and a willingness to mobilize resources at scale. The United States still leads, but its technology is controlled by private firms whose interests don’t necessarily align with national security or democratic values. America has the expertise and capital, but also a libertarian, deregulatory mindset. President Donald Trump has embraced Silicon Valley and its leaders, steering Washington for the first time toward state capitalism, and has bought stakes in tech monoliths such as Intel and Nvidia. Trump is correct to bring tech experts into government decisions, because these technologies represent the biggest governance challenge in history. US tech giants will reap trillions globally from their AI and AGI innovations – and disrupt employment on a mass scale. Amazon just slashed 30,000 jobs from its payroll, with more cuts to follow.
To their credit, some tech leaders have already argued that governments must devise a Universal Basic Income (UBI) to maintain social stability and economic well-being, and to give white-collar workers time to transition into new occupations or regions. Several tech billionaires are personally financing UBI pilot projects to illustrate the need and possible solutions. Goldman Sachs projects that 300 million jobs worldwide could be automated by generative AI; lawyers, accountants, journalists, and computer programmers are all vulnerable. Then there’s AGI.

The human condition will worsen unless governments rein in the technology: regulate it, prohibit monopolization, and outlaw its use for weapons, fraud, and media manipulation. There must be global standards, as is the case with nuclear weapons. The staggering wealth that will be generated cannot be hoarded; it must be captured through taxation to provide support and training for dispossessed workers, as well as research into conquering diseases and solving other planetary challenges. In other words, we must devise ways to control it before it controls us.

“It has become appallingly obvious that our technology has exceeded our humanity,” Albert Einstein is reputed to have said. After the creation of the atomic bomb, he stressed the need for ethical considerations and wisdom to ensure that technological advancements serve to uplift, rather than diminish, humanity. That time has arrived again.

Reprinted from [email protected] – Diane Francis on America and the World. The views expressed in this opinion article are the author’s and not necessarily those of Kyiv Post.