By Jack Ilmonen
Copyright scmp
In the global race for artificial intelligence (AI), nations rightly chase cutting-edge technologies, big data and data centres heavy with graphics processing units (GPUs). But thought leaders including OpenAI CEO Sam Altman and institutions from the Federation of American Scientists to China’s Ministry of Education are urging investment in educator training and AI literacy for all citizens. They argue for a more human-centred AI strategy.
Having taught AI and data analytics in China, I have seen the payoff: graduates join internet giants, leading electric-vehicle makers and the finance industry.
My case is simple: the country that best educates people to collaborate with AI will lead in productivity, innovation and competitiveness, achieving the highest level of augmented collective intelligence. This reframes the so-called AI war not as a contest of GPUs and algorithms, but as a race to build the most AI-capable human capital. Data and hardware are ammunition; the strategic weapon is AI education.
According to Norwegian Business School professor Vegard Kolbjørnsrud, six principles define how humans and AI can work together in organisations. These principles aren’t just for managers or tech executives; they form a core mindset that should be embedded in any national AI education strategy to improve productivity for professors, teachers and students.
Let’s briefly unpack each principle and how it relates to broader national competitiveness in AI education.
The first is what he calls the addition principle. Organisational intelligence grows when human and digital actors are added effectively. We need to teach citizens to migrate from low-value to higher-level tasks with AI. A nation doesn’t need every citizen to be a machine-learning engineer, but it needs most people to understand how AI augments roles in research and development, healthcare, logistics, manufacturing, finance and creative industries. Thus, governments should democratise AI by investing in platforms that reskill everyone, fast.
The second is the relevance principle. AI is powerful in structured, data-rich tasks, but it falters when navigating ambiguity, ethics or strategy. Humans thrive in these grey areas. An effective workforce knows when to use AI, when to trust it, and when to override, refine or recalibrate it. From city planners and judges to doctors and nurses, professionals must be trained to match AI’s capabilities to appropriate problems and remain accountable for the outcomes.
The third is the substitution principle. Replacing humans with AI systems is beneficial only if the AI is more capable, or if the human time saved is redirected to higher-value work. In national policy terms, this means automation should be paired with workforce upskilling to create new economic value.
AI-driven job displacement is real, but it can also be an opportunity. Countries that retrain displaced workers to take on strategic, creative or supervisory roles will not only reduce unemployment, but also unlock the productivity gains AI promises. Let AI machines and humans each do what they are best at and everybody wins.
The fourth principle is diversity. The most innovative AI outcomes arise when different kinds of minds, human and artificial, work together in diverse teams. Diversity in AI teams should include sectoral knowledge, cultural insights and interdisciplinary thinking. Nations must fund AI education programmes across all sectors, teaching technicians, doctors, nurses, farmers, logistics coordinators and small business owners how AI applies in their world. Diversity solves new problems.
Collaboration is the fifth principle. For AI to succeed at scale, people must not only understand it but also learn to collaborate with it intuitively. That requires the user-centred design of AI systems and, more importantly, AI fluency among users. AI collaboration should become a core competency in national education curricula. Just as basic computer literacy became essential in the 1990s, AI literacy (knowing how to communicate with, prompt, verify and guide AI systems) must now become standard across industries and educational levels.
And then there is the explanation principle. For AI to be used ethically and responsibly, users need to understand how it arrives at conclusions. This is not just a technical issue; it’s educational.
People must be trained to ask questions, seek evidence and evaluate AI recommendations. Incorporating ethics, equity, transparency and critical thinking into AI training will build a culture of accountability and responsible innovation, which are key advantages in the long-term AI race.
Nations must prioritise AI education or risk falling behind more agile competitors. Policy should centre on six moves: one, integrate AI across university, kindergarten-to-12th-grade and vocational tracks; two, promote cross-disciplinary higher education that pairs AI with, for example, energy, law, agriculture and economics; three, treat AI education as strategic infrastructure with stable funding; four, embed responsible AI practices to build trust and accountability; five, link education to practical innovation through real-world projects; and six, align human capital development with national AI goals and metrics.
Winning the AI race is about teaching people to use AI widely and wisely, to innovate, raise productivity and drive growth. That begins in classrooms, training programmes and workplaces, and ends with a population ready to think alongside machines rather than compete with them.
The shift is under way. Google’s AI Works for America initiative is training workers and students in essential skills. China has issued guidelines for AI education in primary and secondary schools and powerful domestic models such as DeepSeek are diffusing quickly. Oil, steam and electricity mattered only when people knew how to use them. Want to win the AI war? Start learning and using AI tools.