Technology

Generating Jihad: How ISIS Could Use AI to Plan Its Next Attack

By Tom O’Connor

Copyright Newsweek


With artificial intelligence increasingly becoming a part of the everyday lives of Americans, so too are malicious actors seeking to exploit emerging AI technologies and applications in order to pursue harmful, even deadly agendas.

Among them is the Islamic State (ISIS), the militant group known for its tech-savvy online presence, which has helped it recruit and maintain a global following despite battlefield losses. Such tactics have in the past proven capable of outpacing efforts by governments and companies to counter them, a risk compounded by the novel nature of recent AI breakthroughs.

Now, experts warn that AI in the hands of ISIS marks a new turning point at a time when the group and its acolytes are looking to mount an international comeback bolstered by cutting-edge developments in the digital realm.

From creating fake news anchors to sourcing supplies for new operations—with further threats not far on the horizon—"we’ve moved from the hypothetical into reality on the use of AI by extremist groups," Samuel Hunter, senior scientist and academic research director at the University of Nebraska Omaha’s National Counterterrorism Innovation, Technology, and Education Center of Excellence, told Newsweek.

Generating Jihad

As with the average user, ISIS appears thus far to be using AI largely to enhance the execution of traditional tasks. The most frequent form of this is the use of generative AI, or GenAI, to create and spread content at rapid speed and through more enticing means.

"As of now, the most common use cases by extremists such as the Islamic State have been propaganda development," Hunter said.
"The Islamic State has historically been fairly adept and sophisticated on the digital front, and it is not surprising to see them pioneering work with GenAI."

He calls the group’s movement into video applications of this technology "novel and concerning."

One such example that garnered attention last year was the emergence of AI-generated news anchors delivering ISIS propaganda to online audiences in the wake of the group’s deadly attack at a concert hall outside of Moscow. It was just one of a number of such clips produced as part of an AI-driven initiative reportedly referred to as "News Harvest," designed to narrate the group’s activities around the world.

A key driver of this campaign appears to be supporters of one of ISIS’ most internationally active affiliates, the Afghanistan-based Khorasan province, also known as ISIS-K or ISKP. ISIS-K has claimed responsibility for two of the bloodiest attacks in the group’s recent history, including the attack in Russia and an earlier strike in Iran.

Though some adhering to ISIS’ ultrafundamentalist outlook express skepticism, even rejection, of adopting innovative technology, ISIS-K has also sought to spread awareness of both the risks and opportunities associated with AI use via the branch’s Voice of Khorasan magazine, produced by its Al-Azaim Foundation.

ISIS supporters have even taken to weaponizing recognizable forms of Western media, such as an episode of the popular cartoon Family Guy manipulated to depict the main character reciting an Islamist hymn.

But ISIS’ use of generative AI is not limited to spreading its jihadi message.
Hunter pointed out that the group is finding ways to employ such tools in plotting real-life attacks as well.

"Trends in other domains suggest that the use of GenAI in developing new tools such as novel IEDs or tactics is either in place now or will be shortly," Hunter said.

AI Agents of ISIS

Potentially even more game-changing is the anticipated dawn of agentic AI, which describes the use of AI programs to run complex operations more autonomously and efficiently than previously possible.

"On the propaganda side, scale and speed are increased and, as a by-product, challenges around identifying and mitigating propaganda are made more complex," Hunter said. "What might take several humans to create and distribute can now be done at scale, faster, and with a dizzying ability to shift and target changing audiences."

"Moreover," he added, "one of the main features of agentic AI is its adaptability based on real-time data, historical trends, and unexpected shifts, making agentic tools more resilient than traditional GenAI models."

Adam Hadley, founder and executive director of Tech Against Terrorism, warned such breakthroughs could afford ISIS far greater capabilities to plot and execute attacks.

"I think it’s really important not to be complacent, because the rate of advancement in AI is just mind-boggling, isn’t it?" Hadley told Newsweek. "Day after day, there are new capabilities.
And I think what’s around the corner is potentially quite scary, which is agentic AI, the idea of using an AI as an agent to run other technologies and start doing really complex operations on your behalf."

"So, for example," he said, "in a few months’ time, it might be possible to get an agentic system to, say, scour the internet for all precursor bomb materials and buy it for me and send it to these addresses."

Beyond the generation of images and videos as propaganda, which still form the vast majority of ISIS-associated AI content, agentic AI "will all of a sudden be able to do activity that otherwise will take a terrorist hundreds of hours to do," Hadley said.

"There is a real risk that agentic AI could be used in that way," Hadley said, "and it’s not just that AI could start doing your shopping around the corner, but it could well be a terrorist using it to shop for bomb-making material."

"So, it’s those sorts of things that I think are really important," he added, "and my worry, actually, is a lot of the AI companies are not really thinking about these risks at the moment."

Cat-and-Mouse with the Caliphate

The use of emerging technologies has long been a key method employed by militant groups and other extremist organizations to win over new audiences and exploit gaps in awareness among both private and state institutions.

The rise of Al-Qaeda in the late 1980s and early 1990s roughly coincided with the beginning of the Internet era, and supporters found fertile ground to sow the seeds of jihadi ideology on websites and forums subject to little oversight at the time. The group’s dissemination of physical media, including VHS tapes and CDs, also marked a new epoch for militants, now capable of sharing training videos with supporters and broadcasting messages on major news outlets to international audiences.

ISIS, whose origins can be traced to Al-Qaeda in Iraq, has taken things a step further since emerging on the global stage in 2014 in a series of kidnappings and grisly beheading
videos shared over the web. The group would go on to produce a wealth of regularly published online literature and Hollywood-style videos detailing claimed victories across the Middle East and beyond.

As the U.S. military joined an array of local, regional and international players to dismantle the group in Iraq and Syria, the Pentagon also mounted an accompanying cyber campaign, known as Operation Glowing Symphony, to take on the group’s vast digital footprint.

Yet six years after the group’s core "caliphate" was declared defeated in 2019, ISIS’ media strategy lives on and has even expanded, with material now being produced in more languages and formats than ever before. As the group continues to conduct attacks in Iraq and Syria, the ISIS brand is also on the rise via ISIS-K in Afghanistan and a number of partner groups across several regions of Africa.

International responses have been further hampered by an uptick in global crises, which ISIS has used to drive division and recruit among disaffected populations in nearly every corner of the globe.

But as President Donald Trump once again steps up strikes against ISIS positions abroad, some in Washington are advocating for greater action on the domestic front to counter the group’s advances in the AI domain. Representative August Pfluger of Texas, who serves as chair of the House Committee on Homeland Security’s Subcommittee on Counterterrorism and Intelligence, introduced the Generative AI Terrorism Risk Assessment Act in February to address this risk.

"While artificial intelligence is a transformational technology tool with immense potential for good, it can also be dangerously weaponized in the wrong hands and pose a serious threat to our national security," Pfluger told Newsweek. "Foreign terrorist organizations (FTOs) are actively seeking ways to exploit the application to recruit, radicalize, and inspire attacks on U.S.
soil."

Pfluger, a Republican who previously served on the National Security Council during the first Trump administration, pointed to an example earlier this year in which Al-Qaeda "launched a workshop to enhance skills in using AI and related software."

In response, the congressman convened a hearing to examine how militant groups were using generative AI "to recruit and radicalize lone wolf actors online," finding that "these organizations do in fact produce highly convincing propaganda videos using GenAI, fabricating events and manipulat[ing] the perception of potential recruits."

Based on these findings, Pfluger said his bill, which advanced earlier this month, "is critical to take active steps to ensure GenAI is not weaponized by FTOs like ISIS and al Qaeda."

"Our policy must be proactive to keep up with the emerging threat and ensure the technology is harnessed for good," Pfluger said, "and that the Free World continues to lead when it comes to AI."

The New Frontlines

Federal agencies are also moving to counter the threat posed by AI in the hands of ISIS and other bad actors. These efforts are outlined in guidelines produced by the likes of the FBI, the Department of Homeland Security, the Office of the Director of National Intelligence’s National Counterterrorism Center and the National Institute of Standards and Technology.

The issue has also caught the attention of United Nations experts, who warned in a report issued in July that ISIS, also known as ISIL or Daesh, and its various affiliates have "continued to experiment with artificial intelligence (AI), mostly for radicalization and recruitment, and to amplify or enhance propaganda."

"For example, Al-Shabaab released a series of messages that were translated into various languages using AI tools," the report said. "ISIL (Da’esh) previously released guidance on how to use Generative AI tools, including ChatGPT, whilst avoiding detection.
There was some reporting to suggest that ISIL (Da’esh) was targeting recruitment of cyber experts to bolster its capabilities in this area."

As such, the race goes on, as groups and individuals with malicious intent scramble to stay ahead of obstacles.

"The challenge, of course, is that technology can often evolve faster than guidelines can be created—at least as we currently conceive of guidelines," Hunter said. "Groups such as the Islamic State can take advantage of this gap and exploit it by leaning into novel use cases."

Ghafar Hussain, a fellow at George Washington University’s Program on Extremism, pointed out that ISIS was making strides on four fronts in this field: "extremist chat bots, generative and agentic AI, gaming and predictive analytics."

"Bots can be programmed to reflect extremist views or be hybrid bots where there is human intervention too; this is now relatively unsophisticated in technological terms," Hussain told Newsweek. "Generative and agentic AI are very popular with militant groups like ISIS, since they allow them to edit and manipulate actual footage of attacks or militant engagements in their favor, or to create entirely synthetic propaganda of events that did not happen."

"AI-programmed bots can also be used on gaming platforms to promote extremist messages to recruit people, and generative AI assets can be used in games like Roblox and Minecraft, where users create whole worlds," he added.
"Predictive analytics, which again is now relatively cheap and accessible, can be used to zone in on the target audience and scan social media to seek out those who are more sympathetic."

In each of these cases, he said, "law enforcement and policy makers are always playing catch up and passing laws that have limited impact."

One example he pointed to was the European Union’s Digital Services Act and the United Kingdom’s Online Harms Bill, which Hussain argued "place emphasis on content moderation, when the real issue is algorithms and dark web forums which are unregulated."

Yet even these legislative moves have drawn controversy from critics arguing they impinge on internet freedoms, further complicating the fight for democracies seeking to strike a balance between national security and online liberties.

"So, there are risks which they are not really grappling with, in my view, and often their understanding is so out of date they can’t," Hussain said. "However, without creating a Chinese-style surveillance state, it is difficult to fully regulate every aspect of technology."