By Julian Ryall
Japan’s police are turning to artificial intelligence (AI) to identify individuals who appear likely to commit “terrorist attacks” based on their social media posts, but observers say the move could backfire by sweeping up innocent citizens engaging in normal political discussion.
The National Police Agency is seeking 49.5 million yen (US$338,000) in next year’s budget for a pilot project that would use AI to analyse online activity and flag individuals deemed potential threats.
Officials say the move is necessary to counter a sharp rise in threatening posts on social media – particularly those targeting politicians. Ahead of July’s general election, police identified 889 online posts considered potential signs of an impending attack.
One message on the Instagram feed of Prime Minister Shigeru Ishiba read, “It would not be surprising if someone tried to kill you, so you should wear a helmet and a bulletproof vest.”
Shortly before former prime minister Fumio Kishida was due to give a campaign speech outside a station in Chiba prefecture, a message appeared on his X page stating, “I’ll kill you if you come.”
In that incident, police were able to identify the poster and issued a warning. The person who wrote the message told police that he had posted it when he was drunk, the Asahi Shimbun newspaper reported.
While officials frame the AI initiative as a counterterrorism tool, critics say the threat is being overstated – and that the system’s broader impact on civil liberties remains unclear.
“The police are saying this system is needed to stop terrorist attacks, but I have to say that there are very few terrorists in Japan, although there are underworld crime groups and some radical student groups still,” said Shinichi Ishizuka, founder of the Tokyo-based Criminal Justice Future think tank.
“My sense is that this could be a very powerful tool and that it will generate huge amounts of information because it will flag keywords on people’s social media or stored in their computers,” he told This Week in Asia.
“The problem is that it is going to flag people who are not planning to carry out some sort of attack, but are just having normal political conversations that sometimes use words that the AI is searching for.”
Ishizuka also questions the effectiveness of a system meant to identify “lone wolf” assailants, who by their very nature do not share their plans with other individuals.
“It can be called ‘disorganised crime’ because it will be very difficult for anyone to make connections when these people are not communicating about their plans,” he said.
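The false-positive problem Ishizuka describes follows from simple base-rate arithmetic: when genuine attackers are extremely rare, even an accurate keyword classifier will flag mostly innocent people. A minimal sketch, using purely illustrative figures (not police data) and Bayes’ rule:

```python
# Illustrative base-rate arithmetic for keyword flagging.
# All numbers below are hypothetical assumptions, not real statistics.
base_rate = 1 / 1_000_000      # assumed share of monitored users actually planning an attack
sensitivity = 0.99             # assumed P(flagged | planner)
false_positive_rate = 0.01     # assumed P(flagged | innocent user)

# Bayes' rule: probability that a flagged user is actually a planner
p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_planner_given_flag = sensitivity * base_rate / p_flag

print(f"{p_planner_given_flag:.4%}")  # well under 0.01% -- nearly every flag is an innocent user
```

Under these assumed numbers, fewer than one in ten thousand flagged accounts would belong to an actual attacker, which is the “huge amounts of information” problem Ishizuka warns about.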
The project is linked to a section within the Tokyo Metropolitan Police Department that was set up in April to identify potential “lone wolf” assailants and to thwart their plans.
The unit was created after former prime minister Shinzo Abe was shot dead in July 2022 and Kishida was the target of a pipe bomb attack in April 2023. In both cases, the assailants were disgruntled young men acting out of a personal grievance.
Tetsuya Yamagami, who has been declared mentally competent and is due to go on trial on October 28, has told investigators that he was angry at Abe for his links to the controversial Unification Church, to which his mother had donated huge sums of money that left the family destitute.
In February, 25-year-old Ryuji Kimura was sentenced to 10 years in prison for attempting to kill Kishida as he spoke at a campaign event in Wakayama prefecture. Kimura said he acted out of frustration at Japan’s election laws, which require candidates to be at least 25 years old to stand for the lower house of the Diet and at least 30 for the upper house.
Comments on social media indicate that there is support for the plan, with one message linked to an Asahi Shimbun report stating, “It seems it would be extremely difficult to operate an AI-based system that would determine whether to arrest someone the moment they commit a crime or arrest them for conspiracy, but I think it’s worth praising for trying something new.”
Another added, “For those who have nothing to hide, I don’t think they’ll be particularly concerned about having their information analysed and I hope things go smoothly.”
A third said technology would be better at identifying threats, as it would be able to scan vast amounts of data far more rapidly and efficiently than teams of human investigators.
Others, however, expressed concern.
“If this system proves useful, it could eventually be legalised, leading to a dystopia in which the government collects all digital data, including emails and phone calls, without any barriers,” one poster wrote. “I support crime prediction, but I hope they don’t cross the line when it comes to handling private data.”
Stephen Nagy, a professor of international relations at Tokyo’s International Christian University, also questions police linking the AI-based system to the threat of terrorism in Japan.
“The police seem to have put a lot of effort into the anti-terrorism campaign recently – I have seen posters at the airport and train stations – but I do not have a sense that there are any domestic terrorist groups now,” he said.
Police, however, do not seem to be using AI to combat other crimes such as the illegal activities of “yakuza” groups, which, according to Nagy, are arguably more pressing and pose a greater danger to ordinary Japanese.
Anti-crime schemes that utilise AI have been introduced in a number of other countries.
A machine learning model developed at the University of Chicago has been reported to be 90 per cent accurate in predicting crime in urban areas of the US, while the Dejaview system in South Korea analyses closed-circuit television footage in real time and factors in crime statistics to detect signs of offences being committed.