In its first three days, users of a new app from OpenAI deployed artificial intelligence to create strikingly realistic videos of ballot fraud, immigration arrests, protests, crimes and attacks on city streets — none of which took place.
The app, called Sora, requires just a text prompt to create almost any footage a user can dream up. Users can also upload images of themselves, allowing their likeness and voice to be incorporated into imaginary scenes. The app can integrate certain fictional characters, company logos and even deceased celebrities.
Sora — as well as Google’s Veo 3 and other tools like it — could become increasingly fertile breeding grounds for disinformation and abuse, experts said. While worries about A.I.’s ability to enable misleading content and outright fabrications have risen steadily in recent years, Sora’s advances underscore just how much easier such content is to produce, and how much more convincing it is.
“It’s worrisome for consumers who every day are being exposed to God knows how many of these pieces of content,” said Hany Farid, a professor of computer science at the University of California, Berkeley, and a co-founder of GetReal Security. “I worry about it for our democracy. I worry for our economy. I worry about it for our institutions.”