
The age of endless AI slop is here

Really, it’s almost unfair to hold a tech company to its mission statement. From Google’s “Don’t Be Evil” to WeWork’s “Elevate the World’s Consciousness,” mission statements are usually written in a company’s adolescence, at that awkward moment when its dreams stretch to the horizon, the venture capitalists are all smiles, and no one has heard of the term “fiduciary responsibility.” It’s like judging someone based on the sentiments expressed in the back of their high school yearbook.
But OpenAI, you are pushing it.
Navigate to the company’s About page, and you’ll still read these words, which first appeared in its 2018 charter, three years after its founding: “Our mission is to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.” It is, to say the least, not something they’ve always lived up to, as some of Future Perfect’s coverage of the company has demonstrated. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
Look, if you’d asked me what my mission statement was when I was 3 years old, it probably would have been, “Become the first NBA player to land on Mars.” We don’t always achieve what we set out to do. Priorities change, you don’t grow to 7-foot-2, it turns out you’re scared of space — you know what I mean.
But with its latest product — the AI-generated video social network Sora 2 — OpenAI may have set the all-time record for greatest distance between mission statement and actual work.
Infinite servings of AI slop
The best way to understand Sora 2 is that it marries perhaps the worst aspect of large language models like ChatGPT — their potent ability to get users hooked on them — with what is indisputably the worst aspect of modern media: the endless scroll of mindless vertical videos, which, among other negative effects, has nuked our attention spans.
It’s like taking heroin and mixing it with…I don’t know, is there a drug that is highly addictive, renders you slack-jawed before a screen, and subtracts a few dozen IQ points? Heroin, with, like, more heroin? I’m not actually sure I have the drug experience to answer this question.
The basic problems posed by uncannily real AI-generated videos are obvious and materialized almost instantly upon Sora 2’s launch earlier this week.
Take copyright infringement. One of the first Sora 2 videos I came across was a perfectly rendered Rick and Morty visiting SpongeBob SquarePants, and yes, my soul died a little writing that sentence. It turns out OpenAI set Sora 2 to allow copyrighted material by default, putting the onus on intellectual property holders to proactively ask OpenAI to, pretty please, take their material out. Which it will — though not before OpenAI reaps the social network buzz of all those Rick and Morty clones, just as it did during the brief craze for Studio Ghiblifying your photos. (I know you all still have them somewhere.)
Then, there are the deepfakes. One of the killer features of Sora 2 is that you can upload your image into the app and then pop it into any AI-generated video you wish, or allow your friends to do so, or — if your personal alignment is chaotic neutral — allow any Sora user to harness it. Well, as the Washington Post reported, it took about five seconds before users began churning out clips of fake police bodycam footage; real people dressed as Nazi generals; highly realistic but fake footage of historical events; and yes, OpenAI CEO Sam Altman shoplifting.
What this means is that, at the very moment when the President of the United States is posting apparently AI-generated deepfake videos of the Democratic House minority leader in a sombrero and mustache, OpenAI has just handed Americans — at least those who have a Sora 2 access code — perfectly realistic fake video with the push of a button. And created a TikTok-like social network on which it could be shared. While it’s good that OpenAI has included rules to ban impersonation, scams and fraud, and guardrails to block nudity and graphic violence, it does feel a bit like saying an automatic weapon with a safety is totally harmless.
Somehow, I doubt that any of that will “benefit all of humanity.” But it will almost certainly benefit OpenAI’s bottom line, at a moment when the company was just valued at $500 billion — beating even SpaceX — and when noises about an AI bubble are becoming impossible to ignore. (Quick, Sora 2, generate me a video of what will be left of the US economy when the only industry driving it goes ka-blooey!)
Who benefits?
So. What does Sam Altman think about all of this? Fortunately, Sam has been keeping a blog since at least 2013, back when he was pondering the possible existence of aliens. Sora 2, he wrote, is a “‘ChatGPT for creativity’ moment,” one that could lead to a “Cambrian explosion,” where the “quality of art and entertainment can drastically increase.”
Which…I guess? Certainly the post-Cambrian age led to some pretty weird creatures, like the Tullimonstrum, or “Tully monster,” a stalk-eyed creature with a grabber hose for a mouth that looks like something you might get if God could score a Sora 2 access code. If the mindless deepfake remix machine that is Sora 2 is what will be considered creativity in the future, just give me the paperclip maximizer AI.
But perhaps the worst part about Sora 2 — and similar AI slop generators from Meta — is that it overshadows the AI work that actually could benefit all of humanity. The same week OpenAI unleashed Sora 2 upon us, a number of exiles from big AI companies announced the launch of Periodic Labs, a startup that aims to use artificial intelligence to accelerate discoveries in physics, chemistry, and other scientific fields. You know, stuff we could actually use.
Maybe it’s too simplistic of me to ask the best and most highly compensated minds of my generation toiling away at AI companies to, like, do this instead of that. After all, OpenAI is a business. (Or a nonprofit? Or a public benefit corporation? Honestly, it’s a little up in the air at the moment.) It follows what the market dictates. Which means the last line of defense is for us, the users of the world, to stand up and say, “No, I will not eat your AI slop.”
Of course, at last check, Sora was No. 3 on the iPhone app chart. We’re all screwed.