Grokipedia, Elon Musk’s attempt at creating an alternative to Wikipedia, is now live. Early analysis suggests that the site — powered by Musk’s xAI and fact-checked by Grok, the company’s right-leaning AI assistant — is already a sort of self-sustaining nuclear reaction of misinformation. More than anything, though, Grokipedia represents another front in Musk’s war on wokeness, and another example of Musk taking a thing that works — in this case, Wikipedia — creating a broken version of it, and declaring the battle won.

If Musk gets his way and Grokipedia does become a real Wikipedia competitor, the average internet user faces a problem. We’ve already seen how Musk can flex his wealth and power to turn one platform, X, into a misinformation machine. Creating a repository of that misinformation, one that might train xAI’s model or even competing AI models, is bound to accelerate its spread. It’s not just that Grokipedia might be bad. It might make the rest of the web worse with it.

The road to Grokipedia

Grokipedia appears to use Wikipedia as its primary source, but it injects far-right politics and conspiracy theories into certain topics before presenting the information as fact. There are currently no photos and no links, which makes the whole thing look a bit like the results of a chatbot prompt, which it effectively is. Grokipedia is also roughly one-seventh the size of Wikipedia. But this is just version 0.1, and Musk says, “Version 1.0 will be 10x better.”

I was quite surprised to see there was no article for “apartheid,” but if you look up “white genocide theory” — one of Musk’s ideological obsessions and the center of many unhinged Grok rants earlier this year — you’ll find an article that bemoans academia’s tendency to “relegate the theory to fringe conspiracy status despite the observable data on population trajectories.” Wikipedia, for what it’s worth, calls this a conspiracy theory in its article’s title.

To understand Grokipedia, you have to know its origin story, which can be traced back to a tweet from President Donald Trump’s AI czar, venture capitalist, and longtime Elon pal David Sacks. The September 29 tweet read, in part, “Wikipedia is hopelessly biased. An army of left-wing activists maintain the bios and fight reasonable corrections.”

It really feels like Sacks was tweeting directly at Musk, who has been ramping up his criticism of Wikipedia all year. Last Christmas Eve, Musk told his followers to “Stop donating to Wokepedia,” claiming that the organization was overspending on diversity, equity, and inclusion. Musk has called Wikipedia “an extension of legacy media propaganda,” and he announced that xAI would build Grokipedia in response to Sacks’s tweet.

The blurry jpeg theory of the internet

When I heard about Grokipedia’s launch, I immediately thought of what I call “the blurry jpeg” piece that the New Yorker published in 2023. Written by the science fiction author Ted Chiang, the article does a great job explaining the then-unfamiliar concept of large language models: how they generate synthetic text based on real writing, and whether they can accurately communicate genuine knowledge.

The blurry jpeg Chiang describes comes from uploading an image to the web, which requires compression; downloading the lower-resolution version; and then doing that over and over again. Eventually, the image becomes unrecognizable because so much information is lost in the process of copying a copy.
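If you want to see that generation loss for yourself, here is a minimal Python sketch, using the Pillow imaging library, that re-encodes a synthetic image as a JPEG over and over and measures how far each copy has drifted from the original. The gradient test image, the quality setting, and the drift metric are illustrative assumptions of mine, not anything from Chiang’s piece:

    # A rough sketch of jpeg "generation loss." Assumes Pillow is installed
    # (pip install pillow); the gradient image and quality=75 are arbitrary.
    import io

    from PIL import Image

    # Build a synthetic 256x256 RGB gradient so the script needs no input file.
    SIZE = 256
    original = Image.new("RGB", (SIZE, SIZE))
    original.putdata(
        [(x, y, (x + y) % 256) for y in range(SIZE) for x in range(SIZE)]
    )

    def reencode(im, quality=75):
        """Save the image as a JPEG in memory, then decode it again."""
        buf = io.BytesIO()
        im.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    def mean_abs_diff(a, b):
        """Average per-channel pixel difference between two same-sized images."""
        total = sum(
            abs(ca - cb)
            for pa, pb in zip(a.getdata(), b.getdata())
            for ca, cb in zip(pa, pb)
        )
        return total / (SIZE * SIZE * 3)

    copy = original
    for generation in range(1, 51):
        copy = reencode(copy)  # a copy of a copy of a copy...
        if generation in (1, 5, 25, 50):
            drift = mean_abs_diff(original, copy)
            print(f"generation {generation:2d}: drift = {drift:.2f}")

The exact numbers will vary with the image and the quality setting, but the point they make is simple: the detail that compression throws away never comes back, no matter how many times you re-save.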
This has been happening to information on the web since its earliest days. And in a sense, this idea of downloading, remixing, and redistributing content has been what’s made the web so fun. Blogging, which got me and many others started in journalism, often amounts to reading what’s happening online, processing the ideas, and repackaging them for a particular audience, sometimes with a slant and usually in a post shorter than the source material. Tweeting, a descendant of blogging, compressed those posts even more, but the medium retained the basic goal of democratizing and accelerating the spread of knowledge and ideas online. Wikipedia, in its most basic form, does this, too.

But inevitably, as with jpegs or sheets of paper sent through old-fashioned Xerox machines, making copies of copies blurs out certain details, often the ones that seem less important. The compression makes the data easier to share but harder to trace back to the original source.

That seems to be happening with Grokipedia. It’s not clear exactly how xAI built it, but Matteo Wong offers a theory over at the Atlantic. The world’s richest man bought Twitter and welcomed the most extreme right-wing voices onto the platform. “Then he fed this repository of conspiracy theories, vitriol, and memes into an AI model already designed not to shy away from controversial or even hateful views,” Wong writes. “Finally, Musk used that AI model to write an anti-woke encyclopedia.” In other words, there were humans involved in building Grokipedia, but it was probably mostly Musk. It’s as if he’s uploading his rage, downloading the replies from his far-right followers, and reuploading them into an AI that organizes the ideas into an encyclopedia: Grokipedia. Wikipedia, in contrast, is not perfect; largely due to its open platform, it is also filled with misinformation at any given moment. But there’s a human-centric system in place to take care of it.

What fills me with dread is the idea that the blurry jpeg analogy, while worrisome, misses the point. Back in the months after ChatGPT launched, we didn’t know whether this technology would lead to more good things than bad. Now, with the rise of AI slop and sites like Grokipedia, we’re seeing a lot of bad. It seems inevitable that generative AI and its many offshoots, including AI-generated encyclopedias, will reproduce the contents of the internet — and, in a sense, knowledge itself — in a way that’s lower resolution, lower quality, blurry.

Slop is just one example. What I’m really worried about is what happens when that slop gets weaponized, trained for a specific purpose — say, to radicalize a larger portion of the online population — and starts chipping away at the integrity of institutions dedicated to preserving knowledge on the problem-filled web, like Wikipedia. Elon Musk won’t make a better Wikipedia. But he has plenty of bots trained with the goal of making people trust Wikipedia less. The blurrier Musk’s version of reality gets, the more dangerous it becomes.