And on the rare occasion that I am asked to complete some bot-deterring task, the experience almost always feels surreal. A colleague shared recent tests where they were presented with images of dogs and ducks wearing hats, from bowlers to French berets. The security questions rudely ignored the animals' hats, instead asking them to select the photos that showed animals with four legs.

Other puzzles are hyper-specific to their audience. For example, the captcha for Sniffies, a gay hookup site, has users slide a jockstrap across their smartphone screen to find the matching pair of underwear.

So, where have all the captchas gone? And why are the few remaining challenges so damn weird? I spoke with cybersecurity experts to better understand the current state of these vanishing challenges and why the future will probably look even more peculiar.

Bot Friction, Human Frustration

“When the captcha was first invented, the idea was that this was literally a task a computer could not do,” says Reid Tatoris, who leads Cloudflare’s application security detection team. The term captcha, short for Completely Automated Public Turing test to tell Computers and Humans Apart, was coined by researchers in 2000 and presented as a way to protect websites from malicious, nonhuman users.

The initial test most users saw online contained funky characters, usually a combo of warped letters and numbers you had to replicate by typing them into a text field. Computers couldn’t see what the characters were; humans could, even if most of us had to squint to get it right. Financial companies like PayPal and email providers like Yahoo used this iteration to ward off automated bots. More websites eventually added audio readouts of the correct answer after pressure from blind and low-vision advocacy groups, whose members were indeed humans browsing the web but could not complete a vision-based challenge.

What if, rather than just a test to keep out bots, the challenge could generate useful data? That was a core idea behind the release of reCaptcha in 2007. With reCaptcha, users identified words that machine learning algorithms of the time could not read, which sped up the process of transferring print media into digital form. The tech was quickly acquired by Google, and reCaptcha was instrumental in the company’s efforts to digitize books.

As machine learning capabilities improved and algorithms learned to read that funky text, online security checkpoints adapted to be more difficult for malicious bots to circumvent. The next iteration of reCaptcha challenges included grids of images where users were asked to select specific options, like the photos containing a motorcyclist. Google used the data collected here to improve its online maps.

As the difficulty of online security challenges ramped up, so did users' frustration at the increasingly complex and esoteric questions they had to answer to prove their humanity. Online users were asked to select all of the “smiling dogs” in image-labeling questions from hCaptcha, a privacy-focused alternative to Google’s service. How baffling!

“Completely Invisible”

Google’s launch of reCaptcha v3 in 2018 was a major shift toward people rarely seeing challenges online at all. “Instead of interrupting a user, our technology analyzes signals and behavior during an interaction to generate a risk score on which actions can be taken by the website owner,” says Tim Knudsen, a director of product management at Google Cloud, in an email to WIRED.
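For site owners, acting on that risk score comes down to a few lines of backend code: the page hands the server a token, and the server trades it for a score between 0.0 (likely a bot) and 1.0 (likely human). Here is a minimal sketch in Python against reCAPTCHA v3's documented siteverify endpoint; the function names, the 0.5 cutoff, and the step-up fallback are illustrative assumptions, since what to do with the score is entirely the site owner's call.

```python
import json
import urllib.parse
import urllib.request

SITEVERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def risk_score(secret_key: str, client_token: str) -> float:
    """Exchange the token minted in the browser for a 0.0-1.0 score."""
    payload = urllib.parse.urlencode(
        {"secret": secret_key, "response": client_token}
    ).encode()
    with urllib.request.urlopen(SITEVERIFY_URL, data=payload) as resp:
        result = json.load(resp)
    # Treat a failed verification as maximally bot-like.
    return result.get("score", 0.0) if result.get("success") else 0.0

def handle_login(secret_key: str, client_token: str) -> str:
    # Illustrative policy: the 0.5 threshold is an assumption,
    # not Google's recommendation.
    if risk_score(secret_key, client_token) >= 0.5:
        return "proceed"   # let the request through silently
    return "step_up"       # e.g., require an email or SMS confirmation
```

The user who scores well never sees any of this happen, which is the whole point.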
This switch, which accurately sniffed out which users were flesh and which were silicon, made this generation of bot-blocking tech “completely invisible” for most web surfers.

A few years later, in 2022, Cloudflare dropped Turnstile, another reCaptcha alternative and another major move away from human-completed tests toward pattern-based usage analysis. Like the standard version of reCaptcha, Turnstile can be added to websites for free. You might not remember the name, but you’ve likely encountered one of these Turnstile challenges before: It’s the random-seeming request to click on a box to prove you’re human.

On the user end, Turnstile sometimes appears as a basic checkbox, but it’s more complicated than that. “Clicking the button doesn't at all mean you pass,” says Tatoris. “That is a way for us to gather more information from the client, from the device, from the software to figure out what's going on.” Only after that data is gathered is a decision made about whether the user is allowed to access the site.

Leading companies have a clear reason for giving their security software away. “Cloudflare gives Turnstile away for free to the whole internet because we want more training data,” says Tatoris. “We see 20 percent of all HTTP requests across the internet. So, getting that massive training data set helps us know what a human looks like on the page versus what a bot does.”
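The server-side handshake looks much like reCaptcha's, with one notable difference: Turnstile's documented siteverify endpoint returns a pass/fail verdict rather than a score. A minimal sketch in Python, with the function name and fallback behavior as illustrative assumptions:

```python
import json
import urllib.parse
import urllib.request

TURNSTILE_VERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"

def turnstile_passed(secret_key: str, client_token: str,
                     remote_ip: str | None = None) -> bool:
    """Ask Cloudflare whether the widget's checks passed for this token."""
    fields = {"secret": secret_key, "response": client_token}
    if remote_ip:
        fields["remoteip"] = remote_ip  # optional extra signal
    payload = urllib.parse.urlencode(fields).encode()
    with urllib.request.urlopen(TURNSTILE_VERIFY_URL, data=payload) as resp:
        result = json.load(resp)
    # A boolean verdict, not a score: the signal analysis Tatoris
    # describes happens on Cloudflare's side before this comes back.
    return bool(result.get("success"))
```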
Google’s Knudsen says he anticipates that visual challenges will stick around but become a less critical, less frequent aspect of website protection. Even though most bot-deterring methods now need little input on the user end, if any at all, the unhinged captcha lives on, even as a rarity.

Another, more recent entry into the captcha game is Arkose Labs, and the security company’s paid MatchKey service isn’t necessarily about blocking bots at all. “We have challenges which are what you would define as a captcha as one of our products,” says Kevin Gosschalk, Arkose’s CEO and founder. “But the intent is to be cost-proofing, not human-proofing.” The goal of his challenges is to make attacking a website so expensive that it’s no longer a profitable endeavor.

The puzzles are tailored to disincentivize attacks within specific contexts. For example, if someone is getting paid to solve security challenges manually, Arkose may detect that and serve them a time-intensive task, occasionally rejecting their answers no matter what.

As part of Arkose’s “cost-proofing” measures, the company also sells a version of MatchKey designed to thwart attacks coming from people using large language models or other generative AI tools. “You defeat an LLM by giving it novel, unusual things that it has no business knowing or have previously been asked,” says Gosschalk. He gives the example of having users answer questions about a strange collage, like a fake photo of a frog in a pond that has the head of a bird and the reflection of a horse. The mishmashed image is not something an AI model has likely seen before.

For the odd cases when you do still encounter an online security challenge in the coming months and years, don’t expect the puzzles ever to return to that initial iteration. Goodbye, distorted jumble of letters and numbers; I didn’t realize I’d miss you until you were already gone. Familiar challenge structures may also eventually go by the wayside.

“While the classic visual puzzle is well-known, we are actively introducing new challenge types—like prompting a user to scan a QR code or perform a specific hand gesture,” says Google’s Knudsen. This allows the company to keep adding friction without confusing the user with an impossible task.

The success of security measures like these is not wholly measured in stopping existing threats to websites. It hinges on how fast companies can detect and prevent ever-shifting waves of nascent attacks. “We know that the new detections we'll have to spin up two years from now are totally different from what we have in place now,” says Cloudflare’s Tatoris.

Whatever comes next for security challenges, with their increasing weirdness and behind-the-scenes signals, I just hope I’m always able to prove my humanness online. I’ve never been that good at taking tests.