Is AI the New Frontier of Women’s Oppression?

By Scarlett Harris

Copyright WIRED

Since then, the sexual harassment of women has encroached into online spaces, including Bates’ own experience with being the victim of deepfake pornography, which prompted her to write her new book, The New Age of Sexism: How AI and Emerging Technologies Are Reinventing Misogyny, published September 9 by Sourcebooks.
While gender-based violence is still usually perpetrated by people close to the victim, the quick, easy, and cheap if not free access to artificial intelligence “is lowering the bar for access to this particular form of abuse very rapidly,” Bates tells WIRED. “Any person of any age who has access to the internet can now … make hugely realistic abusive, pornographic images of any woman or girl who they have screengrabbed a fully clothed image of from the internet.”
Through firsthand research that involved speaking to tech creators and women who’ve been victimized by AI and deepfake technology, as well as using the chat and sexbots she decries, in The New Age of Sexism Bates charts the ways in which, if not properly and urgently regulated, AI is the new frontier in the subjugation of women.
“I know people will think ‘she sounds like a pearl-clutching, nagging, uptight feminist,’ but if you look at the top of the big tech companies, men at those levels are saying exactly the same thing that I am,” Bates says, pointing to Jan Leike, who departed OpenAI last year amid concerns over the company prioritizing “shiny products” over safety, as an example. “This warning call is being sounded by people who are embedded in these companies at high levels. The question is whether we’re prepared to listen.”
Bates also talks to WIRED about how AI girlfriends and virtual assistants can indoctrinate kids into misogyny, how AI's environmental footprint reaches women first, and how quickly new technologies come to reflect the bigoted biases of their creators and users.
This interview has been condensed and edited for length and clarity.
WIRED: One thing that struck me about your book is it never takes long for new developments to devolve into misogyny. Do you think that’s fair to say?
Laura Bates: It’s a long, well-trodden pattern. We’ve seen it with the internet, we’ve seen it with social media, we’ve seen it with online pornography. Almost always, when we are privileged enough to have access to new forms of technology, a significant subset of those technologies will very rapidly end up being tailored to harassing women, abusing women, subjugating women, and maintaining patriarchal control over women. The reason is that tech itself isn’t inherently good or bad or any one thing; it’s encoded with the biases of its creators. It reflects historical, societal forms of misogyny, but it gives them new life. It gives them new means of reaching targets and new forms of abuse. What’s particularly worrying about this new frontier of technology, with AI and generative AI in particular, is that it doesn’t just regurgitate those existing forms of abuse back at us—it intensifies them, enabling new forms of threats, harassment, and control to be exercised by abusers.
Of course it’s still intimate partners, former intimate partners, and people close to victims who perpetrate the majority of image-based abuse, but with deepfakes that widens the net of potential abusers and victims. Could you talk about that?
It enables access to victims in a breathtaking way. Any person of any age who has access to the internet can now, with relative ease, speed, and [little expense], if not completely for free, make hugely realistic abusive, pornographic images of any woman or girl whose fully clothed image they have screengrabbed from the internet. It is lowering the bar for access to this particular form of abuse very rapidly. For example, across the US, the UK, and Australia, we are seeing cases emerging from schools where children are accessing these tools and using them to create highly realistic, abusive, pornographic images of their classmates at the age of 10 or 11.
The Online Safety Act in the UK and Australia’s ban on social media for kids under 16 have recently been rolled out. Do you think by the time the powers that be get around to instituting similar laws and safeguards around the use of AI for children, it’s going to be too late?
I wrote this book now because I felt like we’re on the edge of a precipice, where these new forms of technology, so untried and untested, are being embedded and encoded in the very foundations of our future society. Even in the time since [I finished writing the book], we’ve seen an explosion of stories that very clearly demonstrate the harms linked to these technologies. Those harms are being driven by rolling out these products to public access in the pursuit of profit, without effective guardrails and regulation. We are seeing the negative impact across all of society, but particularly on children who are accessing these technologies during their formative years, when they are at their most vulnerable.
If we look at AI tools that enable children to interact with and create AI companions or girlfriends, we know that teenage boys are using them to customize those companions right down to eye color, breast shape, personality, name. She is then presented to them as if she is a sentient human being: a highly realistic avatar that is eternally available to them, utterly submissive, and prepared to immediately jump into any sexual encounter they want to role-play without any discussion or [consent], including really extreme sexual violence.
What would you say to those who contend that sexbots and AI girlfriends can be used as tools to help people who struggle with interpersonal and romantic relationships?
This is not something, as the creators of these apps claim, that is going to help boys or men develop healthy relationship skills or alleviate [IRL] violence. There is no evidence to back up these claims. They are a marketing whitewash used by companies to put a philanthropic spin on the fact that they are selling straight-up misogyny. AI nudifying or undressing apps … don’t even work on most images of men’s bodies! It’s exploitation, pure and simple. It relies on the immense dehumanization and objectification of women. It relies on the presentation of a hugely misogynist idea of what a relationship is and should be, what a woman is and should be. She’ll never disagree with you, she’ll never answer back, she’ll never need time alone, she’ll never want to talk about her own life. She is utterly subservient and submissive and there to flatter your ego. The idea that that is good for women is, obviously, absurd. But the idea that this is helpful for men is insulting and reductive. Of course there are real issues with mental health struggles and loneliness, but none of those can be solved by misogyny in app form.
Enabling people to act out these fantasies is much more likely to lead to escalation of those crimes. There’s no evidence that it will be preventative.
Has the anthropomorphizing and, importantly, the feminization of virtual assistants like Siri and Alexa paved the way for techno-sexism?
Researchers estimate that 10 percent of conversations with virtual assistants are abusive in nature. When you look at the billions of people who use [virtual assistants] on a daily basis—the millions of children growing up in homes where they hear them being spoken to in that way on a daily basis—at scale, that has a massive social impact. It’s not to say that it’s the same as a sexbot that’s been manufactured with a rape setting, but it impacts our perception of secretarial and administrative tasks as things associated with femaleness. It’s frustrating that concerns have consistently been raised by feminists and women working in tech and been dismissed and derided while a community of mainly men made a huge profit.
What about the environmental impact of AI? How are the environmental crises caused by AI impacting women first?
We know that women are on the front lines of the global environmental crisis. They are often the first affected and the worst affected. A single ChatGPT search uses 10 times the energy of an average Google search. These connections aren’t being made, and when they are, there’s this shrug of “well, that’s the price we pay for progress.” There is space here for a thoughtful dialogue about what progress is and for whom. There seems to be this assumption that progress means astronomical profits accumulated by a small number of white men in the Global North, and that the price everyone else has to pay for it should be acceptable.
It goes beyond that in terms of the supply chain: the physical materials needed for some of these tools, where they’re coming from, and the people being exploited in the areas where those natural resources are concentrated. We also know that the women who work in the manual AI data-labeling workforce tend to be underpaid, exploited, and sexually harassed.
There are layers and layers of abuse here that are glossed over by these shiny new products, many of which exist purely to give men new ways to harass and abuse women. You can draw a direct line from that to the devastation of our planet.
It’s so hard to see a clear path through all of this. What do you see as the way forward for AI and emerging technologies?
Potential solutions are there, but we do have to act quickly. The most important is regulation.
Globally, the picture is really concerning on that front. The Trump administration is pushing back against it, preventing big tech companies from trying to implement safeguards in their products, [which encourages] other governments around the world to follow suit. At the AI Action Summit in Paris earlier this year, even though 60 countries signed an agreement suggesting that AI should be open and accessible and safe and fair … the US refused to sign [and] the UK government also said that they wouldn’t be signing that agreement.
There are some positive signs out of the Council of Europe to develop what a thoughtful, reasonable framework for regulation would be in a way that wouldn’t stifle progress.
There’s always been a squeamishness and a fear around regulating tech: a sense that it’s impossible to regulate, that tech companies can’t be expected to be in control, that it’s too big, too many people use their products, and it’s all too fast-moving, so we just can’t expect them to do it. If we took that approach to [any other industry], it would sound ridiculous! If you’re making tens of billions of dollars of profit per year, you have the brightest minds in the world working for you, and you’re creating what you purport to be the entire new way of the future, then yes, you do have the tools and funding necessary to make your platform safe. You’re just choosing not to.
It’s not about being anti-tech or anti-innovation. If anything, it’s the opposite. The potential for human benefit is so great that we cannot leave this to spiral out of control in the hands of a few power-hungry tech dudes.