I watched Frankenstein, and I’m worried we’re turning AI into a monster

🕒︎ 2025-11-10

AI is slowly becoming a monster that we might not be able to control. Like the hideous creature unleashed by Dr. Frankenstein in the new Netflix movie by Guillermo del Toro, we don’t have a good handle on what we’ve created or what kind of power it can wield. Here’s an example to help illustrate the point.

A humanoid robot named Iron walks out onto a well-lit stage. The bot has a slight swagger, like a model strutting on a catwalk. As this Frankenoid monster prances around, the CEO of the robotics firm Xpeng — which is known more for making electric cars in China than robots — explains how Iron’s gesticulating hand is endowed with 22 degrees of freedom. Dressed in a skin-tight white jumpsuit and wearing trendy athletic shoes, the robot seems strangely human — like Frankenstein’s monster brought to life.

After the demo, social media users — the modern equivalent of people carrying pitchforks in the town square and building a funeral pyre — were quick to pounce. “I think we’ve seen this stunt before,” said one commenter. Another insisted the bot was obviously a real person; a takedown video even claimed the Iron bot has a spine and a bra strap. “Tesla’s robot can’t walk this smoothly — there’s no way this is possible,” said one user. Later, the CEO released a follow-up video showing technicians cutting off part of the jumpsuit to reveal what is obviously a robot’s leg: an actuator and a metal bone. You can almost hear the gasps from the crowd.

Like a modern Prometheus, artificial intelligence — seemingly handed down from on high — has become a monster…and we’re the townspeople.

The movie Frankenstein is a timely reminder about the dangers of modern science (and technology) advancing too quickly. Mary Shelley’s original novel, published in 1818, was prescient in ways the author could never have imagined. As I watched the movie, I couldn’t help but think that artificial intelligence is also a monster of innovation: we’re creating tools and content that we don’t fully understand. Worse yet, the AI images and videos constantly proliferating on social media and across the web are almost indistinguishable from the content that real people have meticulously labored over in apps like Adobe Photoshop. Known as AI slop, this generative content has invaded every corner of the web and will be impossible to wipe clean. Most of it is poorly labeled as AI-generated, and there are few restrictions on how it can be used or protections for real content creators. We’ve already created the monster; now we have to figure out how to control it.

Setting guidelines for our AI creations

AI has outpaced our ability to introduce guardrails. Here’s another glaring example. Tilly Norwood is an AI actress who looks indistinguishable from a real person, at least at first glance, and is a good example of how AI has advanced too quickly. The company behind Tilly has said it is looking for an agent for the AI actress, without sharing many details. The industry went a little ballistic, suggesting that we’re not prepared for a world where an AI actress stars in a movie — asking how something without a soul or a heartbeat could collect a paycheck instead of a real person with actual training and experience in the field. In the Frankenstein movie, there’s an eerily similar scene where Dr. Frankenstein demonstrates how he can reanimate the arm, torso, and brain of a deceased man.
Does that mean the creation is now a real person? Does it have a soul? Those in the audience watching Dr. Frankenstein’s demonstration ask those same questions, suggesting the abomination is premature and dangerous.

We should be asking the same questions about AI. The advancements are coming fast and furious, but we haven’t set proper guidelines. We don’t know what AI is capable of doing yet or what the future holds — for example, how these innovations will change what it means to work and live in modern society. Meanwhile, real content creators, knowledge workers, writers, artists, and filmmakers are the ones who will suffer — and they already are. Accenture recently eliminated 11,000 jobs, pinpointing roles that were not keeping up with AI as an ancillary tool. Are we okay with that? Do we even know how AI technology will impact our own productivity and job performance?

Those who are pro-AI innovation — and I am one of them — tend to talk about AI in a supporting role. With writing, an AI can help us with more mundane tasks like fact-checking and proofreading. Yet it’s far too easy to let the AI do a complete rewrite — or even compose the original piece of writing from scratch. Today, other than running the text through an AI detection app like GPTZero to find out if a human was involved, there are no guardrails or guidelines.

It’s time to set guardrails now, before the abominations of AI become too powerful and ubiquitous. AI slop is hurting content creation; chatbots can hallucinate and dole out incorrect information; humanoid bots can do household chores but already seem eerily sentient. Like anything new and innovative, there is an allure to the idea that AI can change how we work and even entertain us in ways we’ve never imagined. Many of the AI tools we use now are practical and helpful, but we’re far from understanding the ramifications for our mental health, how to pivot so that people still have gainful employment, or how to address the ethical issues.

Should humanoids have rights and privileges?

Another important topic to consider, one that Frankenstein also brings to light, is whether these new creations should have rights and privileges similar to humans. I mentioned the technicians cutting open the Iron humanoid’s leg because, in some ways, it was another example of not having guardrails. While it was helpful to see behind the veil, it wasn’t immediately obvious beforehand whether the Iron humanoid was an actual robot or a human, and soon humans won’t be able to tell the difference at all. For example, when Tesla recently demonstrated how the Optimus robot can perform mundane tasks, it wasn’t obvious that a human operator was involved. The bots seemed highly capable, but it was only later revealed that they were not at all autonomous.

Another issue Frankenstein explores is the notion of true evil. Slight spoiler alert here: by the end of the film, you will start questioning whether it’s the creator or the creature that is the true monster. We need to ask similar questions about AI, especially in relation to mental health. People are constantly talking to chatbots about personal issues, yet there is very little visibility into those exchanges and few guidelines about the advice being given. When an AI misleads someone and they harm themselves, should we blame the AI bot itself or the AI bot’s creator? ChatGPT, for example, is a large language model: it infers meaning and intent from the user’s prompt and generates a response one token at a time, drawing on patterns learned from enormous amounts of training data rather than a database of canned answers.
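To make that last point concrete, here is a minimal sketch of how a language model produces text, using the small open-source GPT-2 model and the Hugging Face Transformers library as a stand-in (ChatGPT’s own model is proprietary and far larger). The loop simply predicts the next token over and over; nothing is retrieved from a store of prepared answers.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available language model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Frankenstein's monster is really a warning about"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate 20 tokens, one prediction at a time.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits      # a score for every possible next token
    next_id = logits[0, -1].argmax()          # greedily pick the most likely one
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))

Real chatbots sample from those scores rather than always taking the single most likely token, and they run vastly bigger models, but the basic loop is the same.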
Behind the curtain, there are real human engineers assembling the code that makes ChatGPT possible. Do we know if those engineers can be trusted? How much access do we have to their process to tell if it’s legitimate? And it’s only a matter of time before someone buys one of these expensive humanoids and does something terrible to it, probably as a publicity stunt.

In the end, as we see new AI innovations on a daily basis, we need to catch up to the technology as quickly as possible — Frankenstein has already been unleashed.
