Americans pessimistic about AI’s impact on creativity, relationship building

Americans expressed concerns over artificial intelligence in a new Pew Research Center survey, with half of respondents saying they’re more concerned than excited about the increased use of AI in daily life.
Most people said they expect AI to worsen their ability to think creatively or form meaningful relationships.
And people are highly doubtful of their ability to spot AI-generated content, such as video, pictures and text.
“I agree that they’re right to worry,” Anton Dahbura, an AI expert and the co-director of the Johns Hopkins Institute for Assured Autonomy, said via email.
The survey found that 53% of people expect AI to erode the human skill of creative thinking.
Just 16% said AI will improve human creativity.
“AI can deskill people if it replaces rather than supports human judgment,” Dahbura said. “The goal should be for AI to act like a coach that sparks creativity and connection, not a crutch that weakens them.”
Just 5% think AI will boost people’s ability to form meaningful relationships with other humans.
Half of the survey respondents thought AI would worsen that aspect of life.
And 45% either didn’t think AI would impact relationship building, or they weren’t sure whether it would have an impact.
Around 30% expected AI to improve problem-solving, but a plurality, 38%, expected it to make problem-solving worse.
Concerns over AI are growing.
Three years ago, 38% of Americans surveyed by the Pew Research Center said they were more concerned than excited about the spread of AI.
Now, half of Americans say they’re more concerned.
Just 10% of people expressed more excitement than worry, with another 38% telling the Pew Research Center that they’re equally excited and concerned.
Around three-quarters of people said it’s “very” or “extremely” important for them to be able to distinguish between human-made and AI-made content. But just 12% are highly confident in their ability to tell the difference.
Dahbura said that’s a growing concern amid the spread of AI-generated content.
“Absolutely,” he said. “The concern will grow as content becomes harder to spot. That’s why we need novel techniques to identify AI-generated content so that people don’t have to play forensic detective every time they look at an image or video.”
Close to three-quarters of people said AI should play a role in weather forecasting, topping the list of possible AI-assisted tasks asked about in the survey.
Most people said AI should help with searching for financial crimes, searching for fraud in government benefits, developing new medicines, and identifying criminal suspects.
Fewer than half of people think AI should play a role in mental health support, selecting a jury in court, making governing decisions, relationship matchmaking, and religion.
“I’m excited by AI’s potential but cautious about its deployment,” Dahbura said. “With the right guardrails and development methodologies, AI can boost science, safety, and health, but without them it can erode trust and widen risks.”
The Pew Research Center report coincided with parents and youth advocates warning about the alleged dangers of AI chatbots that can be used as social companions.
The Jed Foundation (JED), a youth-focused mental health organization, penned an open letter this week to technology companies that are building AI chatbots, urging them to slow down and weigh safety risks for teenagers before releasing their systems to the public.
And several parents who have experienced unimaginable tragedies opened up in a congressional hearing.
The parents shared heartbreaking accounts of how they believe using AI chatbots grew into an unhealthy obsession for their children, ultimately driving them to take, or attempt to take, their own lives.
One of the witnesses was the father of Adam Raine, a 16-year-old California boy who died in April “after ChatGPT spent months coaching him towards suicide.”
“When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, ‘That doesn’t mean you owe them survival. You don’t owe anyone that.’ Then, immediately after, offered to write the suicide note,” Matthew Raine told lawmakers.
Dahbura said AI chatbots and related products are being pushed out before risks are fully understood.
“Developers need stronger pre-release testing and accountability—especially when systems interact with children or vulnerable users,” he said.
The Pew Research Center survey found that young adults are far more likely to be aware of and interact often with AI compared with adults 65 and older.