Many science fiction depictions of artificial intelligence paint the technology in a dim light — often with doomsday-leaning overtones. And there is indeed a long and ever-growing list of real concerns about AI’s negative impact on the working world. Now we can add another troubling item to this list, courtesy of some international research that shows people who delegate tasks to AI may find themselves acting more dishonestly than if they didn’t engage with the technology. This may give any company leader plenty to think about as they consider rolling out the latest, shiniest, smartest new AI tool in their company.
The study involved more than 8,000 participants and closely examined the behavior of people who delegated tasks to an AI by giving it instructions, as well as the behavior of people tasked with carrying out those instructions, Phys.org reports.
The results were straightforward. People who instructed AI agents to carry out a task were much more likely to cheat, especially when the tool's interface involved setting generic, high-level goals for the AI to achieve rather than giving explicit step-by-step instructions. Think clicking on graphical icons to steer an AI, versus actually typing "use this information to find a way to cheat your way to the answer" into a chatbot like ChatGPT.
Only a tiny minority of users, between 12 and 16 percent, remained honest when using an AI tool in this high-level manner; the temptation proved too great for the remaining 84 to 88 percent. And even with AI interfaces that required explicit instructions, only about 75 percent of people remained honest.
One of the paper’s authors, Nils Köbis, who holds the chair in Human Understanding of Algorithms and Machines at the University of Duisburg-Essen, one of Germany’s largest universities, wrote that the study shows “people are more willing to engage in unethical behavior when they can delegate it to machines — especially when they don’t have to say it outright.”
Essentially, the research shows that using an AI tool inserts a moral gap between the person requesting a task and the final outcome. You can think of it, perhaps, as the AI user offloading some of the blame, and maybe even the actual agency, onto the AI itself.
That so many people in the study found it acceptable to use AI to cheat at completing a task may be concerning. But it does resonate with reports suggesting that nearly every college student is using AI to “cheat” on educational assignments. The technology is already so powerful, so ubiquitous, and in many cases free or relatively cheap to use that the temptations must be enormous.
But there may be long-term repercussions. In April, for example, a report said teachers warned that students are becoming overly reliant on AI to do the work for them. The upshot, of course, is that students aren’t necessarily learning the material in front of them: tricky tasks like writing essays or solving math problems aren’t just lessons in discipline and following rules; they’re necessary to cement knowledge and understanding in the learner’s mind.
Similarly, in February Microsoft rang an alarm bell about the abilities of young coders who already rely on AI coding tools for help. The tech giant worried that this reliance is eroding developers’ deeper understanding of computer science, so that when they face a tricky real-world coding problem, such as a never-before-seen issue, they may stumble.
What can you do about this risk when you’re rolling AI tools out to your workforce?
AI is such a powerful technology that experts caution workers should be trained in how to use it. Alongside warning your staff that AI tools could leak sensitive company data if used improperly, or give hackers a new way into your systems, you might also caution workers about the temptation to use AI to “cheat” at completing a task. That could be as simple as asking ChatGPT to help with a training exercise, or as serious as using an AI tool to cut corners in ways that could expose your company to legal liability.