New Healthcare Study Warns About The Hidden Dangers Of AI At Work
New Healthcare Study Warns About The Hidden Dangers Of AI At Work
Homepage   /    technology   /    New Healthcare Study Warns About The Hidden Dangers Of AI At Work

New Healthcare Study Warns About The Hidden Dangers Of AI At Work

Janice Gassam Asare,Ph.D,Senior Contributor 🕒︎ 2025-10-29

Copyright forbes

New Healthcare Study Warns About The Hidden Dangers Of AI At Work

A recently published study provides meaningful insights into how AI tools can exacerbate existing racial biases. A recently published study provides meaningful insights into how AI tools can exacerbate existing racial biases. Researchers sought to examine racial bias in psychiatric diagnoses and treatment across four leading large language models (LLMs) including Claude, ChatGPT, Gemini, and NewsMes-15. The study explored ten psychiatric patient cases representing five diagnoses and there were three conditions: race-neutral, race-implied, and race-explicitly stated. The researchers assessed the recommendations and treatment plans provided by the different LLMs and two psychologists examined and evaluated the outputs for bias. The results revealed that LLMs tended to suggest inferior treatments when the patient race was explicitly or implicitly indicated. The results of the study provide some important findings for the healthcare industry. A 2025 study indicates that 65% of U.S. hospitals use artificial intelligence or predictive models to identify high-risk patients, for follow-up care recommendations, to monitor health, recommend treatments, and for things like billing and scheduling. AI tools that are utilized for convenience, speed and greater accuracy may simultaneously be exacerbating existing biases. With more and more hospitals relying on AI for various tasks, it is imperative to understand the limitations of this type of technology. Outside of healthcare, there are wide implications that must also be considered; any workplace that leans on AI tools to increase efficiency must be aware of these constraints. Many organizations and institutions now rely on AI tools during the hiring process to screen resumes, interview candidates and review job application videos. An AI tool that is used, for example, to assess the transcript of either a job interview or a job application video, may provide a lower evaluation or rating for a job candidate that uses African American Vernacular English (AAVE) or a dialect that is specific to a particular marginalized ethnic group. AI models are trained on Standardized American English so anything outside of this may be deemed as unprofessional and may unfairly and unjustly bias job candidates. A 2024 Nature research study confirmed this, revealing that LLMs exhibited covert racism called dialect prejudice towards speakers of African American English. To address these limitations, there are several actions that can be taken by organizations and institutions that rely heavily on AI tools for workplace decision-making. First, there should be more transparency regarding how AI tools are used to make decisions. Employees, prospective employees and customers should be provided with clear information regarding how AI tools are being used. Companies should also demand frequent audits of AI tools to ensure equity and fairness. Organizations can also request information about the datasets that are used to train the AI tools that are being utilized. It’s important to keep in mind that AI works best with human oversight; there should not be heavy reliance on these tools given how deeply entrenched their biases are. The companies that have created these AI tools must also be held accountable. The public should demand transparency and should push for necessary changes to ensure that these tools address bias at the root. During the creation process, guidance and advice from experts in equity and ethics should be incorporated into the design and development of the AI tool. When technology is created, there is an understandable focus on accuracy, with equity often taking a backseat. To ensure AI tools are useful and effective, fairness must be prioritized and embedded into the fabric at inception. MORE FOR YOU Editorial StandardsReprints & Permissions

Guess You Like

High-tech cleaning gadgets that actually make life easier
High-tech cleaning gadgets that actually make life easier
LG Styler Smart Steam Closet ...
2025-10-20