By Janice Gassam Asare, Ph.D., Senior Contributor
Justice AI GPT may be the world’s first large language model-agnostic AI framework to solve the bias problem.
Many organizations have been relying on artificial intelligence (AI) to assist with workplace decision-making, using AI tools for recruitment, interview evaluations, hiring and even performance evaluations. With the rise of AI, it has become abundantly clear that AI tools are riddled with biases that impact their use and effectiveness. Just like humans, AI systems, without human oversight, will revert to their biases. "Technology can crunch numbers and generate data, but it's humans who must interpret it and make the final call," explained Josh Quintero, communications manager for the city of Lynchburg, Virginia. "In local government, we have a responsibility to do this work fairly and transparently because we owe it to the people we serve."
Human biases are replicated in the AI systems used for workplace decision-making. "When AI is used in hiring or performance reviews without proper input or calibration, it doesn't just carry bias—it rewrites value," Ava Toro, global research consultant for the Consumer Climate Report, wrote in an email. She went on to explain, "A person can contextualize growth at a small business as equal to growth at a Fortune 500, but an uncalibrated system reduces that work to 'less than.' That's not just bias—it's a system that structurally misreads talent, disproportionately impacting employees of color and those from certain class backgrounds."
Rather than discouraging the use of AI, technologist Christian Ortiz developed a tool to address AI bias head-on. "In late 2022, while beta testing ChatGPT 3.5, I saw what others missed," he explained. "Bias was not a glitch in AI; it was the design. As a decolonial social scientist and justice advocate, I asked myself, 'Where does this bias come from, and what would it take to dismantle it completely?' That question led me to build Justice AI GPT. I authored the Decolonial Intelligence Algorithmic Framework™ and DIAL, the Decolonial Intelligence for Access and Liberation, and I created the world's first decolonial dataset. These innovations are my intellectual property, and they became the foundation for my Justice AI GPT."
Ortiz went on to explain how Justice AI GPT is the world’s “first large language model-agnostic AI framework to actually solve the bias problem.” So how exactly does Justice AI GPT fix the AI bias problem? “Justice AI works by spotting and dismantling the biased information that comes from Eurocentric, Western colonial datasets,” Ortiz explained. “Unlike other tools that try to patch the harm after it happens, Justice AI prevents bias at the source. It does this by pairing OpenAI’s massive datasets with my decolonial dataset, which was built in collaboration with more than 560 global experts who contributed over three decades of knowledge each from their communities and professions. The result is the first dataset in the world designed not to replicate colonial patterns, but to actively counter them.”
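Ortiz has not published the framework's internals, so the mechanics described above can only be illustrated in principle. The sketch below is hypothetical throughout (the function names, the preamble text and the stand-in model are all invented for illustration); it shows what "model-agnostic" could mean in practice: the same corrective instruction layer wrapping whichever LLM backend happens to answer.

```python
# Illustrative sketch only: Justice AI GPT's framework and dataset are
# proprietary, so nothing here reflects its actual implementation. The
# point is the architecture: a corrective instruction layer that wraps
# any text-in/text-out model, which is what makes it model-agnostic.

from typing import Callable

# Hypothetical stand-in for a bias-countering instruction layer.
FRAMEWORK_PREAMBLE = (
    "Before answering, check the draft response for Eurocentric defaults, "
    "such as treating Western norms as universal, and revise accordingly."
)

def ask_with_framework(model_call: Callable[[str], str], prompt: str) -> str:
    """Apply the same instruction layer regardless of which LLM answers."""
    return model_call(f"{FRAMEWORK_PREAMBLE}\n\nUser request: {prompt}")

# Any backend exposing a text-in/text-out interface can be plugged in.
def demo_model(prompt: str) -> str:
    return f"[model output for: {prompt[:50]}...]"

print(ask_with_framework(demo_model, "Draft a job posting for a data analyst."))
```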
Many organizations and institutions lean on AI to help them make quick and efficient workplace decisions and to streamline what can be long and convoluted processes, but the convenience of AI is not without its risks. Ortiz explained, “In policy and workplace culture, bias often hides behind coded language. Job postings or HR documents that emphasize ‘cultural fit’ or ‘strong communication skills’ seem neutral but often reinforce whiteness as the standard. Justice AI identifies those hidden codes and rewrites them in ways that affirm global majority expression, multilingualism, and neurodivergent communication styles. It ensures that employees are evaluated by their contributions, not penalized by unspoken colonial norms.”
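To make the idea of "hidden codes" concrete, here is a deliberately naive sketch, not Justice AI's method (its detection logic is not public), that flags phrases often cited as coded in job postings and suggests plainer alternatives. A real audit would weigh context rather than match keywords, and the phrase list here is an example, not a vetted lexicon.

```python
# Deliberately naive illustration, not Justice AI's method: keyword
# matching cannot capture context, but it shows what "flag and rewrite"
# means in the simplest possible terms.

# Hypothetical phrase list; the mappings are examples, not a vetted lexicon.
CODED_PHRASES = {
    "cultural fit": "alignment with the team's documented working agreements",
    "strong communication skills": "clear written updates for the team",
    "native english speaker": "professional working proficiency in English",
}

def audit_posting(text: str) -> list[tuple[str, str]]:
    """Return (flagged phrase, suggested rewrite) pairs found in the text."""
    lowered = text.lower()
    return [(p, r) for p, r in CODED_PHRASES.items() if p in lowered]

posting = "Seeking a native English speaker who is a great cultural fit."
for phrase, rewrite in audit_posting(posting):
    print(f"Flagged '{phrase}' -> consider: '{rewrite}'")
```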
Ortiz shared that 112 organizations currently use Justice AI GPT in the workplace, applying it to tasks such as bias audits of policies and procedures and DEI coaching plans. "Instead of guessing where bias might exist, [organizations] can now pinpoint it with precision and redesign systems in real time," Ortiz said. "These organizations also use Justice AI to reshape training modules and leadership development, so equity is no longer treated as an afterthought but as the operating principle. I also have two major corporations using Justice AI across their HR departments. In hiring, the tool prevents qualified candidates from being excluded because of ethnic names, non-Western education pathways, or neurodivergent communication styles. In training and development, Justice AI powers cultural impact programs and bias coaching so workplace culture shifts at a systemic level, not just in isolated workshops."
Ortiz shared what he envisions for the future of his innovative tool. “In the next few years, I want to see Justice AI reach a million users. I want it in the hands of organizations across every sector and adopted by governments around the world. The global landscape is shifting, markets are interconnected, migration is reshaping demographics, and technology has collapsed distance. How we communicate across cultures has never been more important, and it will decide whether we deepen divisions or build solidarity.”