Artificial Intelligence and Equity: This Entrepreneur Wants to Build AI for Everyone


2025-10-22



When John Pasmore watched his 16-year-old son playing around with AI, he recognized "how much bias" was built into the system. On X, formerly Twitter, Pasmore witnessed people react to AI-generated racist comments with humor. But the silence from leaders in the space was the most deafening thing. "That was the moment I knew I had to build something different," he says.

Pasmore is the serial entrepreneur who founded Latimer AI and serves as its CEO. The platform, named after inventor Lewis Latimer, was built to access multiple large language models powered by its own curated database -- all in an effort to reduce harmful and inaccurate responses about and for Black and Brown people.

When I spoke with Pasmore via Google Meet, he was in Greensboro, North Carolina, where hours earlier he had been sharing Latimer AI with administrators at North Carolina Agricultural and Technical State University so they could see the platform in action and compare Latimer's performance with other tools they're considering for their students. His takeaway: "People don't just want tools. They want accuracy."

Pasmore began his career in media and publishing as a co-founder of Oneworld, a late '90s youth culture and hip-hop publication that approached social and political issues from a multicultural perspective. After receiving a computer science bachelor's degree from Columbia University in 2018 -- and surveying the bias-embedded AI landscape as a father -- Pasmore launched Latimer AI in 2023, a product designed to be more accurate and to mitigate bias while performing like any other LLM on the market.

The reason generative AI LLMs are biased makes sense, even if it's unfair and potentially dangerous. Training data is collected from overrepresented groups, which means the model learns patterns from what it sees most often. For example, if you give an AI chatbot a leadership-level job title and ask it to describe the type of person best suited for it, it will likely describe a man. This is due to historical and social biases, but also to feedback loops: humans absorb the LLM's outputs and create content that mirrors them, which reinforces the harm. While steps have been taken to mitigate this bias, ultimately it comes down to history -- which means systemic inequalities that keep perpetuating themselves.

How Latimer AI works

Latimer AI is designed to help you preserve critical thinking and curiosity while using artificial intelligence. The platform uses a RAG (retrieval-augmented generation) model and its own information database, which means Latimer can look up answers from external documents and its own database, compare them and generate a response. It works alongside up to 10 different LLMs, including those used by ChatGPT, Claude and Google's Gemini, which pair with the Latimer AI database. This, in turn, produces more inclusive answers and culturally fluent responses. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Latimer AI is a web-based application with API integration capabilities that has quickly grown beyond individuals to include classroom workflows, educational institutions, businesses and organizations.
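To make the architecture concrete, here is a minimal sketch of the general retrieval-augmented generation pattern, assuming a toy in-memory document store and a stubbed model call. It illustrates the technique the article describes, not Latimer AI's actual code; the documents, the overlap-based scoring and the llm_generate stub are hypothetical stand-ins.

# A minimal sketch of the retrieval-augmented generation (RAG) pattern described
# above: retrieve passages from a curated database, then pass them to a language
# model as grounding context. Everything here is an illustrative stand-in.

CURATED_DB = [
    "Lewis Latimer was an inventor who improved the carbon filament used in electric light bulbs.",
    "Environmental racism refers to the disproportionate exposure of Black and Brown communities to pollution and environmental hazards.",
    "Retrieval-augmented generation grounds a model's answer in documents looked up at query time.",
]

def retrieve(query: str, db: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for real vector search)."""
    query_words = set(query.lower().split())
    scored = sorted(db, key=lambda doc: len(query_words & set(doc.lower().split())), reverse=True)
    return scored[:top_k]

def llm_generate(prompt: str) -> str:
    """Placeholder for a call to an underlying LLM; a real system would call a model API here."""
    return "[model answer grounded in]\n" + prompt

def answer(query: str) -> str:
    """Build a context-grounded prompt from retrieved documents and generate a response."""
    context = "\n".join(retrieve(query, CURATED_DB))
    prompt = f"Use only this context to answer.\nContext:\n{context}\n\nQuestion: {query}"
    return llm_generate(prompt)

if __name__ == "__main__":
    print(answer("What is environmental racism?"))

In a production system, the keyword overlap would be replaced by a proper search over the curated database, and the stub by a call to one of the paired LLMs.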
When you sign up, you can register as a student, educator, business or developer. Plans vary in price and features, including a free plan that offers fewer models and is limited to 10 interactions a day. For API access, pricing starts at less than 10 cents per 1,000 tokens across both input and output. You use Latimer AI the way you would ChatGPT or other chatbots, and the platform will search the web to ensure you get the most current information in its responses. You can also integrate Latimer AI into an existing application through its API.

The goal of Latimer AI is to develop a nonpartisan perspective that delivers accurate responses while remaining empathetic to real-world experiences. "I look at it as being more empathetic," Pasmore tells me. "If you ask [ChatGPT], there's a very sterile definition of environmental racism." The ChatGPT response to this question was concerned and justice-focused, including a bullet-point list with short, snappy takeaways defining the phrase. In contrast, Latimer AI gives you several examples that both explain the concept and humanize it. "[Latimer AI] includes the human toll of environmental racism, as opposed to just the definition of it," Pasmore says.

Pasmore shares his screen to show an example comparing another LLM's output with Latimer AI's. The difference was stark -- and to his earlier point, the Latimer AI example contained significantly more context, examples and education, but also more warmth infused into its response. A sense of humanity. One part of the Latimer AI response reads, "So while I don't have emotions, the consensus in ethics and public health frames environmental racism as both harmful and morally wrong."

Pasmore sees Latimer AI as more than just a project; it's also a responsibility. "There are other things that are important besides efficiency and things that, in my opinion, satisfy the capitalist narrative," he says. "There's more to the story. The human story."

Pasmore didn't build Latimer AI to compete with OpenAI, but rather to ensure younger generations -- especially Black and Brown kids -- don't grow up thinking the machine's version of history is the only truth. "I want them to ask better questions," Pasmore says. "The whole point is to make curiosity a muscle again." That resonates for me, a Black kid-turned-woman. Perhaps our interview was proving his point in real time.

The effect and future of AI

It's hard not to take chatbots' answers as the truth when you're using them every day, regardless of the model. Their answers seem so authoritative, and they have appealing qualities. Anthropic's Claude often makes me feel emotionally valued and understood, while ChatGPT helps me become more strategic and business-minded. Still, I'm aware that answers to more nuanced, complex questions could fluctuate depending on what I said -- or how I asked the question. I also know that AI isn't neutral; it's a reflection of who builds it.

But can technology, as innovative as it is, close the gap that the humans behind it created? Pasmore says yes, adding, "AI can and should be accurate and inclusive. That's not a technical limitation, but it is a choice." From my time with Pasmore, I find myself thinking about the longevity of cultural history in a technological age. Legacy comes to mind, but so does cultural preservation. How do you control a narrative that exists within a system -- one that supports both innovation and bias? According to Pasmore, we have to demand accuracy.
"I've spent my life in media and technology. And every era has had the same question, 'Who controls the narrative?,'" Pasmore says to me during the final moments of our interview. "Latimer AI is my way of answering that. If I do this right, it will be a historical corrective, a record that can't be rewritten." But for that to happen, we must start asking better questions about ourselves and the world -- including how technology is instrumental in shifting our answers to both. From Pasmore's point of view, it must start (and end) with AI for everyone.
