Equity And Serving The Community With AI

By John Werner, Contributor · 2025-10-31


Sometimes it’s easy to forget one of the main goals that innovators should be pursuing with AI: making sure that it serves the common good. There’s a push to include people in AI decisions, which is where the term “human in the loop” comes in handy. But that’s just one aspect of the broader idea that AI should be useful to a wide range of people, not just a handful of technologists. In other words, it’s vital to build humanity into systems, but that’s different from making sure they are community-centered and working on community solutions.

“AI can only boost the under-resourced nonprofit world if we design it to serve the communities we care about,” write Michelle Flores Vryn and Meena Das in the Stanford Social Innovation Review. “But as non-profits consider how to incorporate AI into their work, many look to expertise from the tech sector, expecting tools and implementation advice as well as ethical guidance.”

Here and elsewhere, people are looking to solve the challenges of a quickly changing world. And that happens through deep collaboration between government, business, and academia.

Doing the Work

At a recent conference at Stanford, Carlos Ignacio Zavala of Whiteboard Advisors interviewed Vanessa Parli, Director of Research at Stanford HAI, and Katy Knight, President of the Siegel Family Endowment, about their experience at nonprofits promoting this kind of community-facing effort.

“We can't just have computer scientists in their office building the tech by themselves,” Parli said. “We need doctors, lawyers, humanists, all these educators, all of these different disciplines and areas of expertise to ensure that the technology benefits everyone. And so I guess as far as where we're focusing, we focus on human-centered AI.”

“We look at ways to leverage tech for good,” Knight said, “particularly emerging technologies like artificial intelligence, but we also think deeply about how to mitigate the risk and harm of pervasive technology, that if we have tech everywhere, all the time, in every application, there may be some pitfalls that we need to be thinking about.”

Strategies for Change

Knight talked about planning to address the pitfalls she mentioned.

“We have a lot of strategies around how tech can both be utilized by the social sector to create efficiencies,” she said, “to scale impact, to think differently about the way that we might solve some of the intractable problems that we've seen across decades. But we're also thinking about how we make sure that the problem set that comes from communities, the actual, tangible problems that people are trying to address.”

It’s also important, she noted, to think about where tech can and cannot be helpful.

“We think a lot about what we call design, govern, fund, deploy,” she said. “These different spots in the life cycle of technology where we can integrate more of that tangible sort of human-centered lines of thinking and questioning that will hopefully lead us toward more technologies that actually do solve real problems.”

Parli mentioned an important tool at Stanford HAI: an ethics and society review board that can help vet ideas.

“We have large grant programs, and we distribute funding, and (with) all of the proposals that come in for research funding, the researchers have to submit a one-pager on what might be the implications of this research and this tech, if it were to become ubiquitous across society,” she said, adding that applicants also have to talk about risk mitigation.
“We have an interdisciplinary panel of people, from computer scientists, from ethics, from healthcare, from all of these different disciplines, who review those statements.”

It’s also important, she explained, to use community-oriented design principles.

“We've seen, historically, a lot of tech where, of course, the people developing it have had great intentions, but they didn't have that interdisciplinary perspective to really understand the needs of society, and the broader public wasn't quite involved,” she said.

“A lot of people can tell you a lot about their solution and how fantastic it is,” Knight added. “But when you ask them what problem that solution is needing to address, they don't have as crisp an answer. And that, to me, is a red flag.”

A Reading AI

Knight talked about funding Quill, an educational project.

“They have tons of data on actual teacher feedback, and they use that in their own tool that helps students and teachers sort of assess their writing,” she said. “It's sort of a copilot, but not in the sense that it's prompting you and telling you what to do proactively, but more so that it's able to co-pilot when you need that assistance, and (give) help specifically designed with educator input, to help guide students in the way that a teacher or coach might, toward better writing outcomes.”

Parli added some broader comments about funding the various parties that will develop AI for the benefit of the public.

“(We have to keep) thinking about how cultural values are embedded into these AI systems, and thinking about, again, working with the humanists, who actually study culture, and then the communities across these different cultures to understand what is coming out of the models,” she said.

Working on the Edge

Parli later mentioned the work of various teams and individual researchers doing fundamental work in this area.

“There needs to be people who are willing to place bets on these ideas,” she said. “Historically, it has been the government and philanthropy. Philanthropy can’t fill the hole, but there are opportunities.”

During the panel, the pair talked about talent, partnerships, and public data sets, ending with an urge to vet projects and keep AI work on the right path.

“Clearly define the problem,” Parli said, “and if someone's pitching you, ask them what the problem that they're trying to solve is, before they start talking about their solution. I'm not letting anyone pitch me on yet another thing that turns out to be a chatbot with a shiny skin on it, that's the same ChatGPT that you could access in a browser, but they've made it non-profit friendly, and now want $10 million.”

“Think critically about the technology, and what you're seeing, as far as what's coming out in the media and some of the large AI labs, and get a diverse view about what the research is actually saying,” Knight added. “Think about what the technology can actually do, and the opportunities for the technology, because there are a lot, but they're not guaranteed, and it's up to all of us to make sure that we build the technology in ways that we want it to be built and used.”

All of this is helpful for serving the community with AI. Let’s see what happens as we move into 2026.
