Copyright syracuse.com

Lucas Roberts is an independent AI researcher based in New York City.

Believe it or not, modern software powering everything from Mars rovers to your morning Zoom call shares a surprising commonality: It's all built by volunteers. That's because at its heart, open-source innovation, which is essential to advances in critical technologies like artificial intelligence (AI), is about collaboration, but on a global scale.

Let me explain. While every major technological breakthrough requires collaboration, the open-source ecosystem has created a powerful symbiotic relationship among developers and volunteers that accelerates AI innovation far beyond what any single organization could achieve alone. When a developer releases an open-source model, they're tapping into a global network of brilliant minds who will push the technology further, often in their free time. And because the underlying code and data are freely available for anyone to use, modify and distribute, developers can build upon existing work rather than starting from scratch. The result is an entirely new innovation economy, powered as much by volunteers as by big tech companies.

For researchers like me, contributing to these models is how we advance the state of the art. I can test new optimization techniques on advanced open-source models that I could never afford to develop from scratch. A graduate student can experiment with novel safety mechanisms using advanced open-source frameworks that have served as foundations for major breakthroughs in computer vision and language processing.

This creates a virtuous cycle: Volunteers contribute improvements that make base models better, while developers provide computational resources that individual contributors could never match. The developers get distributed R&D from the world's top talent; the volunteer community gets access to cutting-edge technology that democratizes innovation.
The volunteer community also serves as AI's immune system, conducting comprehensive stress-testing. When security vulnerabilities emerge in software, volunteer developers identify and fix problems faster than any commercial team could manage. Thousands of us choose to spend our nights and weekends on this work. We do it because open-source AI democratizes access to technology that would otherwise be locked behind corporate paywalls, and because open-source AI innovation can help rapidly build and deploy custom software in response to disasters like floods and hurricanes, making every late night worth it.

Unfortunately, New York's recently passed RAISE Act threatens the access that I and other volunteers have to the collaborative open AI ecosystem we helped build and rely on. First, the act holds "large developers" responsible for virtually any potentially harmful use of their AI models, even by bad actors completely outside their control. This regulatory framework is fundamentally incompatible with how the technology works, especially open source, and would force the companies that invest the most in cutting-edge models to restrict access and sharing.

What's more, the act's arbitrary financial definition of a "large company" has no correlation with actual model capabilities or dangers. In fact, some foreign models continue to outperform many U.S. models at a fraction of the cost threshold that defines a "large company" in the RAISE Act's text. The act goes even further, holding companies responsible for potentially harmful uses of derivative models created by others using their releases, including any new models costing as little as $5 million, a small fraction of what it costs to create today's leading AI models.
This means a covered developer could be responsible not only for how others use its own AI models, but also for how a downstream developer with modest resources changes the original developer's open-source model, even without the original developer's direct involvement. No rational company would accept that risk, and open AI models would be a thing of the past. No rational open-source developer would accept this risk, either. And without the open-source immune system, these models will be less safe.

Without access to cutting-edge base models, the entire innovation cycle breaks down. Researchers lose the foundation they need to develop new techniques. Developers lose their distributed R&D network. And New York's innovation economy slows to what individual organizations can achieve in isolation.

The future of AI innovation depends on preserving the structural collaboration between developers and volunteers, which is exactly what the RAISE Act would break down. If Gov. Kathy Hochul wants New York to become a leading AI hub, she should protect our access to open-source AI models and veto the misguided RAISE Act. New York's relevancy in the global AI race is at stake.