Copyright SiliconANGLE News

Since artificial intelligence systems are both valuable targets and potential risk amplifiers, implementing zero-trust AI is crucial in today’s digital landscape. Red Hat Inc. is pioneering zero-trust AI, turning trust into a continuous process that boosts safety, accountability and reliability in critical environments. By enforcing ongoing verification, strict access controls and compartmentalization, it protects sensitive data, preserves model integrity and mitigates the risk of major AI breaches, according to Anjali Telang (pictured, left), senior principal product manager of OpenShift Security and Identity at Red Hat.

“Zero trust in general means that you trust no one, you always verify, and then you base that verification on an identity, and then you trust the person,” Telang said. “With AI, we want to sort of bring in the same trust that we already have built into the system. We want to make sure that the users, the machine, all the trust that we have brought in with the best practices around that, translates to AI workloads, AI agents.”

Telang and Roman Zhukov (right), principal security community architect, Open-Source Program Office, at Red Hat, spoke with theCUBE’s Savannah Peterson and Rob Strechay at the KubeCon + CloudNativeCon NA event, during an exclusive on-the-ground broadcast from theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the critical role of zero-trust AI and how Red Hat is pioneering its implementation. (* Disclosure below.)

Zero-trust AI requires continuous innovation

Zero-trust AI is a dynamic framework in which trust is continuously verified and security boundaries are strictly enforced. Developers must experiment with Kubernetes and emerging technologies to effectively implement continuous verification, compartmentalization and strict access controls, Telang pointed out.

“Some of the things that we are hearing from people is we have onboarded our applications on Kubernetes,” she said.
“Now, AI is a whole new thing that we want to tackle. We already have put in so much work in getting these technologies right. We don’t have to rebuild everything. There’s a lot of reuse that we can do. We can use the same principles, zero-trust principles, and then extend them to AI. With AI, there’s a lot of fear involved, but my response to that is just be curious. Try it out in a safe environment and then extend it.”

In an era where data is the lifeblood of organizations, digital sovereignty has become a strategic imperative. Achieving it demands a multi-faceted approach, such as confidential computing, especially in the AI era, where data is shared globally, according to Zhukov.

“We hear these so-called digital sovereignty concerns,” he said. “Everybody wants to make sure that the technology that they build and use, they can control them, and this notion expands to AI as well. That’s why technologies like confidential computing, for example, come into play. Confidential computing is all about securing data in use when you can protect your workloads while in use so nobody, including the cloud providers or administrators, can access your data because it’s encrypted.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the KubeCon + CloudNativeCon NA event:

(* Disclosure: Red Hat sponsored this segment of theCUBE. Neither Red Hat nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE