Copyright SiliconANGLE News

The cloud-native ecosystem is shifting fast, and developers are at the center of that transformation. From Kubernetes to observability stacks, open-source projects are evolving from niche innovations into enterprise-scale infrastructure standards driving AI, security and automation. A mid-year snapshot from the Cloud Native Computing Foundation highlights just how quickly that evolution is happening: OpenTelemetry has surged to become the second-highest-velocity project in the CNCF, just behind Kubernetes itself, a sign that visibility, security and governance are defining the next phase of cloud-native maturity.

“In the cloud-native era embraced at KubeCon + CloudNativeCon NA 2025, organizations face a fundamental pivot,” said Paul Nashawaty, principal analyst with theCUBE Research. “Their next release cadence isn’t just about ‘faster’; it’s about ‘secure from day zero,’ which means integrated security (DevSecOps), native container monitoring (observability), and automated compliance governance layered into every image, cluster and service.”

TheCUBE, SiliconANGLE Media’s livestreaming studio, will be covering the latest news and announcements live from KubeCon + CloudNativeCon NA, from November 10-13. Tune in for on-site reporting and exclusive interviews with key cloud-native executives from CNCF, Red Hat Inc., Google LLC and AI leaders as theCUBE explores how cloud-native technologies are powering the next wave of AI, platform engineering and open-source community growth. TheCUBE will highlight how enterprises, developers and vendors are advancing Kubernetes as the foundation for AI workloads, scaling complex infrastructure and sustaining the open-source ecosystem. (* Disclosure below.)
Red Hat releases AI 3 for cloud-native world

The wave of open-source innovation sweeping through the cloud-native ecosystem is about to play out on the ground at KubeCon, where major contributors are rolling out fresh updates and frameworks to power the next wave of intelligent infrastructure. One of the biggest comes from Red Hat. Last week, Red Hat announced a significant evolution of its enterprise AI platform with the release of AI 3, which is designed to manage AI workloads across cloud, edge and data center environments. Its hybrid architecture allows enterprises to run next-generation AI anywhere, accelerating inferencing and giving teams greater freedom in model selection.

With the rise of agent-based systems, Red Hat tailored AI 3 to support autonomous AI by adding the Llama Stack application programming interface layer. “We give you the flexibility to choose whatever framework you prefer,” said Joe Fernandes, vice president and general manager of Red Hat’s AI business unit. “We want to be the platform that can support all agents regardless of how they’re built.”

A key feature of Red Hat’s latest AI release is the use of the llm-d open-source project, which combines vLLM and Kubernetes for AI inferencing at scale. The release bundled enhancements across Red Hat’s OpenShift AI, Enterprise Linux AI and AI Inference Server with the goal of centralizing the inference process into one platform. Red Hat’s approach is to create an infrastructure that is stable, open and built to scale, according to Stu Miniman, senior director of market insights at Red Hat. “Open source unlocks the world’s potential,” Miniman said during an interview with theCUBE. “We’re not building all the applications. We’re giving you the tools and the capabilities and freeing up your people to be able to take advantage of that more than anything else.”

Decade milestone for GKE

A key milestone that will undoubtedly be a focus during this year’s KubeCon gathering is the 10th anniversary of Google Kubernetes Engine. When the beta release of GKE launched in June 2015, the tool rapidly became a cornerstone of Google’s overall cloud strategy. GKE has evolved significantly over the years, including an upgrade in capacity to support 65,000-node clusters. The maturity of GKE, coupled with platforms such as Cloud Run for running code on scalable infrastructure, has allowed organizations to build applications with minimal operational overhead.

This has proved significant in the AI era. Kubernetes has evolved from a general-purpose container orchestration tool into a foundational layer for inferencing at scale, and GKE is designed to handle AI’s complex and bursty workloads. Tools such as GKE Autopilot and the Inference Gateway are also helping enterprises manage the demands of enterprise AI adoption, a natural evolution for the tool Google created a decade ago. “The constant improvements made by GKE over the past 10 years profoundly changed the way we design, deploy and evolve our services,” said Leon Bouwmeester, director of engineering and head of Hue Platform at Signify. “We spend less time on infrastructure management and can focus our efforts on what really matters: the quality of the user experience and the speed of innovation.”

Innovation in infrastructure modernization

In addition to Red Hat and Google, a number of companies will be showcasing new cloud-native tools and infrastructure modernization for AI deployment. One of Google Cloud’s partners, Elasticsearch B.V., has embedded its vector database into Google’s Vertex AI platform. The goal is to enable faster, more efficient AI-driven search and analytics experiences.
Vast Data Inc., with the stated aim of transforming itself into the “operating system” for AI, announced SyncEngine in August, which provides an onboarding solution for unstructured data to build AI pipelines. It is the latest addition to the Vast AI OS platform, which combines core distributed computing services into a unified data layer for hybrid environments. This week, the composable cloud infrastructure provider Vultr announced new plans for optimizing spend and price/performance. The latest release will be part of the mix next month at KubeCon, where fresh looks at open-source tools and modern infrastructure will be on full display.

“I’m really looking forward to KubeCon + CloudNativeCon North America in Atlanta,” said Rob Strechay, principal analyst at theCUBE Research. “The convergence of AI workloads on Kubernetes, platform engineering innovation, new frontiers in observability, stronger cloud native security models, and the rise of edge and hybrid deployments is shaping the future of modern infrastructure. This event is where those breakthroughs come to life, and I can’t wait to be part of the conversation.”

TheCUBE event livestream

Don’t miss theCUBE’s coverage of KubeCon + CloudNativeCon NA from Nov. 10-13. Plus, you can watch theCUBE’s event coverage on-demand after the live event.

How to watch theCUBE interviews

We offer various ways to watch theCUBE’s coverage of KubeCon + CloudNativeCon NA, including theCUBE’s dedicated website and YouTube channel. You can also get all the coverage from this year’s events on SiliconANGLE.

TheCUBE podcasts

SiliconANGLE’s “theCUBE Pod” is available on Apple Podcasts, Spotify and YouTube, which you can enjoy while on the go. During each podcast, SiliconANGLE’s John Furrier and Dave Vellante unpack the biggest trends in enterprise tech, from AI and cloud to regulation and workplace culture, with exclusive context and analysis.
SiliconANGLE also produces its weekly “Breaking Analysis” program, where Dave Vellante examines the top stories in enterprise tech, combining insights from theCUBE with spending data from Enterprise Technology Research. It is available on Apple Podcasts, Spotify and YouTube.

Guests

During KubeCon + CloudNativeCon NA, theCUBE analysts will talk with industry experts from Red Hat, Google, Dynatrace LLC, Constant Company LLC (d/b/a Vultr), Honeycomb Inc., Backblaze Inc., Spacelift Inc., Union AI Inc., Vast Data and Elasticsearch, among others, about the latest advances in cloud-native architecture, AI infrastructure modernization and open-source innovation.

(* Disclosure: TheCUBE is a paid media partner for the KubeCon + CloudNativeCon NA event. Neither Red Hat Inc., the headline sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Image: SiliconANGLE