AI-powered networking is pushing infrastructure to the forefront, turning it into the backbone for innovation rather than just the plumbing of IT. From faster connections in data centers to smarter, more reliable edge and cloud networks, enterprises are rethinking how to keep pace with the growing demands of artificial intelligence.
How companies are reimagining AI-powered networking was a major focus of theCUBE’s coverage of the Networking for AI Summit event. AI may be fueled by software and GPUs, but it only scales with the robust networks now driving growth across the industry.
“All across that networking ecosystem, we’re seeing growth,” Zeus Kerravala of ZK Research, a division of Kerravala Consulting, told theCUBE during the event. “If you think of the typical hockey stick adoption, we’re just kind of exiting the blade, and we’re about to hit the sharp rise up.”
During the event, experts emphasized that AI-powered networking is emerging as the backbone for agentic AI adoption, scalable security and simplified operations. TheCUBE’s coverage featured interviews with leaders who are shaping the next era of AI-powered networking.
Here are three key insights you may have missed from theCUBE’s coverage:
Insight #1: AI-powered networking is becoming the foundation for scale and innovation.
As AI grows, networks are shifting from basic connectivity to smart platforms that drive scale, security and innovation. For Cisco Systems Inc., data center networks anchored in Ethernet, automation and security show how AI-powered networking is becoming the fabric that makes AI workloads enterprise-ready.
A massive build-out of AI infrastructure is taking place, and the network sits at the center because it sees everything. That infrastructure is also becoming more distributed, shifting from training-centric environments toward inferencing, according to Murali Gandluru (pictured), vice president of product management for data center networking at Cisco.
“Cisco’s approach to AI networking takes on whatever we have been doing from the beginning of the data center switching paradigm itself, which is silicon, differentiated silicon, systems, software, optics, the complete integrated approach,” he said during the event. “It’s more essential than ever because we are delivering AI infrastructure at scale, not just from the perspective of bandwidth, but also from the perspective of telemetry.”
Here’s theCUBE’s complete interview with Murali Gandluru:
Extreme Networks Inc., meanwhile, has been looking to extend the vision to the edge with a cloud-native approach that turns the entire network into an intelligent platform for innovation. The rapid pace of AI adoption is raising urgent questions about what it truly means to extend AI across the entire infrastructure, according to Dan DeBacker, senior vice president of product management at Extreme.
“There’s so much data,” he said during the event. “That data has not really been harvested and brought back in to create information that can help a company make decisions to simplify things, automate things and all those types of activities that AI is now able to drive across the infrastructure.”
Extreme emphasizes a holistic approach in which AI is integrated across the entire network, from the data center to the campus to the branch, rather than focusing on siloed AI for wireless or wide area networks alone. AI must be viewed from a full network perspective, not just a single area, according to DeBacker.
“You can’t just look in one little area,” he told theCUBE. “What about the WAN? What about the branch? What about the wired network? What about what happens in the data center? All of these types of things. The pervasiveness of AI is its strength, and so these edge networks become critical in gathering all of that information.”
Here’s theCUBE’s complete interview with Dan DeBacker:
Insight #2: AI networking demands precision and autonomous operations.
Meeting AI’s demands means replacing fragmented tools with precise full-stack networking and intelligent operations that move IT from reactive to autonomous. For Meter Inc., it means tackling AI-era networking by building a full-stack platform from the ground up.
AI-driven networking promises faster networks that can serve everyone from global enterprises to schools on tight budgets. But the challenges are significant, which is why companies have avoided a full-stack approach until now, according to Anil Varanasi, chief executive officer of Meter.
“As ludicrous as it sounds, in these last 40 years of networking, before Meter, nobody has built the entire stack from the ground up with hardware platforms that are tied together, [a] single firmware image, operating systems, data pipelines, [application programming interfaces] and applications for the entire stack,” Varanasi told theCUBE during the event.
Here’s theCUBE’s complete interview with Anil Varanasi:
Fabrix.ai Inc., meanwhile, is reimagining IT operations through AgentOps. AgentOps is a new paradigm that reduces friction, automates decision-making and reimagines IT operations for an era of intelligent autonomy, according to Shailesh Manjrekar, chief AI and marketing officer of Fabrix.ai.
“You’re dealing with time-sensitive data, you’re dealing with alerts, you’re dealing with incidents,” he told theCUBE. “Many of the off-the-shelf agentic frameworks don’t apply to this kind of paradigm. The platform we have is actually built ground up to cater to operational use cases.”
Here’s theCUBE’s complete interview with Shailesh Manjrekar; Rached Blili, distinguished engineer, office of the chief technology officer at Fabrix.ai; and Kerravala:
Insight #3: Autonomous AI networks require trust, security and scale.
As enterprises race to harness AI, the network is evolving into a trusted, secure and scalable foundation. Meeting those demands requires efforts like those of Juniper Networks Inc. and Hewlett Packard Enterprise Co., which are uniting Ethernet innovation with rack-scale compute expertise with an aim to strengthen AI-powered networking at scale.
The two companies have sought to deliver pre-tested systems that create a stronger foundation for AI-ready networking. Both AI training and inferencing are widely distributed computing problems, according to Praful Lalchandani, vice president of product – data center platforms and AI solutions – at Juniper.
“Most likely for enterprises, they’re not going to be building large language models,” he said during the event. “Most likely, they’re going to be consumers of those models. My point is that training or inferencing is highly distributed, and that means the network needs to be highly performant.”
Performance is critical, but security must be built in from the start, since multi-tenancy in AI services introduces significant risk. AI differs from standard enterprise systems in an important way: the observability of the data itself, according to Jon Green, senior sales engineer at Juniper.
“You think about an AI model that’s been trained, that now I’m going to do inferencing through because of things like tokenization,” Green told theCUBE. “The data — the programming of this — is going to be a really large vector space. It’s going to be a bunch of numbers, and there’s no real way to know what those numbers actually mean.”
Enterprises are shifting to large-scale AI production deployments. That places new demands on the data center and especially the network fabric, according to Bharath Ramesh, head of product, AI factory solutions, at HPE.
“What we’re seeing is when clients navigate this transition, they realize it’s a huge chasm between building and operating it in the cloud and building and operating an AI platform within your premise,” Ramesh said during the event.
Here’s theCUBE’s complete interview with Bharath Ramesh, Praful Lalchandani and Jon Green:
As these infrastructure challenges come into focus, HPE is also looking ahead to building trust in AI-driven operations and advancing toward truly autonomous networks. Enterprises increasingly see AI as inevitable in networking, with familiar consumer technologies such as ChatGPT and self-driving cars accelerating trust and adoption, according to Bob Friday, chief AI officer at HPE Networking.
“If you talk to the average IT person when they walk in in the morning, it used to be a very reactive job,” he told theCUBE. “There’s always a new set of problems they’re walking into. I think that when you talk to customers who have actually transitioned to cloud AI operations, you’ll hear that they don’t face the same problems every day.”
Here’s theCUBE’s complete interview with Bob Friday:
To watch more of theCUBE’s coverage of the Networking for AI Summit event, here’s our complete event video playlist:
Photo: SiliconANGLE