By delivering the structure, tools and workflows needed to systematically develop, deploy and scale artificial intelligence, the Dell AI Factory creates a complete end-to-end ecosystem.
Since most companies struggle to piece together servers, storage, networking and software for AI, the Dell AI Factory removes that complexity with pre-integrated infrastructure — spanning developer workstations, high-performance training servers and scalable storage for massive datasets, according to Mary Kiernan (pictured), director of gen AI global consulting at Dell Technologies Inc.
“An AI factory for a [Proof of Concept] is a very self-contained thing,” Kiernan said. “When you start getting into what does this need to look like in order to provide a cloud-like experience on my data center floor, then it becomes observability tools, it becomes orchestration tools, it becomes security processes, some of which is tooling and some of which is governance. When you think of all of these things together, it can be a phenomenal amount of choice.”
Kiernan spoke with theCUBE’s Dave Vellante at theCUBE + NYSE Wired: AI Factories – Data Centers of the Future event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how the Dell AI Factory is driving innovation by easing AI adoption for enterprises.
Dell AI Factory: Streamlining and scaling enterprise AI
The Dell AI Factory matters because it democratizes enterprise AI by removing complexity, integration hurdles and high costs while ensuring security, flexibility and scalability. Its reference architectures, validated designs and AI blueprints cut down on trial and error, accelerating adoption and value creation, according to Kiernan.
“Some of our customers are at a maturity level right now where they come to us already kind of knowing some of the choices they want in those tools, in which case it’s a reverse engineering exercise,” she said. “In other cases, we do try to talk to the customers about what it is that they’re trying to achieve. The use cases, the outcomes, the business outcomes feed into and help us define what the rest of those tools might look like.”
AI initiatives demand substantial investment in data, infrastructure and talent. A proof of concept lets organizations validate whether an AI solution can effectively address a specific business challenge before committing major resources. Enterprises increasingly start by defining the business use cases they want to enable and then determining how to integrate their data into the models that power those applications, Kiernan pointed out.
“The reason most customers go POC is because there is an immediate sort of return on the investment,” she said. “They’re able to see a use case. They’re able to see an application, they’re able to see something that speaks to them. When they’re able to see that use case, they’re able to generate more internal interest, and that internal interest is what sparks the data conversation.”
Enterprises are increasingly rethinking AI deployment, shifting workloads on-premises, adopting hybrid models or repatriating them from the cloud. The trend is driven largely by security, cost, performance, compliance and control considerations, according to Kiernan.
“In some cases it can be either,” she said. “Some of our customers go to POC first. Other customers, they’ve made a significant amount of headway in some of the cloud providers or in some of the CSPs or their R&D organizations, their data science organizations. They have been doing machine learning for a while and now they’re building agents. Now, because of security or cost reasons or IP, they want to start bringing that back on-prem. It really could be either.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of theCUBE + NYSE Wired: AI Factories – Data Centers of the Future event:
Photo: SiliconANGLE