Cisco has unveiled its first-ever server built specifically for artificial intelligence (AI) and machine learning (ML) workloads.
The new Cisco UCS server is designed to speed up deep learning, a compute-intensive form of machine learning that uses neural networks and large data sets to train computers for complex tasks. Packed with powerful NVIDIA GPUs, it is built to accelerate available machine learning software stacks.
"Over the next few years, apps powered by artificial intelligence and machine learning will become mainstream in the enterprise. While this will solve many complex business issues, it will also create new challenges for IT," said Roland Acra, SVP and GM for Cisco's Data Center Business Group. "This addition to the Cisco UCS lineup will power AI initiatives across a wide range of industries. Our early-access customers in the financial sector are exploring ways to improve fraud detection and enhance algorithmic trading. Meanwhile in healthcare, they're interested in better insights and diagnostics, improving medical image classification, and speeding drug discovery and research."
Artificial intelligence (AI) and machine learning (ML) are opening up new ways for enterprises to solve complex problems. But they will also have a profound effect on the underlying infrastructure and processes of IT. According to Gartner, "only 4% of CIOs worldwide report that they have AI projects in production." That number will grow dramatically over the next few years. And when it does, IT will struggle to manage new workloads, new traffic patterns, and new relationships within the business.
With the addition of the Cisco UCS C480 ML, Cisco wants to offer a complete range of computing options designed for each stage of the AI and ML lifecycle, spanning from data collection and analysis near the edge, to data preparation and training in the data center, to the real-time inference that is at the heart of AI.
Cisco is taking an ecosystem approach to AI. It is embracing containers and multicloud computing models to make it easier to deploy open source software at scale, no matter where apps live. It is validating machine learning environments and software such as Anaconda, Kubeflow, and solutions from Cloudera and Hortonworks on the new server. UCS customers who run Kubeflow on top of Kubernetes will find it easy to deploy AI workloads directly to Google Kubernetes Engine, taking advantage of both on-prem and cloud ML capabilities.
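To make the Kubeflow workflow above concrete, here is a minimal sketch of what a Kubeflow training job (a TFJob custom resource) might look like when expressed in Python; the job name, container image, and replica counts are hypothetical placeholders, and in practice the resulting manifest would be submitted to a Kubernetes cluster (on-prem UCS or Google Kubernetes Engine) via `kubectl` or the Kubernetes API.

```python
# Sketch of a Kubeflow TFJob manifest built as a Python dict.
# The job name and container image below are hypothetical examples,
# not references to any real Cisco or Google artifact.

def make_tfjob(name, image, workers=2, gpus_per_worker=1):
    """Build a TFJob custom resource requesting NVIDIA GPUs per worker."""
    return {
        "apiVersion": "kubeflow.org/v1",
        "kind": "TFJob",
        "metadata": {"name": name},
        "spec": {
            "tfReplicaSpecs": {
                "Worker": {
                    "replicas": workers,
                    "template": {
                        "spec": {
                            "containers": [{
                                "name": "tensorflow",
                                "image": image,
                                # GPU request maps to the server's NVIDIA GPUs
                                # on-prem, or to GPU node pools in the cloud.
                                "resources": {
                                    "limits": {"nvidia.com/gpu": gpus_per_worker}
                                },
                            }]
                        }
                    },
                }
            }
        },
    }

job = make_tfjob("mnist-train", "example.com/mnist:latest", workers=4)
print(job["spec"]["tfReplicaSpecs"]["Worker"]["replicas"])  # 4
```

Because the same manifest format is understood by any conformant Kubernetes cluster running Kubeflow, the identical job description can target on-prem hardware or a managed cloud service without modification.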