Cisco and NVIDIA aim to simplify AI infrastructure
Cisco is offering servers, Ethernet networking, and professional services along with NVIDIA software and GPUs to support enterprise AI.
Cisco Validated Designs (CVDs) offer proven configurations that “significantly simplify the process of deploying infrastructure for AI,” according to Cisco.
NVIDIA continues to have a favourable view of InfiniBand, but is also looking to diversify its options, according to one analyst.
It doesn’t take a genius to recognise that building infrastructure for AI requires careful planning.
“AI is not just any workload. It requires a significant amount of data and compute, and if you get it wrong, you waste a lot of your data scientists’ time and rack up substantial costs for your enterprise,” said Zeus Kerravala, founder and principal analyst at ZK Research.
The industry has rushed to disaggregate network and compute elements, which has made it difficult to pull together all the pieces needed to ensure the hardware and software can handle the required workloads, according to Kerravala.
Cisco and NVIDIA are collaborating to address those challenges. At the Cisco Live conference in Amsterdam on Tuesday, the two companies announced a partnership to help businesses deploy and manage secure AI infrastructure more easily. The collaboration pairs NVIDIA AI Enterprise software, which supports the development and deployment of advanced AI and generative AI workloads, with Cisco’s Ethernet networking and servers equipped with NVIDIA GPUs. ClusterPower, a European cloud service provider, is already using the technology for AI- and ML-driven data centre operations, according to the companies’ joint statement.
Companies can deploy the hardware and software in both data centres and edge locations to place compute and GPU capacity close to where data is generated, according to Jeremy Foster, senior vice president and general manager of Cisco Compute.
The two companies are embedding NVIDIA’s latest Tensor Core GPUs in Cisco’s M7-generation UCS rack and blade servers, and offering jointly validated reference designs through Cisco Validated Designs (CVDs), including CVDs for FlexPod and FlashStack for generative AI inferencing. The CVDs, which draw on technologies from partners such as Pure Storage, NetApp, and Red Hat, will simplify the process of building infrastructure for AI, Cisco said in a blog post.
“Customers know they are using a validated design that has been tested, and that Cisco can also support that validated design,” Foster added. “We understand how all these components work and fit together.”
But what about InfiniBand?
The partnership with Cisco on Ethernet does not mean NVIDIA is losing confidence in its own InfiniBand networking, according to Ron Westfall, research director at Futurum Group. InfiniBand is a $10 billion business for NVIDIA, meeting the latency and bandwidth needs of key customers such as Microsoft. But NVIDIA is hedging its bets: major cloud providers including AWS, Azure, Google Cloud, and Meta are developing their own AI processors, while industry-wide efforts such as the Ultra Ethernet Consortium are focused on handling AI workloads.
Cisco and NVIDIA also face competitors, including Arista, Juniper, HPE Aruba, Broadcom, VMware, Dell, and Extreme. NVIDIA, meanwhile, has rivals in AMD’s Instinct MI300 and Intel’s GPU Max series processors, along with the growing presence of hyperscaler-designed GPUs, according to Westfall.