As industries continue to push the boundaries of AI, the UCS C885A is engineered to deliver the raw computational power needed to support large-scale AI model training, fine-tuning, and inferencing. Built on the NVIDIA HGX architecture, this server is poised to become a key AI platform for sectors ranging from healthcare to financial services.

As AI technologies advance, infrastructure requirements are becoming more complex. Industry forecasts point to double-digit annual growth in AI server spending through 2028, spanning private clouds, edge computing, and on-premises deployments. The Cisco UCS C885A M8 is poised to play a key role in this surge, offering exceptional performance, scalability, and flexibility for AI workloads.

The Cisco UCS C885A M8

Key hardware features and specifications

The Cisco UCS C885A M8 is built from the ground up to support the intense computational needs of modern AI applications. Key specifications include:

  • Form Factor: An 8U, 19" EIA rack unit, providing the ideal balance of size and scalability for data centers.
  • GPUs: Supports up to 8 NVIDIA H100 or H200 Tensor Core GPUs (HGX 8-GPU), designed to handle extensive AI model training, inferencing, and real-time data processing at scale (a quick verification sketch follows this list).
  • CPUs: Powered by dual AMD Genoa (400W) or Turin (500W) processors, offering up to 96 cores and high clock speeds of up to 3.7 GHz.
  • Memory: Equipped with 24 DDR5 RDIMM slots supporting memory speeds up to 6000 MT/s, providing ample capacity for memory-intensive applications.
  • Storage: The server offers flexibility with 1 PCIe M.2 NVMe boot device, up to 16 PCIe5 x4 2.5" U.2 NVMe SSDs for data caching, and optional SAS/SATA storage with RAID card support.
  • Network: Up to 8 PCIe5 x16 HHHL slots for East-West NVIDIA BlueField-3 SuperNICs and 5 PCIe5 x16 FHHL slots for North-South NVIDIA BlueField-3 DPUs, delivering high network bandwidth and low latency.
  • Cooling & Power: Designed for reliability and high availability, the server features 12 hot-swappable fans for system cooling and redundant power supplies (up to 6x 54V 3kW), supporting N+1 redundancy.
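As a quick sanity check of the GPU complement listed above, the following is a minimal sketch that enumerates the NVIDIA GPUs visible to PyTorch on the node; it assumes a CUDA-enabled PyTorch build is installed and that all eight GPUs are exposed to the operating system.

```python
# Minimal sketch: enumerate the NVIDIA GPUs visible to PyTorch on an 8-GPU node.
# Assumes a CUDA-enabled PyTorch build; adjust the expected count to your configuration.
import torch

def report_gpus(expected: int = 8) -> None:
    count = torch.cuda.device_count()
    print(f"Visible CUDA devices: {count} (expected {expected})")
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        mem_gb = props.total_memory / 1024**3
        print(f"  GPU {i}: {props.name}, {mem_gb:.0f} GB memory")

if __name__ == "__main__":
    report_gpus()
```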

Optimized for AI at scale use cases

The UCS C885A has been specifically engineered to handle AI workloads at scale, making it ideal for the full AI lifecycle across data-intensive industries such as healthcare & life sciences, financial services, manufacturing & automotive, and service providers. The massive amounts of data these organizations generate are essential for training and refining AI models, driving innovation, and improving decision-making. Here's how the UCS C885A can help:

Training large AI models: The UCS C885A accelerates the training of large AI models, which require massive computational resources to process billions of parameters efficiently. Its HGX 8-GPU configuration of NVIDIA H100 or H200 GPUs provides that capacity.
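To make the training workflow concrete, below is a minimal, illustrative sketch of single-node, multi-GPU data-parallel training with PyTorch DistributedDataParallel across eight GPUs; the toy model, synthetic data, and hyperparameters are placeholders, not a reference configuration for this server.

```python
# Illustrative sketch: single-node, 8-GPU data-parallel training with PyTorch DDP.
# Launch with: torchrun --nproc_per_node=8 train_ddp.py
# The model, data, and hyperparameters below are placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main() -> None:
    dist.init_process_group(backend="nccl")        # NCCL carries GPU-to-GPU (East-West) traffic
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and synthetic data standing in for a real workload.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    data = TensorDataset(torch.randn(4096, 1024), torch.randn(4096, 1024))
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)                   # reshuffle data shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()                        # gradients are all-reduced across the 8 GPUs
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```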

Model fine-tuning: Fine-tuning models based on foundation AI models is critical for customizing AI solutions for specific industries. The server's GPU capabilities make this process faster and more effective.
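As an illustration of this fine-tuning step, the sketch below attaches a LoRA adapter to a foundation model using the Hugging Face transformers and peft libraries; the model name, target modules, and adapter rank are placeholder assumptions, not recommendations specific to this server.

```python
# Illustrative sketch: parameter-efficient fine-tuning (LoRA) of a foundation model.
# Model name, target modules, and LoRA settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-3.1-8B"             # placeholder foundation model
tokenizer = AutoTokenizer.from_pretrained(base_model)   # used to prepare the domain dataset
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

lora = LoraConfig(
    r=16,                                          # adapter rank (placeholder)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],           # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()                 # only the adapter weights are trainable

# From here, train with a standard loop or transformers.Trainer on the
# industry-specific dataset, then merge or serve the adapter.
```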

Inferencing at scale: With 8 GPUs and state-of-the-art processing power, the UCS C885A can handle large-scale AI inferencing tasks, helping ensure that AI applications can provide real-time insights with minimal latency.
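To illustrate serving at this scale, here is a minimal sketch of batched inference with the open-source vLLM library, sharding a model across all eight GPUs with tensor parallelism; the model name, prompts, and sampling settings are placeholders.

```python
# Illustrative sketch: batched LLM inference sharded across 8 GPUs with vLLM.
# The model name, prompts, and sampling parameters are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",     # placeholder model
    tensor_parallel_size=8,                        # shard across the HGX 8-GPU complex
)
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)

prompts = [
    "Summarize the key risks in this loan application: ...",
    "Draft a triage note for the following patient history: ...",
]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```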

To deliver optimal performance, the UCS C885A is equipped with advanced networking, power, and cooling systems featuring high-bandwidth networking that supports East-West and North-South data traffic. Each server includes NVIDIA NICs or SuperNICs to accelerate AI networking performance, as well as NVIDIA BlueField-3 DPUs to accelerate GPU access to data and enable robust, zero-trust security.

The UCS C885A also offers stable power delivery with up to six 3kW power supplies in an N+1 redundancy setup, maximizing uptime and reliability. Its efficient cooling system, including 12 system fans and 4 SSD fans, maintains optimal temperatures to help components perform at their best under heavy loads.

The UCS C885A supports NVIDIA AI Enterprise, an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines AI development and deployment. This means faster time to value, consistent performance and reduced risk for AI projects.

For management and monitoring, the UCS C885A supports Cisco Intersight with a roadmap of features including inventory management, power operations, KVM management, and firmware management. These capabilities will allow users to track hardware components, control power states, access the server remotely, and ensure the system remains up to date.
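While Intersight provides the managed experience described above, the sketch below shows the general pattern of pulling inventory and power state from a server's management controller over the standard DMTF Redfish API; the endpoint, credentials, and the assumption that the controller exposes Redfish are illustrative, and should be confirmed against the product documentation.

```python
# Illustrative sketch: query a server management controller over the DMTF Redfish API
# for basic inventory and power state. Endpoint and credentials are placeholders.
import requests

BMC = "https://bmc.example.com"                    # placeholder management controller address
AUTH = ("admin", "password")                       # placeholder credentials

def get(path: str) -> dict:
    resp = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

systems = get("/redfish/v1/Systems")
for member in systems.get("Members", []):
    system = get(member["@odata.id"])
    print(system.get("Model"), system.get("SerialNumber"), system.get("PowerState"))
```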

Competitive advantage in the market

The UCS C885A M8 stands out in the competitive dense GPU server market in several ways. It is Cisco's first entry into a new dedicated AI server portfolio and its first eight-way accelerated computing system built on the NVIDIA HGX platform.

In addition, the UCS C885A M8 incorporates high-performance AMD CPUs, including the latest 4th and 5th Generation EPYC™ processors, which provide a balance of compute and memory performance crucial for demanding AI tasks. The UCS C885A M8 is also designed as a comprehensive AI ecosystem, spanning hardware and software integration with tools such as MLPerf, and supporting the complete AI lifecycle from initial model training to final deployment.

Call to action

The Cisco UCS C885A M8 is a game-changing dense GPU server built to handle the most demanding AI workloads. Its combination of powerful GPUs, high-performance CPUs, large memory capacity and advanced networking capabilities makes it the ideal solution for organizations looking to scale their AI infrastructure. With industries increasingly relying on AI to drive innovation, the UCS C885A is set to be a cornerstone of AI-driven transformation across sectors.

For more information on integrating the Cisco UCS C885A M8 into your AI infrastructure or to schedule a consultation, visit our AI Proving Ground or speak to your WWT account team.
