
Computex 2024: CXL memory expansion server to advance AI and HPC

MSI has introduced servers based on 4th Gen AMD EPYC processors at Computex 2024. The servers also feature CXL (Compute Express Link) technology and the DC-MHS architecture for scalable cloud applications.

The S2301 CXL memory expansion server is intended for data-intensive workloads and in-memory databases. The 2U server uses dual-socket AMD EPYC CPUs and supports the CXL 2.0 standard, with 24 DDR5 DIMM slots and two PCIe 5.0 x16 slots. There are eight E3.S 2T drive bays for CXL memory expansion up to 8TB, plus eight E3.S 1T bays for NVMe storage.

The company also announced the G4101, a 4U, four-GPU, rack-mount workstation for AI. It supports a single liquid-cooled AMD EPYC 9004 processor, 12 DDR5 RDIMM slots, four PCIe 5.0 x16 slots for triple-slot GPUs, and 12 2.5-inch U.2 NVMe/SATA drive bays. The design combines airflow spacing with closed-loop liquid cooling for thermal management.

Another introduction is the CX271-S4056. This 2U all-flash server has two double-wide slots for GPU cards and features a single-socket AMD EPYC 9004 processor, two OCP NIC 3.0 mezzanine slots, and up to 24 2.5-inch PCIe 5.0 U.2 NVMe drive bays to support applications such as AI inferencing.

There are also two multi-node servers for data centres, the CD360-S4051-X2 and CD260-S4051-X4. The CD360-S4051-X2 is a 3U, two-node server platform for high-TDP server processors. Each node includes one AMD EPYC processor, 12 DDR5 RDIMM slots, two optional double-wide PCIe 5.0 x16 slots for GPU cards, one OCP NIC 3.0 mezzanine slot, two NVMe M.2 ports, and three 2.5-inch U.2 NVMe drive bays.

The CD260-S4051-X4 is a 2U, four-node server platform with the same per-node specifications, except without the PCIe 5.0 slots.

 

