RISC-V architecture CPUs enter supercomputing with impressive performance, emerging as a serious rival to x86 and Arm CPUs!
A team of European university students has assembled the first RISC-V supercomputer capable of balancing power consumption and performance.
More importantly, it demonstrates the huge potential of RISC-V in high-performance computing, providing an opportunity for Europe to wean itself off American chip technology.
The “Monte Cimone” cluster won’t be handling large-scale weather simulations or similar workloads anytime soon; it is an experimental machine.
Built by people at the University of Bologna and CINECA, Italy’s largest supercomputing center, the device is a six-node cluster designed to showcase various elements of HPC performance in addition to floating-point capabilities.
It uses SiFive’s Freedom U740 system-on-chip, a 2020 RISC-V SoC with five 64-bit RISC-V CPU cores – four U74 application cores and an S7 system-management core – plus 2MB of L2 cache, Gigabit Ethernet, and various peripherals and hardware controllers.
The SoC runs at about 1.4GHz. Here are Monte Cimone’s components and speeds:
- Six two-board servers with a form factor of 4.44 cm (1U) high, 42.5 cm wide, and 40 cm deep. Each board follows the industry-standard Mini-ITX form factor (170mm × 170mm);
- Each motherboard features a SiFive Freedom U740 SoC and 16GB of 64-bit DDR4 memory running at 1866 MT/s, as well as a PCIe Gen 3 x8 bus running at 7.8 GB/s, a Gigabit Ethernet port, and a USB 3.2 Gen 1 interface;
- Each node has an M.2 M-key expansion slot occupied by a 1TB NVMe 2280 SSD that hosts the operating system. A microSD card is inserted on each board for UEFI boot;
- Two 250 W power supplies are integrated inside each node to support hardware and future PCIe accelerators and expansion boards.
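The quoted link speeds can be sanity-checked with a little arithmetic. The sketch below (plain Python, no project code assumed) recomputes peak PCIe Gen 3 x8 bandwidth from the 8 GT/s per-lane signalling rate with 128b/130b encoding, and peak DDR bandwidth from 1866 MT/s on a 64-bit bus:

```python
def pcie_gen3_bandwidth_gbs(lanes: int) -> float:
    """Usable PCIe Gen 3 bandwidth in GB/s: 8 GT/s per lane, 128b/130b encoding."""
    raw_gt_s = 8.0                          # Gen 3 raw signalling rate per lane
    encoding = 128 / 130                    # 128b/130b line-coding efficiency
    return lanes * raw_gt_s * encoding / 8  # bits -> bytes

def ddr_bandwidth_gbs(mt_s: float, bus_bits: int) -> float:
    """Peak DDR bandwidth in GB/s: transfers per second times bytes per transfer."""
    return mt_s * (bus_bits / 8) / 1000

print(f"PCIe Gen 3 x8: {pcie_gen3_bandwidth_gbs(8):.1f} GB/s")  # ~7.9, matching the quoted 7.8 GB/s figure
print(f"DDR-1866 x64:  {ddr_bandwidth_gbs(1866, 64):.1f} GB/s")
```

The PCIe result works out to about 7.88 GB/s, consistent with the 7.8 GB/s figure above, while the single-channel DDR interface peaks near 14.9 GB/s.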
The Freedom SoC board is essentially SiFive’s HiFive Unmatched board.
Two of the six compute nodes are equipped with InfiniBand Host Channel Adapters (HCAs) of the kind used by most supercomputers.
The goal is to deploy 56 Gb/s InfiniBand so that RDMA can deliver high-performance I/O.
This is ambitious for a young architecture, and not without some hiccups.
“The vendor currently only supports PCIe Gen 3 lanes,” the cluster team wrote. “The first experimental results show that the kernel is able to recognize the device driver and load a kernel module to manage the Mellanox OFED stack.
We were unable to use the full RDMA capabilities of the HCA due to unidentified software-stack and kernel-driver incompatibilities.
Nonetheless, we were able to successfully run IB ping tests between two boards, and between one board and one HPC server, showing that full InfiniBand support is possible.”
“The HPC software stack turned out to be easier than one might think,” the team added. “We ported all the essential services needed to run HPC workloads in production on Monte Cimone, namely NFS, LDAP, and the SLURM job scheduler.”
Porting all necessary software packages to RISC-V is relatively straightforward.
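The services the team names map onto a conventional SLURM deployment. As a rough illustration, a minimal `slurm.conf` fragment for a six-node cluster of this shape might look like the sketch below (the hostnames, node names, and memory figure are hypothetical; the article does not describe Monte Cimone’s actual configuration):

```
# Hypothetical minimal slurm.conf sketch for a six-node RISC-V cluster
SlurmctldHost=login0
# Four U74 application cores per node; ~15 GB usable of the 16GB DDR4
NodeName=mc[1-6] CPUs=4 RealMemory=15000 State=UNKNOWN
PartitionName=riscv Nodes=mc[1-6] Default=YES MaxTime=INFINITE State=UP
```

Nothing here is RISC-V-specific, which is the point: once the daemons compile, standard cluster tooling carries over largely unchanged.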
This cluster will pave the way for further testing of the RISC-V platform and of its ability to play nicely with other architectures, an important consideration given that we are unlikely to see an exascale RISC-V system for at least the next few years.
Now, even Intel is eyeing the future of RISC-V.