The compute nodes are connected with high-speed Mellanox InfiniBand FDR (56 Gbit/s) equipment. To reduce cost while maintaining good inter-node bandwidth, the InfiniBand switches are connected in a 3x3x4 3D torus, as shown in the figure below.

Each box in the figure, e.g. s51 (1,2,1), corresponds to an InfiniBand switch.
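In a 3D torus each switch links to exactly six neighboring switches, with the links wrapping around at the edges of each dimension. The sketch below illustrates this wrap-around for a 3x3x4 grid; the coordinate convention (0-based triples, matching labels such as (1,2,1) in the figure) is an assumption for illustration, not taken from the source.

```python
from itertools import product

DIMS = (3, 3, 4)  # the cluster's 3x3x4 torus of InfiniBand switches


def neighbors(coord, dims=DIMS):
    """Return the six wrap-around neighbors of a switch in the 3D torus.

    Stepping +/-1 along each axis and taking the result modulo the axis
    length models the torus links that wrap from one edge to the other.
    """
    out = []
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % size  # torus wrap-around
            out.append(tuple(n))
    return out


# Every switch in the 3x3x4 torus has exactly six distinct inter-switch links.
all_switches = list(product(*(range(d) for d in DIMS)))
assert all(len(set(neighbors(c))) == 6 for c in all_switches)
```

The modulo arithmetic is what distinguishes a torus from a plain mesh: a switch on the "edge" of the grid still has six neighbors, which keeps the maximum hop count between any two switches low without needing a full fat-tree.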



  • Compute nodes: 584
  • CPU Cores: 14,016
  • Performance:
    • Theoretical max: 766.6 TFlop/s (entire system)
    • Theoretical max: 560.6 TFlop/s (CPUs only)
    • Linpack performance: 462.4 TFlop/s (CPUs only)
  • Memory: 64.5 TB RAM
  • Interconnect: InfiniBand FDR (56 Gbit/s)
  • Operating system: Linux (CentOS 7)
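The figures in the list are internally consistent; a quick back-of-the-envelope check (derived only from the numbers above) shows the per-node core count, the per-core peak, and the Linpack efficiency:

```python
nodes = 584
cores = 14_016
peak_cpu_tflops = 560.6   # theoretical max, CPUs only
linpack_tflops = 462.4    # measured Linpack, CPUs only

cores_per_node = cores // nodes                   # 24 cores per node
gflops_per_core = peak_cpu_tflops * 1e3 / cores   # ~40 GFlop/s per core
efficiency = linpack_tflops / peak_cpu_tflops     # ~82% of theoretical peak

print(cores_per_node, round(gflops_per_core, 1), round(efficiency, 2))
```

A Linpack result around 82% of theoretical peak is typical for a CPU-only x86 cluster of this size.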


To cover as many users’ requirements as possible, our cluster is built from three kinds of nodes. All nodes share the same base configuration:
