Swedish National Infrastructure for Computing

NSC, Linköping University


TETRALITH


The Tetralith installation will take place in two stages. The first stage will have a capacity exceeding the current computing capacity of Triolith; NSC plan to have it available from July 1, 2018. The second stage will be installed after Triolith has been decommissioned and dismantled, and NSC plan to have the entire Tetralith in operation by November 2018. Existing centre storage will remain and be reconnected to Tetralith. Tetralith will be running a CentOS 7 version of the NSC Cluster Software Environment.

Tetralith will consist of 1892 servers. Each server has two Intel Xeon Gold 6130 processors providing 32 cores per server, for a total of 60,544 CPU cores. 1832 of the servers are equipped with 96 GiB of primary memory and 60 servers with 384 GiB. All servers are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage.

The part of Triolith available to SNIC projects was downsized from 1536 nodes to 960 nodes on April 1st, 2017, as a result of the delay in funding a replacement system. Reducing the number of nodes allows the rest of the system to keep running until mid-2018.

Tetralith is running a CentOS 7 version of the NSC Cluster Software Environment. This means that most things will be very familiar to Triolith users.

You will still use Slurm (e.g. sbatch, interactive, ...) to submit your jobs. ThinLinc is available on the login nodes. Applications will still be selected using "module".
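As a rough sketch, a batch job could be submitted with a script along the following lines. The project account, module name, and program are placeholders for illustration, not actual Tetralith settings; the Slurm directives themselves are standard:

    #!/bin/bash
    #SBATCH -J myjob                 # job name
    #SBATCH -N 2                     # number of nodes
    #SBATCH --ntasks-per-node=32     # one task per core on a 32-core node
    #SBATCH -t 01:00:00              # wall-time limit (hh:mm:ss)
    #SBATCH -A snic2018-x-yy         # project account (placeholder)

    module load mycompiler/1.0       # placeholder module name
    srun ./my_mpi_program            # launch one MPI rank per task

The script would be submitted with "sbatch myjob.sh", and "squeue -u $USER" shows its status; these are plain Slurm commands rather than anything Tetralith-specific.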

All Tetralith compute nodes have 32 CPU cores. There will eventually be 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node will have a local SSD disk where applications can store temporary files (approximately 200 GiB per thin node, 900 GiB per fat node).
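A job that wants the local SSD might stage its data there, as in the sketch below. The "-C fat" feature name and the $SNIC_TMP scratch variable are assumptions made for illustration; check the NSC user guides for the authoritative names:

    #!/bin/bash
    #SBATCH -N 1
    #SBATCH --exclusive
    #SBATCH -t 00:30:00
    #SBATCH -C fat                   # hypothetical feature name for a 384 GiB node

    # Stage input to the node-local SSD, run there, and copy results back.
    # $SNIC_TMP is assumed to point at the per-job scratch directory.
    cp input.dat $SNIC_TMP/
    cd $SNIC_TMP
    srun ./my_program input.dat
    cp output.dat $SLURM_SUBMIT_DIR/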

All Tetralith nodes will be interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage. The Omni-Path network will work in a similar way to the FDR Infiniband network in Triolith (e.g. it is still a fat-tree topology).

The hardware will be delivered by ClusterVision B.V.

The servers used are Intel HNS2600BPB compute nodes, hosted in the 2U Intel H2204XXLRE chassis and equipped with two Intel Xeon Gold 6130 processors, for a total of 32 CPU cores per compute node.

For more information about the systems, storage, and available software, visit the NSC web pages, especially the user guides.