Swedish National Infrastructure for Computing

NSC, Linköping University


TRIOLITH

Triolith will be replaced with a new system named Tetralith. Awarded Triolith allocations will be transferred to Tetralith and users will be migrated. Details will be announced later.

The Tetralith installation will take place in two stages. The first stage will have a capacity that exceeds the current computing capacity of Triolith; NSC plans to have it available from July 1, 2018. The second stage will be installed after Triolith has been decommissioned and dismantled, and NSC plans to have the entire Tetralith in operation by November 2018. Existing centre storage will remain and be reconnected to Tetralith. Tetralith will run a CentOS 7 version of the NSC Cluster Software Environment.

Tetralith will consist of 1892 servers. Each server has two Intel Xeon Gold 6130 processors, providing 32 cores per server. 1832 of the servers are equipped with 96 GiB of primary memory and the remaining 60 servers with 384 GiB. All servers are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage.
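In total, this gives 1892 × 32 = 60,544 cores, more than twice the core count of the full Triolith.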

The part of Triolith available to SNIC projects was downsized from 1536 nodes to 960 nodes on April 1, 2017, as a result of the delay in funding a replacement system. By reducing the number of nodes, the remaining part of the system can be kept running until mid-2018.

Triolith (triolith.nsc.liu.se) is a capability cluster with a total of 24320 cores and a peak performance of 428 Tflop/s. It is equipped with a high-performance interconnect for parallel applications. The operating system is CentOS 6.x x86_64. Each of the 1520 HP SL230s compute servers is equipped with two Intel E5-2660 (2.2 GHz Sandy Bridge) processors with 8 cores each, i.e. 16 cores per compute server. 56 of the compute servers have 128 GiB memory each and the remaining 1464 have 32 GiB each. The fast interconnect is FDR InfiniBand from Mellanox (56 Gb/s) in a 2:1 blocking configuration.
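The peak figure follows directly from the hardware: 1520 servers × 16 cores × 2.2 GHz × 8 double-precision floating-point operations per cycle (the AVX width of a Sandy Bridge core) ≈ 428 Tflop/s.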

Hardware summary 

Triolith
Processor: two Intel® E5-2660 8-core processors (Sandy Bridge, 2.2 GHz) per node, i.e. 16 cores per node
Interconnect: Mellanox FDR InfiniBand (56 Gb/s) in a 2:1 blocking configuration
Node memory: 32 GiB on 1464 of the compute servers, 128 GiB on 56 of the compute servers
Node local scratch disk: 500 GiB or 2 x 500 GiB per node

Software summary

Triolith

Operating System:

CentOS 6.x 64-bit Linux

Resource Manager:

SLURM 2

Scheduler:

SLURM 2

Compiler:

Intel compiler collection
icc and ifort

Math library:

Intel Math Kernel Library
Cluster Edition

MPI:

Intel MPI: a message-passing interface library

Open MPI: a high-performance message-passing library
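
As a quick illustration of how this toolchain fits together, below is a minimal MPI "hello world" sketch in C. The compile and launch commands in the comments assume the standard Intel MPI wrapper names (mpiicc, mpirun); consult the NSC user guides for the modules and wrappers actually recommended on Triolith.

    /* hello_mpi.c -- minimal MPI sketch.
     * Assumed build/run commands (verify against the NSC user guides):
     *   mpiicc hello_mpi.c -o hello_mpi    (Intel MPI wrapper around icc)
     *   mpirun -np 16 ./hello_mpi          (one rank per core on one node)
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }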

Centre storage at NSC

The total disk space available to store files at NSC is approximately 2800 TiB. By default, each user has a ~20 GiB home directory with backup. By default, each project that has been allocated computing time on Triolith will have a directory under /proj (e.g. /proj/snic2014-1-123 or /proj/somename) where the project members can store their data associated with that project. The name of the directory is decided by the project's Principal Investigator ("PI"). The default disk space per project for storage during a SNIC project's allocation period, referred to as Permanent in the SNIC application form, is 500 GiB. Disk space above the default limit can be granted following a well-motivated request by e-mail to support@nsc.liu.se.

Node local scratch disk in the hardware summary table above refers to disk space available to running batch jobs, referred to as Temporary in the application form; it is erased between batch jobs. For more information regarding Centre Storage, visit the extensive description of Centre Storage at NSC.
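
To make the Temporary storage concrete, the sketch below (in C) writes an intermediate file to the node-local scratch disk. It assumes the batch system exports the per-job scratch path in an environment variable; SNIC_TMP is used here as an assumed name, so check the NSC user guides for the actual variable on the system.

    /* scratch_demo.c -- write an intermediate file to node-local scratch.
     * SNIC_TMP is an assumed variable name; verify it in the NSC docs. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *scratch = getenv("SNIC_TMP");  /* per-job scratch dir (assumed) */
        if (scratch == NULL)
            scratch = "/tmp";                      /* fallback outside a batch job */

        char path[4096];
        snprintf(path, sizeof path, "%s/intermediate.dat", scratch);

        FILE *f = fopen(path, "w");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "scratch data\n");
        fclose(f);

        /* Files here are erased between batch jobs: copy anything you want
         * to keep to your /proj directory before the job ends. */
        return 0;
    }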

For more information about the systems, storage, and available software, visit the NSC web pages, especially the user guides.