Niflheim

Introduction

Niflheim overview

Niflheim currently consists of the following hardware:

  • A total of 893 compute nodes.
  • The nodes contain a total of 11536 CPU cores.
  • Aggregate theoretical peak performance of more than 235 TeraFLOPS (TFLOPS).
  • A few nodes are equipped with GPUs, which have a combined peak performance of more than 10 TeraFLOPS.

CPU architectures

Niflheim comprises several generations of hardware, for a total of 893 compute nodes:

  • 192 Huawei XH620 v3 24-core nodes, each with two 64-bit Intel Broadwell Xeon E5-2650 v4 12-core CPUs (a total of 4608 cores) running at 2.20 GHz.

    Peak speed: 845 GFLOPS/node, 162 TFLOPS in total.

    The RAM is 2400 MHz DDR4 dual-rank memory; the memory size is 256 GB (180 nodes) or 512 GB (12 nodes).

    Each server has a 240 GB local SSD. The network interconnect is 100 Gbit/s Intel Omni-Path.

    Installed in December 2016, March 2017, and November 2017.

  • 51 Dell PowerEdge C8220 16-core nodes, each with two 64-bit Intel Ivy Bridge Xeon E5-2650 v2 8-core CPUs (a total of 816 cores) running at 2.60 GHz.

    Peak speed: 166 GFLOPS/node, 8.5 TFLOPS in total.

    The RAM is 1866 MHz DDR3 dual-rank memory; the memory size is 128 GB (31 nodes) or 256 GB (20 nodes).

    Each server has a 300 GB local SSD. The network interconnect is QDR InfiniBand.

    Installed in May 2014, April 2015 and June 2015.

  • 114 HP SL230s Gen8 16-core nodes, each with two 64-bit Intel Sandy Bridge Xeon E5-2670 8-core CPUs (a total of 1824 cores) running at 2.60 GHz.

    Peak speed: 166 GFLOPS/node, 18.9 TFLOPS in total.

    The RAM is 1600 MHz DDR3 dual-rank memory:

    • 80 nodes have 64 GB of memory.
    • 28 nodes have 128 GB of memory.
    • 6 nodes have 256 GB of memory.

    Graphics Processing Units (GPUs) are installed in 2 nodes, each of which has 4 Nvidia Tesla K20X GPUs. These 8 K20X units have a total peak performance of 10.5 TFLOPS.

    Each server has a 300 GB local hard disk. The network interconnect is QDR InfiniBand.

    Installed in August 2012 and November 2013.

  • 116 HP SL2x170z G6 8-core nodes, each with two 64-bit Intel Nehalem Xeon X5550 quad-core CPUs (a total of 928 cores) running at 2.67 GHz.

    Peak speed: 85 GFLOPS/node, 9.9 TFLOPS in total.

    Each server has 24 GB of 1333 MHz DDR3 memory (one 4 GB DIMM module per memory channel) and a 160 GB SATA disk. The network interconnect is dual-port Gigabit Ethernet.

    Installed in August 2010.

  • 420 HP DL160 G6 8-core nodes, each with two 64-bit Intel Nehalem Xeon X5570 quad-core CPUs (a total of 3360 cores) running at 2.93 GHz.

    Peak speed: 94 GFLOPS/node, 39.4 TFLOPS in total.

    Each server has 24 GB of 1333 MHz DDR3 memory (one 4 GB DIMM module per memory channel) and a 160 GB SATA disk. The network interconnect is dual-port Gigabit Ethernet.

    Installed in July-September 2009.
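The peak-speed figures quoted above follow from a simple formula: cores per node × clock frequency × double-precision FLOPs per core per cycle. A minimal sketch of this arithmetic; note that the FLOPs-per-cycle values used here are assumptions inferred from the quoted peak numbers, not official vendor figures:

```python
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak per node in GFLOPS: cores x clock (GHz) x DP FLOPs/cycle."""
    return cores * clock_ghz * flops_per_cycle

# FLOPs/cycle values below are inferred from the peak numbers quoted above.
print(round(peak_gflops(24, 2.20, 16)))  # Broadwell E5-2650 v4 node: ~845 GFLOPS
print(round(peak_gflops(16, 2.60, 4)))   # Sandy/Ivy Bridge node: ~166 GFLOPS
print(round(peak_gflops(8, 2.93, 4)))    # Nehalem X5570 node: ~94 GFLOPS
```

Multiplying the per-node figure by the node count gives the per-generation totals, e.g. 192 × 845 GFLOPS ≈ 162 TFLOPS for the Broadwell partition.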

RAM memory capacity:

  • The Intel Broadwell Xeon E5-2650 v4 nodes have 256 or 512 GB of RAM, giving an average of 10.7 or 21.3 GB RAM per core.
  • The Intel Ivy Bridge Xeon E5-2650 v2 nodes have 128 or 256 GB of RAM, giving an average of 8 or 16 GB RAM per core.
  • The Intel Sandy Bridge Xeon E5-2670 nodes have 64, 128 or 256 GB of RAM, giving an average of 4, 8 or 16 GB RAM per core.
  • The Intel Nehalem Xeon X5570 nodes have 24 GB of RAM, giving an average of 3 GB RAM per core.
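The per-core averages above are simply node RAM divided by the node's core count; a quick check of the list:

```python
# RAM per core = node memory (GB) / cores per node, per the list above.
node_types = [
    ("Broadwell E5-2650 v4", 24, [256, 512]),
    ("Ivy Bridge E5-2650 v2", 16, [128, 256]),
    ("Sandy Bridge E5-2670", 16, [64, 128, 256]),
    ("Nehalem X5550/X5570", 8, [24]),
]
for name, cores, ram_options in node_types:
    per_core = [round(gb / cores, 1) for gb in ram_options]
    print(f"{name}: {per_core} GB RAM per core")
```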

File servers

Several Linux file servers are available for the departmental user groups. Each group is assigned a file-system on one of the existing file servers. Depending on disk requirements, group file-systems can be 1 TB or larger.

The file servers are standard Linux servers with large disk arrays, sharing the file-systems using NFS. We do not use any parallel file servers (for example, Lustre).
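As a purely illustrative sketch of this kind of NFS setup (the server name, export path, and mount point below are hypothetical, not actual Niflheim paths):

```shell
# Hypothetical names: "fileserver1" and "/export/groupX" are illustrative only.
# On the file server, a group file-system is exported via /etc/exports:
#   /export/groupX  *.cluster.local(rw,sync,no_subtree_check)
# After editing /etc/exports, re-export the file-systems:
sudo exportfs -ra

# On a compute node, the group file-system is mounted with standard NFS:
sudo mount -t nfs fileserver1:/export/groupX /home/groupX
```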

New file servers were installed in 2016, providing significantly more disk space.

Niflheim: Hardware (last edited 2017-11-28 15:35:08 by OleHolmNielsen)