Introduction

Niflheim overview

Niflheim currently consists of the following hardware:

  • A total of 751 compute nodes.
  • The nodes contain a total of 16,496 CPU cores.
  • Aggregate theoretical peak performance of more than 797 TeraFLOPS (TFLOPS).
  • Two nodes are equipped with Nvidia GPUs; together these GPUs have a peak performance of more than 10 TeraFLOPS.
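
These aggregate figures are simply the sums of the per-generation node counts, core counts and peak speeds listed in the CPU architectures section below. As a minimal cross-check, the short Python sketch below reproduces them from those per-node figures (the numbers are the ones quoted on this page, not measured values):

    # Cross-check of the aggregate figures from the per-generation data
    # listed under "CPU architectures" (name, nodes, cores/node, GFLOPS/node).
    generations = [
        ("Skylake Xeon Gold 6148",      192, 40, 3072),
        ("Broadwell Xeon E5-2650 v4",   192, 24,  845),
        ("Ivy Bridge Xeon E5-2650 v2",   48, 16,  166),
        ("Sandy Bridge Xeon E5-2670",   111, 16,  166),
        ("Nehalem Xeon X5550",           68,  8,   85),
        ("Nehalem Xeon X5570",          140,  8,   94),
    ]
    print(sum(n for _, n, _, _ in generations))             # 751 compute nodes
    print(sum(n * c for _, n, c, _ in generations))         # 16496 CPU cores
    print(sum(n * g for _, n, _, g in generations) / 1000)  # ~797 TFLOPS aggregate peak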

CPU architectures

Niflheim comprises several generations of hardware for a total of 751 compute nodes:

  • 192 40-core nodes Dell C6420 and R640 with two Intel Skylake Xeon_Gold_6148 20-core CPUs (a total of 7680 cores) running at 2.40 GHz base frequency (up to 3.70 GHz in Turbo_mode).

    Peak speed: 3072 GFLOPS/node, 590 TFLOPS in total (the derivation of the peak speeds is sketched after this list).

    The RAM memory type is 2666 MHz DDR4 dual-rank memory:

    • 180 C6420 nodes have 384 GB of memory (9.6 GB/core).
    • 12 R640 nodes have 768 GB of memory (19.2 GB/core).

    Each server has a 240 GB local SSD hard disk. The network interconnect is 100 Gbit/s Intel OmniPath.

    Installed in April 2019.

  • 192 24-core nodes Huawei XH620 v3 with two Intel Broadwell Xeon_E5-2650_v4 12-core CPUs (a total of 4608 cores) running at 2.20 GHz (up to 2.90 GHz in Turbo_mode).

    Peak speed: 845 GFLOPS/node, 162 TFLOPS in total.

    The RAM memory type is 2400 MHz DDR4 dual-rank memory:

    • 180 nodes have 256 GB of memory (10.7 GB/core)
    • 12 nodes have 512 GB of memory (21.3 GB/core)

    Each server has a 240 GB local SSD hard disk. The network interconnect is 100 Gbit/s Intel OmniPath.

    Installed in December 2016, March 2017, November 2017.

  • 48 16-core nodes Dell PowerEdge C8220 with two Intel Ivy_Bridge Xeon_E5-2650_v2 8-core CPUs (a total of 768 cores) running at 2.60 GHz (up to 3.40 GHz in Turbo_mode).

    Peak speed: 166 GFLOPS/node, 8 TFLOPS in total.

    The RAM memory type is 1866 MHz DDR3 dual-rank memory:

    • 28 nodes have 128 GB of memory (8 GB/core).
    • 20 nodes have 256 GB of memory (16 GB/core).

    Each server has a 300 GB local SSD hard disk. The network interconnect is QDR Infiniband.

    Installed in May 2014, April 2015 and June 2015.

  • 111 16-core nodes HP SL230s_Gen8 with two Intel Sandy_Bridge Xeon_E5-2670 8-core CPUs (a total of 1776 cores) running at 2.60 GHz (up to 3.30 GHz in Turbo_mode).

    Peak speed: 166 GFLOPS/node, 18 TFLOPS in total.

    The RAM memory type is 1600 MHz DDR3 dual-rank memory:

    • 77 nodes have 64 GB of memory (4 GB/core).
    • 28 nodes have 128 GB of memory (8 GB/core).
    • 6 nodes have 256 GB of memory (16 GB/core).

    Graphics Processing Units (GPUs) are installed in 2 nodes, each of which has 4 Nvidia Tesla_K20X GPUs. These 8 K20X GPU units have a total peak performance of 10.5 TFLOPS (see the sketch after this list).

    Each server has a 300 GB local hard disk. The network interconnect is QDR Infiniband.

    Installed in August 2012 and November 2013.

  • 68 8-core nodes HP SL2x170z_G6 with two Intel Nehalem Xeon_X5550 quad-core CPUs (a total of 544 cores) running at 2.67 GHz.

    Peak speed: 85 GFLOPS/node, 6 TFLOPS in total.

    The RAM memory type is 1333 MHz DDR3 dual-rank memory:

    • 22 nodes have 48 GB of memory (6 GB/core).
    • 46 nodes have 24 GB of memory (3 GB/core).

    Each server has a 160 GB SATA disk. The network interconnect is dual-port Gigabit Ethernet.

    Installed in August 2010.

  • 140 8-core nodes HP DL160_G6 with two Intel Nehalem Xeon_X5570 quad-core CPUs (a total of 1120 cores) running at 2.93 GHz.

    Peak speed: 94 GFLOPS/node, 13 TFLOPS in total.

    The RAM memory type is 1333 MHz DDR3 memory:

    • All nodes have 24 GB of memory (3 GB/core).

    Each server has a 160 GB SATA disk. The network interconnect is dual-port Gigabit Ethernet.

    Installed in July-September 2009.
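
The per-node peak speeds quoted above follow the usual rule of thumb: CPU cores per node × clock frequency × double-precision FLOPs per core per cycle, where the FLOPs-per-cycle figure depends on the SIMD capabilities of the CPU generation. The Python sketch below illustrates this for the Skylake and Broadwell nodes and for the K20X GPUs; the FLOPs-per-cycle values (32 for AVX-512 with FMA, 16 for AVX2 with FMA) and the 1.31 TFLOPS double-precision figure per K20X card are assumptions used for illustration, not numbers taken from this page.

    # Rule-of-thumb peak speed: cores/node * GHz * double-precision FLOPs per core per cycle.
    # The FLOPs/cycle values are assumptions (Skylake AVX-512 FMA: 32, Broadwell AVX2 FMA: 16).
    def peak_gflops_per_node(cores, ghz, flops_per_cycle):
        return cores * ghz * flops_per_cycle

    skylake = peak_gflops_per_node(40, 2.40, 32)     # 3072 GFLOPS/node
    broadwell = peak_gflops_per_node(24, 2.20, 16)   # ~845 GFLOPS/node

    print(skylake * 192 / 1000)      # ~590 TFLOPS for the 192 Skylake nodes
    print(broadwell * 192 / 1000)    # ~162 TFLOPS for the 192 Broadwell nodes

    # GPU total: 8 Tesla K20X cards at an assumed 1.31 TFLOPS (double precision) each.
    print(8 * 1.31)                  # ~10.5 TFLOPS

    # Memory per core is simply GB/node divided by cores/node, e.g. the 384 GB Skylake nodes:
    print(384 / 40)                  # 9.6 GB/core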

File servers

Several Linux file servers are available for the departmental user groups. Each group is assigned a file-system on one of the existing file servers. Depending on disk requirements, group file-systems can be 1 TB or larger.

The file servers are standard Linux servers with large disk arrays, and they share the file-systems over NFS. We do not use any parallel file systems (for example, Lustre).
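
As a minimal illustration of how a group file-system appears from a compute node, the sketch below queries the size and free space of an NFS-mounted file-system; the mount point is a hypothetical example, not an actual Niflheim path:

    # Minimal sketch: query the size and free space of a group file-system
    # mounted over NFS. "/home/mygroup" is a hypothetical mount point.
    import shutil

    usage = shutil.disk_usage("/home/mygroup")
    tb = 1000 ** 4
    print(f"total: {usage.total / tb:.1f} TB, free: {usage.free / tb:.1f} TB")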

New file servers were installed in 2016, providing significantly more disk space.
