In order to compile your code for the desired compute node architecture, log in to the corresponding login node, see login-to-niflheim.


GNU compilers are available by default and the corresponding openmpi packages are available as modules, see Parallelization for details.

Newer GNU compiler versions are available through modules:

module load libmpc/0.8-3.el6
module load gcc/4.8.3-`echo $FYS_PLATFORM | cut -d "-" -f 1`-1
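The backquoted expression selects the architecture part of the FYS_PLATFORM environment variable. Assuming FYS_PLATFORM has the form <architecture>-<os> (xeon8-el6 below is an illustrative value, not necessarily one present on your node), cut keeps the part before the first dash:

```shell
# Illustrative value; on Niflheim FYS_PLATFORM is set by the login environment.
FYS_PLATFORM=xeon8-el6
echo $FYS_PLATFORM | cut -d "-" -f 1
# -> xeon8
```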


List available intel compilers with:

module avail intel-compilers

and load the desired version, e.g. the recommended one with:

module load intel-compilers

See Installed_software for more about modules.

Each compiler set has the corresponding openmpi package installed, see Parallelization for details.

Intel Math Kernel Library (MKL)

The Intel Math Kernel Library provides a large collection of optimized mathematical routines. Please see the MKL documentation.

Linking your code with MKL can be quite complex. Therefore Intel offers a Math Kernel Library Link Line Advisor to assist with the selection of proper linking commands.
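As an illustration only (generate the exact line with the Link Line Advisor for your MKL version), dynamically linking a 64-bit object file against sequential MKL with the GNU compilers typically looks like the following; myprog.o is a placeholder object file and MKLROOT is assumed to point at your MKL installation:

```shell
# Illustrative link line; library names assume the LP64 interface
# and sequential (non-threaded) MKL.
gcc myprog.o -L${MKLROOT}/lib/intel64 \
    -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm
```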


The NVIDIA CUDA Compiler Driver nvcc is available on the Niflheim login nodes (see Getting_Started), after loading the modules:

module load NVIDIA-Linux-x86_64
module load cuda

An example of nvcc usage, compiling a CUDA source file (here mycode.cu, a placeholder name) into a position-independent object file, would be:

nvcc -arch sm_35 -c -Xcompiler -fPIC mycode.cu

See Installed_software for more about modules.

Installation and configuration of NVIDIA/cuda

The information in this section is relevant only for system administrators.

There are several documents describing the installation and configuration of NVIDIA/cuda; see for example

In order to install NVIDIA/cuda on el6 x86_64 compute nodes the following steps were performed:

  1. configure the node to boot in init 3 mode:

    sed -i 's/id:.:initdefault:/id:3:initdefault:/' /etc/inittab
  2. install packages (libraries are required for some of the NVIDIA_CUDA-5.5_Samples to compile):

    yum -y install kernel-devel wget make gcc-c++ freeglut-devel libXi-devel libXmu-devel mesa-libGLU-devel
  3. rebuild initramfs in order to get rid of the nouveau driver:

    mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
    dracut --omit-drivers nouveau /boot/initramfs-$(uname -r).img $(uname -r)

    Verify that nouveau is not present:

    lsmod | grep nouveau

    Note that after removal of nouveau from initramfs, no blacklisting of nouveau (echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf) is necessary. Note however that blacklisting of nouveau alone is not sufficient - there are other modules in /boot/initramfs that trigger loading of nouveau. See

  4. install NVIDIA drivers:

    sh --silent

    Note that in order to make the NVIDIA libraries available also on the login nodes, which themselves do not contain GPU devices, one can unpack the installer:

    sh --extract-only

    as non-root, copy the shared objects and include files into a desired location and create the links to the shared objects manually:

    for f in /home/opt/common/NVIDIA-Linux-x86_64-331.20/lib/*.so.*; do
       bn=`echo $f | sed 's/.so/:/' | rev | cut -d":" -f 2 | rev`.so  # *.so
       ln -sv `basename $f` $prefix/lib/`basename $bn`.1  # *.so.* -> *.so.1
       ln -sv `basename $bn`.1 $prefix/lib/`basename $bn`  # *.so.1 -> *.so
    done

    For applications which use the NVIDIA libraries, set:

    export LD_LIBRARY_PATH=/home/opt/common/NVIDIA-Linux-x86_64-331.20/lib:${LD_LIBRARY_PATH}
  5. because X is not running (the node is in init 3 mode), the /dev/nvidia* devices need to be created at every boot. This is achieved by the /etc/init.d/nvidia script, available for example at

    chmod a+x /etc/init.d/nvidia
    chkconfig nvidia on
    service nvidia start

    Verify that the nvidia module is loaded:

    lsmod | grep nvidia

    You will see something like:

    nvidia              10613805  0
    i2c_core               31276  1 nvidia

    Note that step 3. may be unnecessary. Without the removal of nouveau from initramfs, and after installation of NVIDIA drivers (step 4.) you will see:

    i2c_core               31276  4 nvidia,nouveau,drm_kms_helper,drm

    however this does not seem to affect the operation of the GPU devices.

  6. install cuda (again this can be performed as non-root):

    sh -silent -toolkit -toolkitpath=$prefix -samples -samplespath=$prefix

    Set the PATH, LD_LIBRARY_PATH and INCLUDE variables accordingly.

  7. test the installation on a system containing a GPU device. Note that the make step requires the NVIDIA libraries to be installed under /usr/lib*. As recommended above one can install the NVIDIA libraries in the default location on the compute nodes, and in a custom location on the login nodes. Test the installation on the compute node with:

    cd $prefix/NVIDIA_CUDA-5.5_Samples
    cp -rp ../NVIDIA_CUDA-5.5_Samples ../NVIDIA_CUDA-5.5_Samples.orig
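The version-stripping pipeline used in the link-creation loop of step 4 can be checked in isolation. Given a versioned shared-object path (the filename below is illustrative), it yields the unversioned library name:

```shell
# Illustrative versioned library path; sed marks the first ".so" with ":",
# and rev/cut/rev keep everything up to that marker, giving the bare *.so name.
f=/home/opt/common/NVIDIA-Linux-x86_64-331.20/lib/libcuda.so.331.20
bn=`echo $f | sed 's/.so/:/' | rev | cut -d":" -f 2 | rev`.so
basename $bn
# -> libcuda.so
```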


List available open64 compilers with:

module avail open64

and load the desired version, e.g. the recommended one with:

module load open64

Each compiler set has the corresponding openmpi package installed, see Parallelization for details.

Additional information about Pathscale compilers

After the bankruptcy of Pathscale (and its reseller Aertia), you are encouraged to use the open64 compiler instead.

If you still prefer to use Pathscale, the login node has a single-user license for the Pathscale EKOPATH compiler for AMD64 and EM64T. This was probably the best compiler on the market for AMD Opteron CPUs.

The Pathscale Fortran-90 compiler command is pathf90; the other language commands are pathf95 (Fortran 95), pathcc (C), and pathCC (C++).

Pathscale license manager

Since we have only a single license for the Pathscale compiler, the license manager daemon will allocate the use of the compiler to only one user at a time, with a time-out period of 15 minutes of non-usage.

If you are having trouble with PathScale subscription management while using the compilers, use the -subverbose option as you compile. The information from this option is useful for diagnosing problems. You will need to do an actual compile with the -subverbose option to see the output. For example, you might type:

pathcc -subverbose hello.c

ACML libraries

We have also installed the AMD ACML Math library. Please see the AMD ACML User Guide (pdf). You have to link to different ACML libraries according to which compiler you use, for example:

  • GNU gcc EL5:

    gcc -m64 -I/opt/acml/4.0.1/gfortran64/include/ *.o -L/opt/acml/4.0.1/gfortran64/lib/ -lacml -lgfortran
  • GNU gcc 4.3 EL5:

    gcc43 -m64 -I/opt/acml/4.3.0/gfortran4364/include/ *.o -L/opt/acml/4.3.0/gfortran4364/lib/ -lacml -lgfortran
  • GNU gcc 4.4 EL6:

    gcc -m64 -I/home/opt/common/acml-gfortran-64bit-4.4.0/include *.o -L/home/opt/common/acml-gfortran-64bit-4.4.0/lib/ -lacml -lgfortran
  • open64 EL6:

    opencc -I/home/opt/common/acml-open64-64bit-4.4.0/include *.o -L/home/opt/common/acml-open64-64bit-4.4.0/lib -lacml

Note: problems with dgemm acml 4.1.0 and 4.2.0 have been reported, use these versions at your own risk!

Niflheim: Compilers (last edited 2014-09-28 15:16:46 by MarcinDulak)