1   Download and Installation

1.2   Installation

The following sections describe how the fortran program, pseudopotentials, and python interface are installed.

If you have an existing installation using the old CamposASE, it is enough to install the new python interface; see dacapo python interface.

1.2.1   Dacapo fortran program and pseudopotentials

This section describes how the fortran program and the pseudopotentials are installed. The current version of the fortran program is 2.7.7.

1.2.2   From RPM

A binary RPM (built and tested on a Pentium 4) can be used to install dacapo on a Pentium 4 system.

This will install the pseudopotentials in /usr/share/dacapo/psp and the binary executable in /usr/bin.
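
For example, assuming the RPM file is named Dacapo-2.7.7-1.i386.rpm (the exact filename depends on the release), install it as root with:

rpm -ivh Dacapo-2.7.7-1.i386.rpm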

1.2.3   From Tarfile

A Dacapo tar file containing the fortran source code and pseudopotentials can be used to install dacapo on a non-RPM system.

Unpack it with:

gunzip Dacapo-2.7.7.tar.gz
tar -xf Dacapo-2.7.7.tar

1.2.4   Dacapo binaries for different platforms

The page dacapo binaries lists dacapo binaries for different platforms.

1.2.5   Compiling the fortran source code

Compile the Dacapo source code by:

cd src
gmake <arch> [MP=mpi]

where <arch> is one of the following (an example invocation is given after the list):

  • sun_ss10

    Sun SPARCstation 10/20

  • ibm_power3

    RS/6000 power3 node

  • ibm_power2_wide

    RS/6000 wide power2 node

  • ibm_power2_thin

    RS/6000 thin/thin2 power2 node

  • sgi

    Silicon Graphics n32 ABI

  • alpha

    Digital Alpha

  • pglinux

    Portland Group (PGI) pgf90 compiler on Linux

  • intellinux

    Intel ifort Fortran compiler (previously ifc) version >=6.0 on Linux

  • pathscale

    Pathscale EKOpath compiler for AMD64 and EM64T
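
For example, to build both a serial and a parallel (MPI) executable with the Intel compiler on Linux:

gmake intellinux
gmake intellinux MP=mpi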

On many UNIX systems it is important to use the GNU gmake command instead of the system's own make command, which may be incompatible with our Makefile.

More details should follow here...

1.2.6   Installing the pseudopotentials

The dacapo fortran program prepends the environment variable DACAPOPATH to the pseudopotential filename (if the file is not found in the current working directory). Copy all pseudopotentials to a directory and set the DACAPOPATH environment variable to this directory:

cp psp/*/*/*.pseudo /some/directory/
setenv DACAPOPATH /some/directory/
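
The commands above use csh/tcsh syntax; in bash the equivalent is:

export DACAPOPATH=/some/directory/
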
1.2.7   dacapo python interface

  1. Get the latest version of the dacapo python interface, either:

    1. from the tar file, or
    2. from cvs:

      cvs checkout dacapo/Python

  2. For case 1, unpack the tar file and:

    [home] $ cd Dacapo

    (for case 2, cd dacapo/Python)

  3. Install with the standard setup.py script (if you have root permission):

    [Python] $ python setup.py install

    If you do not have root permission, use:

    [Python] $ python setup.py install --prefix=/some/where/in/your/path

    In the latter case you must set your PYTHONPATH environment variable, as directed by the setup.py script.

  4. As an alternative to step 3, simply set the PYTHONPATH environment variable to point at your cvs checkout directory.
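
For example, in csh/tcsh (the path below is only illustrative; use the location of your own checkout):

setenv PYTHONPATH /home/user/dacapo/Python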

1.3   CVS access

You can access the code from CVS. See the Campos CVS Page.

After the CVS login, get the dacapo fortran source using:

cvs -d :pserver:USERID@cvs.fysik.dtu.dk:/home/camp/CVSROOT checkout dacapo/src

Get the pseudopotentials using:

cvs -d :pserver:USERID@cvs.fysik.dtu.dk:/home/camp/CVSROOT checkout dacapo/psp

Get the Python interface using:

cvs -d :pserver:USERID@cvs.fysik.dtu.dk:/home/camp/CVSROOT checkout dacapo/Python

2   Running Dacapo in parallel

Dacapo can run in parallel using the MPI parallel library. You need to compile a parallel executable:

gmake <arch> MP=mpi

To get dacapo to work in parallel with ASE you need a script dacapo.run, which must be executable and in your path. dacapo.run is an example of such a script; it uses LAM/MPI and the PBS batch system (a minimal sketch is given below).

If you do not use a batch system you can replace the line:

MACHINEFILE=$PBS_NODEFILE

with an explicit file containing the names of the nodes, one per line:

MACHINEFILE=/your/machine/file
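
A minimal sketch of such a dacapo.run script is shown below. The executable names are taken from section 2.1, and it is assumed that the arguments supplied by ASE can be passed straight through to the executable; adapt it to your installation:

#!/bin/sh
# Minimal dacapo.run sketch for LAM/MPI under PBS (executable names
# and argument handling are assumptions; adjust as needed).
DACAPOEXE="dacapo_2.7.7.run"
DACAPOEXE_PAR="dacapo_2.7.7_mpi.run"
MACHINEFILE=$PBS_NODEFILE
NNODES=`wc -l < $MACHINEFILE`
if [ "$NNODES" -gt 1 ]; then
    lamboot $MACHINEFILE                    # start the LAM runtime on the nodes
    mpirun -np $NNODES $DACAPOEXE_PAR "$@"  # run the parallel executable
    lamhalt                                 # shut down the LAM runtime
else
    $DACAPOEXE "$@"                         # single node: run serially
fi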

2.1   OpenMPI

For dacapo to run together with ASE you need a dacapo.run script in your path, that will start the correct dacapo executable. This OpenMPI dacapo.run script assumes you are running OpenMPI using the PBS batch system.

You might have to edit the location and names of the serial and parallel executables in this script, i.e. the lines:

# Name of serial and parallel DACAPO executables
DACAPOEXE="dacapo_2.7.7.run"
DACAPOEXE_PAR="dacapo_2.7.7_mpi.run"

If OpenMPI is not installed under /usr you will also have to change this in the script.
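
For illustration, the parallel launch inside such a script might look like the following fragment (variable names as in the script above; this is a sketch, not the verbatim script):

NP=`wc -l < $PBS_NODEFILE`          # number of processors allocated by PBS
mpirun -np $NP $DACAPOEXE_PAR "$@"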

Copy this script to /usr/local/bin/dacapo.run.

3   Notes for installation on specific computers

Dacapo can be built on a large number of different systems and compilers. This portability has been evolving over the years, and the supported systems are displayed by the command make in the top level source code directory. Below we give specific instructions for some systems which we actively use at our site.

If you would like to contribute new entries to the Makefile, correct errors, or add complete instructions for a new platform to the present Wiki page, please send an E-mail to support@fysik.dtu.dk.

3.1   Opteron (Pathscale EKOpath compiler)

This build assumes you have the Pathscale EKOpath Fortran compiler, the OpenMPI message passing library and the ACML math library installed. Below follow details on how to build the NetCDF and FFTW libraries needed by Dacapo.

3.1.1   NetCDF (Network common Data Format)

Download the NetCDF software and read the NetCDF installation instructions.

Build netcdf like this:

tar -zxf netcdf-3.6.1.tar.gz
cd netcdf-3.6.1/src
./configure --prefix=/usr FC=pathf90 FCFLAGS=-byteswapio CC=pathcc CXX=pathCC CPPFLAGS='-DNDEBUG -DpgiFortran'
make

and then install in /usr running as root:

make install

See also the Niflheim note on building a NetCDF RPM.

3.1.2   FFTW (Fast Fourier Transform library)

Download FFTW version 2.1.5 and build it like this:

tar -zxf fftw-2.1.5.tar.gz
cd fftw-2.1.5
./configure F77=pathf90 CC=pathcc CFLAGS=-O3 FFLAGS=-O3
make

and then as root:

make install

This will install FFTW in /usr/local.

3.1.3   Dacapo

Unpack the dacapo tar-file:

tar -xzf dacapo-2.7.7.tar.gz
cd dacapo-2.7.7/src

Set the environment variables for ACML, NETCDF and FFTW:

setenv ACML /opt/acml3.5.0/pathscale64
setenv NETCDF /usr/local
setenv FFTW /usr/local

Select the location of the MPI library which you want to use. The default is to use the MPI library installed in /usr as set by:

setenv MPIDIR /usr

Alternative locations may be specified, for example:

setenv MPIDIR /usr/local/infinipath-1.3.1

Now compile dacapo (serial and parallel):

make pathscale
make pathscale MP=mpi

Ignore warnings about type mismatch when compiling ms.F; these are due to MPI expecting pointers to integer arrays for all data types.

Now copy the compiled executables to somewhere on your path, e.g.:

cp pathscale_serial/dacapo.run /usr/local/bin/dacapo_<version>.run
cp pathscale_mpi/dacapo.run /usr/local/bin/dacapo_<version>_mpi.run
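
With the current version and the executable names used in section 2.1, this would be, for example:

cp pathscale_serial/dacapo.run /usr/local/bin/dacapo_2.7.7.run
cp pathscale_mpi/dacapo.run /usr/local/bin/dacapo_2.7.7_mpi.run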

3.2   Portland Group (PGI) compiler

Here we will build using the Portland Group's PGI Workstation version 6.0-5 Fortran compiler, and a precompiled BLAS library from the ATLAS project.

3.2.1   Atlas BLAS

Take a precompiled version (3.6.0) of ATLAS that best fits your system. The rest of the installation assumes you have installed ATLAS in /usr/local/lib.

The precompiled version of ATLAS appends two (double) underscores to Fortran symbols, so use the pgf90 flag -Msecond_underscore for the rest of the Fortran compilations.

3.2.2   NetCDF (Network common Data Format)

Download the NetCDF software and read the NetCDF installation instructions.

Build netcdf like this:

tar -zxf netcdf-3.6.1.tar.gz
cd netcdf-3.6.1/src
./configure --prefix=/usr/local FC=pgf90 FCFLAGS='-byteswapio -Msecond_underscore' CC=pgcc CXX=pgCC CPPFLAGS='-DNDEBUG -DpgiFortran'
make

and then install as the root superuser:

make install
3.2.3   FFTW (Fast Fourier Transform library)

Download FFTW version 2.1.5 and build FFTW like this:

tar -zxf fftw-2.1.5.tar.gz
cd fftw-2.1.5
./configure --prefix=/usr/local F77=pgf90 CC=pgcc CFLAGS=-O3 FFLAGS='-O3 -Msecond_underscore'
make

and then install as the root superuser:

make install
3.2.4   LAM-MPI

Here we will use the LAM-MPI message passing library.

Download the tarfile and build LAM-MPI for installation in /usr/local/lam-7.1.2-pgi and with the ssh remote-shell command, using:

tar xzvf lam-7.1.2.tar.gz
cd lam-7.1.2
setenv CPPFLAGS '-DNDEBUG -Df2cFortran'
setenv F77 pgf90
setenv FC  pgf90
setenv FFLAGS '-byteswapio -Msecond_underscore'
./configure --prefix=/usr/local/lam-7.1.2-pgi/ --with-rsh="/usr/bin/ssh -a -x" CPPFLAGS='-DNDEBUG -Df2cFortran'  F77=pgf90 FC=pgf90  FFLAGS='-byteswapio -Msecond_underscore'
make

and then install as the root superuser:

make install
3.2.5   Dacapo

Get and unpack the Dacapo tar file:

tar xzvf Dacapo-2.7.7.tar.gz
cd Dacapo-2.7.7/src

or check the Dacapo code out from CVS:

cvs checkout dacapo/src
cd dacapo/src

Set the environment variables to use specific versions of LAPACK (PGI compiler version 6.2) and ATLAS BLAS:

setenv BLASLAPACK '/usr/pgi/linux86/6.2/lib/liblapack.a -L/usr/local/lib -lcblas -lf77blas -latlas'
setenv NETCDF /usr/local
setenv FFTW /usr/local
setenv MPIDIR /usr/local/lam-7.1.2-pgi/

and compile with:

make pglinux
make pglinux MP=mpi

If you get a runtime error from Dacapo similar to this one:

relocation error: /usr/pgi/linux86/6.2/lib/libpthread.so.0: symbol _h_errno, version GLIBC_2.0 not defined in file libc.so.6 with link time reference

then this may possibly be a problem with the PGI compiler installation. You can use ldd dacapo.run to examine which shared libraries are needed, and if libpthread.so.0 in the /usr/pgi tree is referenced, the recommended solution is to remove the soft-link /usr/pgi/linux86/6.2/lib/libpthread.so.0 (or whatever version you have).
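
For example (the executable path is illustrative):

ldd /usr/local/bin/dacapo_2.7.7.run | grep libpthread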

3.3   Intel compiler

The Intel Fortran and C/C++ compilers are installed on Intel CPUs such as Pentium-4, Xeon, Itanium etc. The Intel MKL library contains highly optimized BLAS subroutines, among many other things. It is strongly recommended that you use the latest version of the Intel compilers, since many bugs in the past have caused a number of problems for Dacapo and other software packages.

Homepages: Intel Fortran and Intel C++ Compiler version 9.1.

Manuals in PDF format are in the Intel Fortran guide and the Intel C++ guide.

3.3.1   Intel Math Kernel Library (MKL)

We use the Intel Math Kernel Library (MKL) version 9.0; see the MKL Manuals. An MKL User Forum for the Intel Math Kernel Library is also available.

MKL 9.0 contains many of the libraries required by Dacapo: BLAS, LAPACK and FFTW (see these notes).

The FFTW library must be manually built (by the root superuser) according to the notes FFTW2.x to Intel(R) Math Kernel Library Wrappers. For example, if MKL is installed in /opt/intel/mkl/9.0, the 32-bit FFTW library is built like this:

cd /opt/intel/mkl/9.0/interfaces/fftw2xf
make lib32
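
For a 64-bit or EM64T system the interface Makefile presumably provides corresponding targets (the target name below is an assumption; check the Makefile in the fftw2xf directory):

cd /opt/intel/mkl/9.0/interfaces/fftw2xf
make libem64t    # assumed target name for EM64T; verify in the Makefile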

Now FFTW can be linked in using these loader flags:

-L/opt/intel/mkl/9.0/lib/32 -lfftw2xf_intel
3.3.2   NetCDF (Network common Data Format)

Download the NetCDF software and read the NetCDF installation instructions.

Build NetCDF for the Intel compiler with:

tar -zxf netcdf-3.6.1.tar.gz
cd netcdf-3.6.1/src
./configure --prefix=/usr/local/ifort FC=ifort CC=icc CXX=icpc CPPFLAGS='-DNDEBUG -DpgiFortran'
make

and then install in /usr/local/ifort running as root:

make install
3.3.3   FFTW (Fast Fourier Transform library)

The Intel MKL version of FFTW should be used as discussed above.

If for some reason you can't use MKL's FFTW, you can configure the FFTW library build with:

./configure --prefix=/usr/local/ifort F77=ifort CC=icc CFLAGS=-O3 FFLAGS=-O3
3.3.4   OpenMPI

Get the source tar file from OpenMPI. The version installed here is 1.1.2. You must use the Intel C++ compiler build of October 5, 2006 (build 44) or later; see the OpenMPI FAQ.

Build OpenMPI with support for Torque (installation in /usr/local) and the Intel compilers using:

./configure --prefix=/usr/local/openmpi-1.1.2-intel --with-tm=/usr/local CC=icc CXX=icpc FC=ifort F77=ifort
make
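
and then install as the root superuser (this installs into the --prefix given above):

make install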
3.3.5   Dacapo

You must use the Intel compiler version 9.1 (or later); older versions contain bugs that will give you a lot of trouble.

Check the Dacapo code out from CVS:

cvs checkout dacapo/src
cd dacapo/src

Edit the Makefile section named intellinux if you want to modify the compiler flags to generate optimal code for your particular Intel CPU, where the -x flag controls the code generation (see man ifort):

INTELLINUX_OPT = -O3 -xN

You should also select the Intel MKL library for your specific Intel CPU architecture:

# Intel MKL library (32, 64 or em64t):
MKLPATH=/opt/intel/mkl/9.0/lib/32

Set the environment variables to select the NetCDF and MPI installations (FFTW is provided by MKL, as described above):

setenv NETCDF /usr/local/ifort
setenv MPIDIR /usr/local/openmpi-1.1.2-intel
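
Then compile the serial and parallel executables as for the other platforms:

make intellinux
make intellinux MP=mpi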
3.3.6   Running Dacapo

Execute the following command (bash syntax) to increase the stack size limit before running Dacapo:

ulimit -s unlimited

You can put such commands in /etc/profile.local (or similar, depending on the Linux distribution), which will be executed automatically at (remote) login. (Thanks to Lin Zhuang <lzhuang@whu.edu.cn>.)
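
For example, the relevant line in that file would simply be:

# /etc/profile.local (or your distribution's equivalent)
ulimit -s unlimited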