This page describes installation of application software used by the CAMD group on Niflheim.
- Cloning of nodes
- NetCDF (network Common data Form)
- OpenMPI (A High Performance Message Passing Library)
- Campos ASE (Atomic Simulation Environment)
- RPM for Numeric
- MKL and ACML RPMs
The Niflheim cluster uses the SystemImager toolkit on a central server to create an image of a Golden Client node that has been installed in the usual way from a distribution CD-ROM (CentOS Linux in our case). SystemImager is subsequently used to install identical images of the Golden Client on all of the nodes (changing, of course, the hostname and network parameters).
NetCDF (network Common Data Form) is an interface for array-oriented data access and a library that provides an implementation of the interface. The NetCDF library also defines a machine-independent format for representing scientific data.
NetCDF is installed on the cluster via the RPM netcdf-3.6.1-1.2.el4.fys. This build includes bindings for the PathScale Fortran compiler. A locally developed spec file for the NetCDF RPM is in:
OpenMPI is an open source implementation of the MPI message passing standard.
The OpenMPI version now installed is 1.1.1 and is built with support for:
* Torque task-manager (TM) interface (installation in /usr/local)
* gcc C compiler
* Pathscale EKOpath Fortran compiler (Opteron) or PGI compiler (Intel Pentium-4)
The version on the cluster is installed using the RPM package openmpi-1.1.1-2, which is built using the buildrpms.sh script. Download this script to the working directory (/home/camp/rpmbuild/SOURCES) and customize it for our environment:
Pathscale EKOpath compiler (AMD Opteron):

prefix="/usr/local"
configure_options="--with-tm=/usr/local FC=pathf90 F77=pathf90"
PGI compiler (Intel Pentium-4):

prefix="/usr/local/openmpi-1.1.1"
configure_options="--with-tm=/usr/local FC=pgf90 F77=pgf90 CC=pgcc CXX=pgCC"
openmpi-1.1.4 (the prefix variable is not used!):

configure_options="--with-tm=/usr/local FC=pgf90 F77=pgf90 CC=pgcc CXX=pgCC CFLAGS=-Msignextend CXXFLAGS=-Msignextend --with-wrapper-cflags=-Msignextend --with-wrapper-cxxflags=-Msignextend FFLAGS=-Msignextend FCFLAGS=-Msignextend --with-wrapper-fflags=-Msignextend --with-wrapper-fcflags=-Msignextend"
rpmbuild_options="--define 'install_in_opt 1' --define 'install_profile_d_scripts 1'"
The rpmbuild_options are needed to install OpenMPI into /opt/openmpi/1.1.4 and the configuration scripts into /etc/profile.d. The resulting RPM package will depend on libpgc.so, so make sure that the PGI libraries are installed on the nodes.
Change the build parameters to build one single RPM including man pages (building multiple RPMs did not seem to work in version 1.1.1):

build_srpm=no
build_single=yes
build_multiple=no
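Collecting the settings above, the edited section of buildrpms.sh for the Opteron/Pathscale build could look like the following sketch (variable names as quoted on this page; check them against the actual script shipped in the OpenMPI tar-file):

```shell
# buildrpms.sh customization for the Opteron/Pathscale build (sketch).
# Installation prefix and configure options for the Torque (TM) interface:
prefix="/usr/local"
configure_options="--with-tm=/usr/local FC=pathf90 F77=pathf90"
# Build one single RPM including man pages:
build_srpm=no
build_single=yes
build_multiple=no
```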
buildrpms.sh needs the RPM .spec file which can be copied from the unpacked openmpi tar-file:
cp openmpi-1.1.1/contrib/dist/linux/openmpi.spec /home/camp/rpmbuild/SOURCES/
cd /home/camp/rpmbuild/SOURCES/
./buildrpms.sh openmpi-1.1.1.tar.gz
For some parallel libraries, such as BLACS and ScaLAPACK, which use OpenMPI as the communication layer, setting processor affinity seems to be important. Processor affinity pins a process to a specific processor; see more details in this OpenMPI FAQ.
To enable processor affinity use:
mpirun --mca mpi_paffinity_alone 1 -np 4 a.out
Without this option, all copies of the parallel program (e.g. siesta) may land on the same CPU.
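For comparison, the same pinning can be demonstrated outside MPI with taskset from util-linux, which binds an arbitrary command to a chosen CPU (the echo below is just a stand-in for a real program):

```shell
# Pin a (stand-in) command to CPU 0 only; taskset is part of util-linux.
# mpirun --mca mpi_paffinity_alone 1 arranges the equivalent binding
# for each MPI rank automatically.
taskset -c 0 echo "pinned to CPU 0"
```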
We install Campos ASE from CVS and use Python to build RPMs by:
cd CamposASE2
python setup.py bdist_rpm
The resulting architecture-independent RPM file should be copied to /home/camp/rpmbuild/RPMS:
cp build/bdist.linux-x86_64/rpm/RPMS/campos-ase-2.3.2-1.noarch.rpm /home/camp/rpmbuild/RPMS
(this was made on an x86_64 architecture machine).
Start by checking the latest version out from CVS:
cvs co dacapo/Python
cd dacapo/Python
Update the file setup.py with the correct version and release numbers (Lars: how do we know these ??) like this:
setup(name = 'Dacapo', version='0.9',
Now build a tar-file using:
python setup.py sdist
The tar-file will be in dist/Dacapo-<version>.tar.gz, copy this file to the rpmbuild directory /home/camp/rpmbuild/SOURCES, and build the RPM package using:
rpmbuild -bb campos-dacapo-python.spec
The architecture independent RPM campos-dacapo-python is placed in /home/camp/rpmbuild/RPMS.
An RPM package containing only the binary executables /usr/local/bin/dacapo... is built.
Follow the steps for compiling dacapo on the Opteron nodes, stopping after the binary executables have been built:
cd dacapo-<version>/src
make pathscale
make pathscale MP=mpi
Now make a tar file of the directory dacapo-<version> (<version> may be 2.7.7 or later), placing the tar-ball in the RPM SOURCES directory:
cd ../..
tar zcf /home/camp/rpmbuild/SOURCES/campos-dacapo-<version>-opteron.tar.gz dacapo-<version>
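Before running rpmbuild, it can save a failed build to check that the tar-ball unpacks into the directory name the spec file's %setup expects. A small self-contained sketch (the dacapo-2.7.7 name and dummy tree are illustrative):

```shell
# Sanity-check a source tar-ball before handing it to rpmbuild:
# the top-level directory in the archive must match the spec's %setup name.
tmp=$(mktemp -d)
mkdir -p "$tmp/dacapo-2.7.7/src"            # stand-in for the real source tree
echo "dummy" > "$tmp/dacapo-2.7.7/src/Makefile"
tar zcf "$tmp/campos-dacapo-2.7.7-opteron.tar.gz" -C "$tmp" dacapo-2.7.7
# List the archive; every entry should start with dacapo-2.7.7/
tar tzf "$tmp/campos-dacapo-2.7.7-opteron.tar.gz"
rm -rf "$tmp"
```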
Use the spec file in /home/camp/rpmbuild/SPECS/campos-dacapo-opteron.spec, and remember to update version and release numbers in the spec file (Lars: How do we know the version number ??). Now build the RPM package by:
rpmbuild -bb campos-dacapo-opteron.spec
The spec file /home/camp/rpmbuild/SPECS/campos-dacapo-pgi.spec will build the i386 version of dacapo on svol; the current version is 2.7.8. Follow the steps for compiling dacapo on svol, ending with:
gmake pglinux MP=mpi
gmake pglinux
in the directory dacapo/src. Now make a tarfile using:
cd ../..
tar -zcf dacapo_<version>.tar.gz dacapo/src
Copy this file to /home/camp/rpmbuild/SOURCES/ and build the rpm using:
rpmbuild -bb /home/camp/rpmbuild/SPECS/campos-dacapo-pgi.spec
Make sure that the version number and release number are correct in the spec file.
The SIESTA setup at Niflheim consists of the following RPMs:
- /home/camp/rpmbuild/SPECS/siesta2.spec: builds the serial version of siesta2.
- /home/camp/rpmbuild/SPECS/siesta2-openmpi.spec: builds the parallel siesta version 2.
- /home/camp/rpmbuild/SPECS/campos-siesta-pseudopotentials.spec (spec file on local desktops): builds the small local fys pseudopotential database, mainly for initial tryout of SIESTA.
The serial SIESTA is built with the ACML library, using the spec file siesta2.spec. The build part looks like:
%build
cd Src
FC="pathf90" FCFLAGS="-O2 -OPT:Olimit=0" \
LDFLAGS="/opt/pathscale//lib/2.4/libpathfortran.a" \
%configure --with-lapack=/opt/acml3.5.0//pathscale64/lib/libacml.a --build='x86_64' --with-blas=/opt/acml3.5.0//pathscale64/lib/libacml.a
The parallel version of SIESTA requires BLACS (Basic Linear Algebra Communication Subprograms) and ScaLAPACK, plus an MPI message-passing library. Here we build SIESTA with OpenMPI. See also the Niflheim note on OpenMPI.
* mpiblacs.tgz: MPI version of the BLACS. UPDATED: May 5, 1997 (Version 1.1)
* mpiblacs-patch03.tgz: MPIBLACS patch. Details in the old_errata.blacs file. To install: gunzip -c mpiblacs-patch03.tgz | tar xvf - (Date: February 24, 2000)
* blacstester.tgz
These files are installed in /home/camp/rpmbuild/blacs/BLACS.
BLACS is built on top of an MPI library; here we use the OpenMPI library. The configuration file BLACS/Bmake.inc is edited following the OpenMPI BLACS FAQ. For more details see this FAQ and the Bmake.inc file.
Setting processor affinity also seems to be important when running a program based on BLACS.
The tar file scalapack-1.7.4.tgz from ScaLAPACK is used and installed in /home/camp/rpmbuild/scalapack/scalapack-1.7.4.
The configuration file scalapack-1.7.4/SLmake.inc is edited following the OpenMPI ScaLAPACK FAQ. For more details see this FAQ and the SLmake.inc file.
Finally, the parallel siesta2 is built using the spec file siesta2-openmpi.spec. The tar file for siesta2 is in /home/camp/rpmbuild/SOURCES/siesta-2.0.tgz. The configuration file for this build is /home/camp/rpmbuild/SOURCES/arch.make.
From Magnus Paulsson <email@example.com> we have received the following advice about building SIESTA:
The atom.f module should be compiled with zero optimization (-O0).
The dhscf.F module should be compiled with -O2.
The vmat.f, vmatsp.f, rhoofd.f, cellxc.F modules should be optimized with -O3.
In MPI/mpi.F remove the #ifdef OLD_CRAY part.
When using the OpenMPI MPI-library, please use version 1.1.2 (or later) since there were problems with version 1.1.1.
You can specify the optimizations in the Makefile:
atom.o:
	$(FC) -c -m64 -O0 atom.f
dhscf.o:
	$(FC) -c -m64 -O2 $(INCFLAGS) $(FPPFLAGS) $(FCFLAGS_fixed_f) dhscf.F
vmat.o:
	$(FC) -c -m64 -O3 $(INCFLAGS) $(FCFLAGS_fixed_f) vmat.f
vmatsp.o:
	$(FC) -c -m64 -O3 $(INCFLAGS) $(FCFLAGS_fixed_f) vmatsp.f
rhoofd.o:
	$(FC) -c -m64 -O3 $(INCFLAGS) $(FCFLAGS_fixed_f) rhoofd.f
cellxc.o:
	$(FC) -c -m64 -O3 $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_fixed_F) cellxc.F
Unpack the tar-file Numeric-24.2.tar.gz somewhere. For the P4 s50 nodes, we link to MKL, so add this section to customize.py:
use_system_lapack = 1
lapack_library_dirs = ['/opt/intel/mkl/9.0/lib/32']
lapack_libraries = ['mkl', 'mkl_lapack', 'g2c']
use_dotblas = 1
dotblas_include_dirs = ['/opt/intel/mkl/9.0/include']
dotblas_cblas_header = '<mkl_cblas.h>'
For opterons, we use ACML:
use_system_lapack = 1
lapack_library_dirs = ['/opt/acml3.5.0/gnu64/lib/']
lapack_libraries = ['acml','g2c']
use_dotblas = 1
dotblas_libraries = ['cblas','acml','g2c']
For historical reasons, the name of the s50 RPM is python-numeric while the Opteron RPM is called Numeric (the default), so edit setup.py so that the RPM for the s50s is named python-numeric:
setup (name = "python-numeric",
After running python setup.py bdist_rpm (on thul/svol for s50, on slid for Opteron) you will find the new RPM in the build/bdist.xxx/rpm/RPMS directory. Copy it to /home/niflheim/rpmbuild/RPMS.xxx (xxx is s50 or opteron) and distribute it to all nodes.
Since Numeric is linked to MKL/ACML, we need packages for those libraries too. Spec files are found in /home/niflheim/rpmbuild/SPECS and source tarballs are in /home/niflheim/rpmbuild/SOURCES. RPMs are built by the user rpmbuild like this:
cd /home/niflheim/rpmbuild/SPECS
rpmbuild -bb acml.spec
rpmbuild -bb niflheim-s50-mkl.spec
After that, the RPMs should be moved from the RPMS directory to one of RPMS.s50 or RPMS.opteron and then they should be distributed.
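The move-and-sort step can be scripted. The sketch below routes freshly built RPMs into RPMS.s50 or RPMS.opteron based on a node-type argument; the directory layout follows this page, but the helper function itself is hypothetical:

```shell
# sort_rpms: move freshly built RPMs into the per-architecture directory.
# Usage: sort_rpms <s50|opteron> <rpmbuild-top-directory>
# (Hypothetical helper; directory names RPMS, RPMS.s50, RPMS.opteron
# follow the layout used on Niflheim.)
sort_rpms () {
    arch="$1"
    top="$2"
    case "$arch" in
        s50|opteron) ;;                       # only these two node types exist
        *) echo "unknown node type: $arch" >&2; return 1 ;;
    esac
    mkdir -p "$top/RPMS.$arch"
    mv "$top"/RPMS/*.rpm "$top/RPMS.$arch"/
}
```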