Cluster software

Cloning of nodes

The NIFLHEIM cluster uses the SystemImager toolkit on a central server to create an image of a Golden Client node that has been installed in the usual way from a distribution on CD-ROM (CentOS Linux in our case). SystemImager is subsequently used to install identical images of the Golden Client on all of the nodes (changing, of course, the hostname and network parameters).
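
A rough sketch of that workflow is shown below; command names and options vary between SystemImager versions, and the node name, server name and image name are placeholders only:

si_getimage --golden-client node001 --image centos-node-image
si_updateclient --server <imageserver> --image centos-node-image

The first command is run on the image server to pull the image from the Golden Client; the second is one way of re-syncing an already installed node against that image.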

NetCDF (network Common Data Form)

NetCDF (network Common Data Form) is an interface for array-oriented data access and a library that provides an implementation of the interface. The NetCDF library also defines a machine-independent format for representing scientific data.

NetCDF is installed on the cluster via the rpm netcdf-3.6.1-1.2.el4.fys. This build includes bindings to the PathScale Fortran compiler. A locally developed spec file for the NetCDF rpm is in:

~rpmbuild/SPECS/netcdf.spec
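
The rpm can be rebuilt from this spec file in the usual way, assuming the NetCDF source tar file has been placed in /home/camp/rpmbuild/SOURCES:

rpmbuild -bb ~rpmbuild/SPECS/netcdf.spec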

OpenMPI (A High Performance Message Passing Library)

OpenMPI is an open source implementation of the MPI message passing standard.

The version now installed is 1.1.1 and is built with support for:

* Torque (installed in /usr/local)
* the GCC C compiler
* the PathScale Fortran compiler

The version on the cluster is installed using the rpm openmpi-1.1.1-2. This rpm is built using OpenMPI's buildrpms.sh script.

This is done by modifying the buildrpms.sh script. Change the following lines:

prefix="/usr/local"
configure_options="--with-tm=/usr/local FC=pathf90 F77=pathf90"

buildrpms.sh is used to build a single rpm, including man pages:

build_srpm=no
build_single=yes
build_multiple=no

The build of multiple rpms did not seem to work in version 1.1.1.

buildrpms.sh needs the rpm spec file; this can be copied from the unpacked OpenMPI tar file:

cp openmpi-1.1.1/contrib/dist/linux/openmpi.spec .

Now run, in /home/camp/rpmbuild/SOURCES/:

./buildrpms.sh openmpi-<version>.tar.gz

Processor affinity

For parallel libraries such as BLACS and ScaLAPACK that use OpenMPI as the communication layer, setting processor affinity seems to be important. Processor affinity means that a process is pinned to a specific processor; see the OpenMPI FAQ for more details.

To enable processor affinity use:

mpirun --mca mpi_paffinity_alone 1 -np 4 a.out

Without this option all copies of the parallel program (siesta) would land on one CPU.
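
Instead of passing the option on every mpirun command line, the same MCA parameter can also be set in the standard per-user OpenMPI parameter file, along these lines:

# $HOME/.openmpi/mca-params.conf
mpi_paffinity_alone = 1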

Dacapo

Making RPM for the dacapo python interface

Start by checking the new version out from CVS:

cvs co dacapo/Python
cd dacapo/Python

Update the file setup.py with the correct version and release numbers (Lars: how do we know these ??) like this:

setup(name = 'Dacapo',
    version='0.9',

Now build a tar-file using:

python setup.py sdist

The tar-file will be in dist/Dacapo-<version>.tar.gz; copy this file to the rpmbuild directory /home/camp/rpmbuild/SOURCES and build the RPM package using:

rpmbuild -bb campos-dacapo-python.spec

The architecture-independent RPM campos-dacapo-python is placed in /home/camp/rpmbuild/RPMS.
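
The resulting package can then be installed with rpm; the exact path and file name depend on the version and release numbers chosen above, for example:

rpm -Uvh /home/camp/rpmbuild/RPMS/campos-dacapo-python-<version>.rpm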

Building RPM for Opteron nodes

Follow the steps for compiling dacapo on the opteron nodes, ending with:

gmake pathscale MP=mpi
gmake pathscale

Now make a tar file of the directory campos-dacapo-<version> and place this in the rpm source directory:

cd ..
tar zcf /home/camp/rpmbuild/SOURCES/campos-dacapo-<version>-opteron.tar.gz campos-dacapo-<version>

Now build the rpm using the spec file in /home/camp/rpmbuild/SPECS/campos-dacapo-opteron.spec; remember to update the version and release numbers in the spec file:

rpmbuild -bb campos-dacapo-opteron.spec

SIESTA

The SIESTA setup at Niflheim consists of the following rpms:

  • siesta-2.0-3.2.el4.fys

    spec file /home/camp/rpmbuild/SPECS/siesta2.spec. This will build the serial version of siesta2.

  • siesta2-openmpi-2.0-1.2.el4.fys

    spec file /home/camp/rpmbuild/SPECS/siesta2-openmpi.spec. This will build the parallel SIESTA version 2.

  • campos-siesta-pseudopotentials-1-1.2.el4.fys

    spec file /home/camp/rpmbuild/specs/campos-siesta-pseudopotentials.spec (on the local desktops). This will build the small local fys pseudopotential database, mainly for initial try-outs of SIESTA.

Building serial SIESTA

The serial SIESTA is built against the ACML library, using the spec file siesta2.spec. The build part looks like:

%build
cd Src
FC="pathf90" FCFLAGS="-O2 -OPT:Olimit=0" \
LDFLAGS="/opt/pathscale//lib/2.4/libpathfortran.a" \
%configure --with-lapack=/opt/acml3.5.0//pathscale64/lib/libacml.a --build='x86_64' --with-blas=/opt/acml3.5.0//pathscale64/lib/libacml.a

Building parallel SIESTA

The parallel version of SIESTA requires BLACS (Basic Linear Algebra Communication Subprograms) and ScaLAPACK, plus an MPI message passing library. Here we build SIESTA with OpenMPI <http://www.open-mpi.org/>. See also the Niflheim note on OpenMPI above.

BLACS (Basic Linear Algebra Communication Subprograms)

Since BLACS is not included in ACML (it is included in MKL), we build it from source. The following BLACS tar files are used:

* mpiblacs.tgz
    MPI version of the BLACS.
    UPDATED:  May 5, 1997 (Version 1.1)

* mpiblacs-patch03.tgz
    MPIBLACS patch!! Details in old_errata.blacs file.
    To install: gunzip -c mpiblacs-patch03.tgz | tar xvf -
    Date: February 24, 2000

* blactester.tgz

These files are installed in /home/camp/rpmbuild/blacs/BLACS.
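
Unpacking all three tar files follows the same pattern as the patch instructions above; roughly (check each tar file with tar tzf first if in doubt about the directory layout it produces):

cd /home/camp/rpmbuild/blacs
gunzip -c mpiblacs.tgz | tar xvf -
gunzip -c mpiblacs-patch03.tgz | tar xvf -
gunzip -c blactester.tgz | tar xvf -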

BLACS is built on top of an MPI library; here we use the OpenMPI library. The configuration file BLACS/Bmake.inc is edited following the OpenMPI BLACS FAQ. For more details see this FAQ and the Bmake.inc file.
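
In the spirit of that FAQ, the OpenMPI compiler wrappers can supply the MPI include paths and libraries. A minimal sketch of the kind of settings involved is shown below; the values are illustrative assumptions only, and the real ones must be taken from the FAQ and the comments in Bmake.inc:

# Bmake.inc (illustrative excerpt)
BTOPdir = /home/camp/rpmbuild/blacs/BLACS
COMMLIB = MPI
# the OpenMPI wrappers take care of MPI paths and libraries
F77     = mpif77
CC      = mpicc
MPILIB  =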

Setting processor affinity also seems important when running a program based on BLACS.

ScaLAPACK

The tar file scalapack-1.7.4.tgz from ScaLAPACK is unpacked into /home/camp/rpmbuild/scalapack/scalapack-1.7.4.

The configuration file scalapack-1.7.4/SLmake.inc is edited following the OpenMPI ScaLAPACK FAQ. For more details see this FAQ and the SLmake.inc file.
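
A minimal sketch of the kind of settings involved is given below; the variable names follow the stock SLmake.inc, but the values (in particular the BLACS library location and names) are assumptions that must be checked against the actual BLACS build:

# SLmake.inc (illustrative excerpt)
home       = /home/camp/rpmbuild/scalapack/scalapack-1.7.4
PLAT       = LINUX
F77        = mpif77
CC         = mpicc
BLACSdir   = /home/camp/rpmbuild/blacs/BLACS/LIB
BLACSFINIT = $(BLACSdir)/blacsF77init_MPI-$(PLAT)-0.a
BLACSCINIT = $(BLACSdir)/blacsCinit_MPI-$(PLAT)-0.a
BLACSLIB   = $(BLACSdir)/blacs_MPI-$(PLAT)-0.a
BLASLIB    = /opt/acml3.5.0/pathscale64/lib/libacml.a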

Building the parallel SIESTA

Finally the parallel siesta2 is built using the spec file siesta2-openmpi.spec. The tar file for siesta2 is in /home/camp/rpmbuild/SOURCES/siesta-2.0.tgz. The configuration file for this build is in /home/camp/rpmbuild/SOURCES/arch.make.
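
For orientation, an MPI-enabled SIESTA 2.0 arch.make typically contains settings along the following lines; this is only a sketch, the library entries are assumptions, and the authoritative file is the arch.make mentioned above:

# arch.make (illustrative excerpt)
FC            = mpif90
FFLAGS        = -O2
MPI_INTERFACE = libmpi_f90.a
MPI_INCLUDE   = .
DEFS_MPI      = -DMPI
LIBS          = <scalapack library> <blacs libraries> \
                /opt/acml3.5.0/pathscale64/lib/libacml.a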
