Cluster software

Cloning of nodes

The NIFLHEIM cluster uses the SystemImager toolkit on a central server to create an image of a Golden Client node that has been installed in the usual way from a distribution CD-ROM (CentOS Linux in our case). SystemImager is subsequently used to install identical images of the Golden Client on all of the nodes (changing, of course, the hostname and network parameters).
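
As a rough sketch of the workflow, the commands below show how an image might be pulled from the Golden Client and applied to a node; they assume SystemImager's si_* tools, and the host names, image name and exact option spellings are placeholders that may differ between SystemImager versions:

# on the Golden Client: prepare it so the image server can pull an image
si_prepareclient --server imageserver

# on the image server: retrieve the image from the Golden Client
si_getimage --golden-client goldenclient --image centos-node

# on an already-running node: synchronise it against the stored image
si_updateclient --server imageserver --image centos-node

New nodes are normally installed by network-booting them into SystemImager's autoinstall environment rather than by updating a running system.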

NetCDF (network Common Data Form)

NetCDF (network Common Data Form, http://www.unidata.ucar.edu/software/netcdf/) is an interface for array-oriented data access and a library that provides an implementation of the interface. The NetCDF library also defines a machine-independent format for representing scientific data.

NetCDF is installed on the cluster via the rpm netcdf-3.6.1-1.2.el4.fys. This build includes bindings for the PathScale Fortran compiler. A locally developed spec file for the NetCDF rpm is in:

~rpmbuild/SPECS/netcdf.spec
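
As a quick check that the library and the PathScale Fortran bindings are usable, a small test program can be compiled against it; the include and library paths below assume the rpm installs into the standard /usr locations, and the source file name is only an example:

# verify that the netCDF rpm is installed
rpm -q netcdf

# compile a Fortran test program with the PathScale compiler and link against netCDF
pathf90 test_netcdf.f90 -I/usr/include -L/usr/lib64 -lnetcdf -o test_netcdf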

OpenMPI (A High Performance Message Passing Library)

OpenMPI (http://www.open-mpi.org) is an open source implementation of the MPI message passing standard.

The version now installed is 1.1.1 and is built with support for the components below; a way to check the build is shown after the list:

* Torque (installation in /usr/local)
* GCC C compiler
* PathScale Fortran compiler
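
Whether the Torque (tm) support actually ended up in the build can be checked with ompi_info, which lists the compiled-in MCA components; grepping for tm is just one convenient way to spot them:

# list the MCA components and look for the Torque/TM ones
ompi_info | grep tm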

The version on the cluster is installed using the rpm openmpi-1.1.1-2. This rpm is built using the buildrpms.sh script from http://www.open-mpi.org/software/ompi/v1.1/srpm.php.

The Torque and PathScale support is configured by modifying the buildrpms.sh script. Change the following lines:

prefix="/usr/local"
configure_options="--with-tm=/usr/local FC=pathf90 F77=pathf90"

buildrpms.sh is used to build a single rpm, including man pages:

build_srpm=no
build_single=yes
build_multiple=no

The build of multiple rpms did not seem to work in version 1.1.1.

buildrpms.sh needs the rpm spec file; this can be copied from the unpacked OpenMPI tar file:

cp openmpi-1.1.1/contrib/dist/linux/openmpi.spec .

Now run:

./buildrpms.sh openmpi-<version>.tar.gz
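
Once the build finishes, the resulting rpm can be installed; the path below is only an example, since rpmbuild places the package under its configured topdir and the architecture directory may differ:

# install the freshly built OpenMPI package (example path)
rpm -ivh /usr/src/redhat/RPMS/x86_64/openmpi-1.1.1-2.x86_64.rpm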

Processor affinity

For some parallel libraries, such as BLACS and ScaLAPACK, which use OpenMPI as the communication layer, setting processor affinity seems to be important. Processor affinity means that a process is bound to a specific processor; see the OpenMPI FAQ for more details.

To enable processor affinity, use:

mpirun --mca mpi_paffinity_alone 1 -np 4 a.out

Without this option, all copies of the parallel program (Siesta in our case) would land on one CPU.
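
To see whether the processes really end up on separate processors, standard tools can show which CPU each MPI rank is running on; a.out matches the example above, and the PSR field is the processor the process was last scheduled on:

# show the processor (PSR) used by each running copy of a.out
ps -o pid,psr,comm -C a.out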
