Differences between revisions 1 and 8 (spanning 7 versions)
Revision 1 as of 2009-06-06 12:05:58
Size: 11167
Editor: MarcinDulak
Comment:
Revision 8 as of 2009-06-06 12:37:22
Size: 12267
Editor: MarcinDulak
Comment:
This page describes the necessary steps for installing dl160.

On the server

Install external packages

As root:

  • create yum repository definitions (do not enable them):

    # atrpms
    echo '[atrpms]' > /etc/yum.repos.d/atrpms.repo
    echo 'name=CentOS $releasever - $basearch - ATrpms' >> /etc/yum.repos.d/atrpms.repo
    echo 'baseurl=http://dl.atrpms.net/el$releasever-$basearch/atrpms/stable' >> /etc/yum.repos.d/atrpms.repo
    echo '#baseurl=http://mirrors.ircam.fr/pub/atrpms/el$releasever-$basearch/atrpms/stable' >> /etc/yum.repos.d/atrpms.repo
    echo 'gpgkey=http://ATrpms.net/RPM-GPG-KEY.atrpms' >> /etc/yum.repos.d/atrpms.repo
    echo 'gpgcheck=1' >> /etc/yum.repos.d/atrpms.repo
    echo 'enabled=0' >> /etc/yum.repos.d/atrpms.repo
    # epel
    echo '[epel]' > /etc/yum.repos.d/epel.repo
    echo 'name=CentOS $releasever - $basearch - EPEL' >> /etc/yum.repos.d/epel.repo
    echo 'baseurl=http://download.fedora.redhat.com/pub/epel/$releasever/$basearch' >> /etc/yum.repos.d/epel.repo
    echo 'gpgkey=http://download.fedora.redhat.com/pub/epel/RPM-GPG-KEY-EPEL' >> /etc/yum.repos.d/epel.repo
    echo 'gpgcheck=1' >> /etc/yum.repos.d/epel.repo
    echo 'enabled=0' >> /etc/yum.repos.d/epel.repo
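For reference, the same repo definition can be written as a single heredoc instead of repeated echo lines; a minimal sketch, writing to a scratch directory (REPO_DIR is a stand-in for /etc/yum.repos.d):

```shell
# Write the atrpms repo file in one heredoc; quoting 'EOF' keeps
# $releasever/$basearch literal so yum can expand them later.
REPO_DIR=${REPO_DIR:-/tmp/yum.repos.d}   # stand-in for /etc/yum.repos.d
mkdir -p "$REPO_DIR"
cat > "$REPO_DIR/atrpms.repo" <<'EOF'
[atrpms]
name=CentOS $releasever - $basearch - ATrpms
baseurl=http://dl.atrpms.net/el$releasever-$basearch/atrpms/stable
gpgkey=http://ATrpms.net/RPM-GPG-KEY.atrpms
gpgcheck=1
enabled=0
EOF
```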
  • install, as root:

    yum install yum-utils
    # the searches below make yum create its cache directories under /var
    yum search --enablerepo=atrpms arpack-devel
    yum search --enablerepo=epel jmol
  • configure rpmbuild:

    • use the following ~rpmbuild/.rpmmacros:

      %disttag        el5.fys
      
      %packager       rpmbuild@fysik.dtu.dk
      %distribution   Fysik RPMS
      %vendor         Fysik RPMS <rpm@fysik.dtu.dk>
      
      %_signature     gpg
      %_gpg_path      ~/.gnupg
      %_gpg_name      Fysik RPMS
      
      #%_topdir       /home/camp/rpmbuild/AMD-Opteron
      %_topdir        /home/camp/rpmbuild/Intel-Nehalem
      %_rpmdir        %{_topdir}/RPMS
      %_srcrpmdir     %{_topdir}/SRPMS
      %_svndir        /home/camp/rpmbuild/rpmbuild
      %_specdir       %{_svndir}/SPECS
      %_sourcedir     %{_svndir}/SOURCES
      %_rpmfilename   %%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm
      #%_tmppath      %{_topdir}
      %_tmppath       /tmp/rpmbuild
      %_builddir      %{_tmppath}/BUILD
      
      %niflheim       1
    • as rpmbuild create directories:

      mkdir -p ~/Intel-Nehalem/RPMS
      mkdir -p ~/Intel-Nehalem/SRPMS
      mkdir -p ~/Intel-Nehalem/BUILD
      mkdir -p ~/Intel-Nehalem/SPECS # needed only by openmpi
      mkdir -p ~/Intel-Nehalem/SOURCES # needed only by openmpi
      mkdir -p /tmp/rpmbuild/BUILD
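The same tree can be created with a loop; a sketch using a TOPDIR variable (set it to the _topdir value from ~/.rpmmacros, e.g. ~/Intel-Nehalem; a scratch default is used here):

```shell
# Create the rpmbuild directory tree in one loop.
# SPECS and SOURCES are only needed by openmpi.
TOPDIR=${TOPDIR:-/tmp/Intel-Nehalem}
for d in RPMS SRPMS BUILD SPECS SOURCES; do
    mkdir -p "$TOPDIR/$d"
done
mkdir -p /tmp/rpmbuild/BUILD
```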
  • install official packages, as rpmbuild:

    cd ~/Intel-Nehalem/RPMS
    yumdownloader --resolve gcc-gfortran gcc43-c++ gcc43-gfortran blas-devel lapack-devel python-devel
    yumdownloader --resolve gnuplot libXi-devel xorg-x11-fonts-100dpi pexpect tetex-latex tkinter qt-devel
    yumdownloader --resolve openmpi openmpi-devel openmpi-libs compat-dapl libibverbs librdmacm openib
    yum localinstall * # as root
  • install atrpms packages, as rpmbuild (vtk-python is currently unavailable 16 Apr 2009):

    cd ~/Intel-Nehalem/RPMS
    yumdownloader --resolve --enablerepo=atrpms vtk-python arpack-devel graphviz
    wget http://ATrpms.net/RPM-GPG-KEY.atrpms
    rpm --import RPM-GPG-KEY.atrpms # as root
    yum localinstall * # as root
  • install the packages from epel, as rpmbuild:

    cd ~/Intel-Nehalem/RPMS
    yumdownloader --resolve --enablerepo=epel jmol
    yumdownloader --resolve --enablerepo=epel environment-modules suitesparse-devel
    wget http://download.fedora.redhat.com/pub/epel/RPM-GPG-KEY-EPEL
    rpm --import RPM-GPG-KEY-EPEL # as root
    yum localinstall * # as root
    source /etc/profile.d/modules.sh
  • remove default openmpi:

    yum remove openmpi openmpi-libs
  • edit /etc/yum.conf so it contains:

    exclude=netcdf-* netcdf3-* fftw-* fftw2-* fftw3-* python-numeric openmpi-*
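The edit can be made idempotent so repeated runs do not duplicate the line; a sketch on a scratch copy (YUM_CONF stands in for /etc/yum.conf):

```shell
# Append the exclude line only when it is not already present.
YUM_CONF=${YUM_CONF:-/tmp/yum.conf}      # stand-in for /etc/yum.conf
touch "$YUM_CONF"
EXCLUDE='exclude=netcdf-* netcdf3-* fftw-* fftw2-* fftw3-* python-numeric openmpi-*'
grep -qxF "$EXCLUDE" "$YUM_CONF" || echo "$EXCLUDE" >> "$YUM_CONF"
grep -qxF "$EXCLUDE" "$YUM_CONF" || echo "$EXCLUDE" >> "$YUM_CONF"  # second run is a no-op
```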

It's time to build custom RPMS

As rpmbuild:

cd ~/rpmbuild/SPECS

Build a custom openmpi, using torque support:

export rpmtopdir=${HOME}/Intel-Nehalem # set this to _topdir value from ~/.rpmmacros
wget http://www.open-mpi.org/software/ompi/v1.3/downloads/openmpi-1.3.2.tar.bz2 \
     -O ~/rpmbuild/SOURCES/openmpi-1.3.2.tar.bz2
sh ./buildrpm-1.3.2-1.gfortran.sh ../SOURCES/openmpi-1.3.2.tar.bz2 2>&1 | tee buildrpm-1.3.2-1.gfortran.sh.log.Intel-Nehalem
sh ./buildrpm-1.3.2-1.gfortran43.sh ../SOURCES/openmpi-1.3.2.tar.bz2 2>&1 | tee buildrpm-1.3.2-1.gfortran43.sh.log.Intel-Nehalem
sh ./buildrpm-1.3.2-1.pathscale.sh ../SOURCES/openmpi-1.3.2.tar.bz2 2>&1 | tee buildrpm-1.3.2-1.pathscale.sh.log.Intel-Nehalem
rpm -ivh ~/RPMS/*/openmpi-*.rpm

If scripts containing ALL build/install/uninstall commands (global_install.sh and global_uninstall.sh) are to be created, run the following after each successful RPM build:

grep -v "#\!" install.sh >> ~/Intel-Nehalem/global_install.sh
cat uninstall.sh ~/Intel-Nehalem/global_uninstall.sh | grep -v "#\!" >> ~/Intel-Nehalem/global_uninstall.sh.tmp && mv -f ~/Intel-Nehalem/global_uninstall.sh.tmp ~/Intel-Nehalem/global_uninstall.sh
# ignore "cat: .../global_uninstall.sh: No such ..." error when running first time
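The effect of the prepend trick (each new uninstall lands on top, so packages are removed in reverse build order) can be seen with dummy install/uninstall scripts; a self-contained sketch in a scratch directory:

```shell
# Demonstrate the aggregation idiom with two fake packages.
WORK=/tmp/global-scripts-demo
mkdir -p "$WORK" && cd "$WORK"
rm -f global_install.sh global_uninstall.sh

for pkg in pkgA pkgB; do
    printf '#!/bin/sh\nrpm -ivh %s\n' "$pkg" > install.sh
    printf '#!/bin/sh\nrpm -e %s\n'  "$pkg" > uninstall.sh
    # installs are appended ...
    grep -v "#\!" install.sh >> global_install.sh
    # ... uninstalls are prepended (cat's missing-file error is harmless on the first run)
    cat uninstall.sh global_uninstall.sh 2>/dev/null | grep -v "#\!" \
        > global_uninstall.sh.tmp && mv -f global_uninstall.sh.tmp global_uninstall.sh
done
head -n 1 global_uninstall.sh   # pkgB, the last package built, is uninstalled first
```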

Note that global_uninstall.sh won't remove the built RPM files; it will only uninstall the packages.

Build the following for dacapo:

  • campos-dacapo-pseudopotentials:

    python campos_installer.py --machine='dulak-cluster' --create_scripts campos-dacapo-pseudopotentials
  • rasmol:

    python campos_installer.py --machine='dulak-cluster' --create_scripts RasMol
    cp ~/RPMS/*/RasMol-*.rpm /home/dulak-server/rpm/campos
  • gnuplot-py:

    python campos_installer.py --machine='dulak-cluster' --create_scripts gnuplot-py
    cp ~/RPMS/*/gnuplot-py-*.rpm /home/dulak-server/rpm/campos

    if you use modules:

    module load gnuplot-py
    echo "module load gnuplot-py" >> ~/global_install.sh

    otherwise logout and login again!

  • cblas:

    python campos_installer.py --machine='dulak-cluster' --create_scripts cblas
    cp ~/RPMS/*/cblas-*.rpm /home/dulak-server/rpm/campos
  • python-numeric (we must install version 24.2, while keeping the default version):

    cd
    rpm -e --nodeps python-numeric
    yumdownloader --resolve --disableexcludes=main python-numeric
    cp python-numeric-*.rpm /home/dulak-server/rpm/external # **Skip this step if not installing on "dulak-server"**
    cd ~/rpmbuild/SPECS
    python campos_installer.py --machine='dulak-cluster' --create_scripts python-numeric
    cp ~/RPMS/*/python-numeric-*.rpm /home/dulak-server/rpm/campos

    Note (16 Apr 2009): currently Numeric's test.py fails with the following error, which we ignore:

    glibc detected *** python: free(): invalid next size (normal): 0x09aee970 ***

    If you use modules:

    module load python-numeric
    echo "module load python-numeric" >> ~/global_install.sh

    otherwise logout and login again!

    After installing python-numeric make a very rough check:

    python -c "import lapack_lite"
    ldd `rpm -ql python-numeric | grep lapack_lite.so`
    ldd `rpm -ql python-numeric | grep _dotblas.so`

    and reinstall the default version:

    rpm -ivh --oldpackage ~/python-numeric-*.rpm
  • ScientificPython:

    python campos_installer.py --machine='dulak-cluster' --create_scripts ScientificPython
    cp ~/RPMS/*/ScientificPython-*.rpm /home/dulak-server/rpm/campos
  • campos-ase2:

    python campos_installer.py --machine='dulak-cluster' --create_scripts campos-ase2
    cp ~/RPMS/*/campos-ase2-*.rpm /home/dulak-server/rpm/campos
  • campos-dacapo-python:

    python campos_installer.py --machine='dulak-cluster' --create_scripts campos-dacapo-python
  • campos-dacapo:

    python campos_installer.py --machine='dulak-cluster' --create_scripts --compiler=gfortran43 campos-dacapo
    cp ~/RPMS/*/campos-dacapo-*.rpm /home/dulak-server/rpm/campos

    logout and login again!

Build the following for gpaw:

  • campos-gpaw-setups:

    python campos_installer.py --machine='dulak-cluster' --create_scripts campos-gpaw-setups
  • campos-ase3:

    python campos_installer.py --machine='dulak-cluster' --create_scripts campos-ase3
    cp ~/RPMS/*/campos-ase3-*.rpm /home/dulak-server/rpm/campos
  • campos-gpaw:

    python campos_installer.py --machine='dulak-cluster' --create_scripts --compiler=gfortran43 campos-gpaw
    cp ~/RPMS/*/campos-gpaw-*.rpm /home/dulak-server/rpm/campos

    logout and login again!

Testing packages

Test the dacapo installation (as a normal user!).

If you use modules:

module load openmpi
module load campos-dacapo-pseudopotentials
module load python-numeric
module load campos-dacapo-python
module load ScientificPython
module load gnuplot-py
module load RasMol
module load campos-ase2
module load campos-dacapo
ulimit -s 65000 # dacapo needs a large stack
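The module loads can be wrapped in a loop; a sketch where `module` is stubbed with a shell function so it runs anywhere (on the cluster the real environment-modules function is used):

```shell
# Load the dacapo test environment in one loop.
# `module` is stubbed with an echo so this sketch is self-contained;
# on the cluster the real environment-modules function takes over.
if ! command -v module >/dev/null 2>&1; then
    module() { echo "stub: module $*"; }
fi
for m in openmpi campos-dacapo-pseudopotentials python-numeric \
         campos-dacapo-python ScientificPython gnuplot-py RasMol \
         campos-ase2 campos-dacapo; do
    module load "$m"
done
ulimit -s 65000 2>/dev/null || true   # dacapo needs a large stack
```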

Test with (make sure that /scratch/$USER exists):

cp -r `rpm -ql campos-dacapo-python | grep "share/campos-dacapo-python$"` /tmp
cd /tmp/campos-dacapo-python/Tests
python test.py 2>&1 | tee test.log

It can take up to 1 day, so please consider disabling these "long" tests by adding the following lines to test.py:

tests.remove('../Examples/Wannier-ethylene.py')
tests.remove('../Examples/Wannier-Pt4.py')
tests.remove('../Examples/Wannier-Ptwire.py')
tests.remove('../Examples/Wannier-Fe-bcc.py')
tests.remove('../Examples/transport_1dmodel.py')

Note that all vtk-related tests will fail.
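Once test.log exists, a quick failure scan saves reading the whole file; a sketch using a synthetic log (point LOG at the real test.log):

```shell
# Count FAILED/ERROR lines in a test log; a synthetic log is
# generated here so the sketch is self-contained.
LOG=${LOG:-/tmp/dacapo-test.log}
printf 'test_a ... ok\ntest_b ... FAILED\ntest_c ... ERROR\n' > "$LOG"
grep -Ec 'FAILED|ERROR' "$LOG"    # prints 2
```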

Test the gpaw installation (as a normal user!):

If you use modules:

module load openmpi
module load campos-ase3
module load campos-gpaw-setups
module load campos-gpaw

Test with:

cp -r `rpm -ql campos-gpaw | grep "share/campos-gpaw/test$"` /tmp/test.gpaw.$$
cd /tmp/test.gpaw.*
python test.py 2>&1 | tee test.log

It takes about 20 minutes.

On "Golden Client"

Login, as root, to the "Golden Client":

ssh n001

Enable the NFS mount of the server's home directory: follow 'Enable nfs mount on the "Golden Client"' from the NFS configuration page. After this, do:

cd /home/dulak-server/rpm/campos

rpm -ivh campos-dacapo-2*

If getting:

package example_package.el5.i386 is already installed

remove these packages with:

rpm -e --nodeps example_package

to allow the installation to proceed.
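The conflicting package names can be pulled out of the rpm error output mechanically; a sketch on a sample line (awk's second field is the full name-version-release.arch, which rpm -e also accepts):

```shell
# Extract already-installed package identifiers from rpm -ivh output.
# The input line is a sample; pipe real rpm output through the same awk.
printf 'package example_package-1.0-1.el5.i386 is already installed\n' \
  | awk '/is already installed/ {print $2}'
```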

Make sure that both python-numeric versions are installed:

rpm -q python-numeric

The rpm -ivh command above will show a list of packages that need to be installed to fulfill the dacapo dependencies. All these packages should already be under /home/dulak-server/rpm. Remember to test the dacapo and gpaw installations on the "Golden Client" too.

If you are installing a workstation only, your setup is ready for testing - go to benchmarking and maintenance.

If you are building a cluster, go back to installing and configuring systemimager.

Niflheim: Cluster_software_-_RPMS (last edited 2015-07-02 11:28:20 by OleHolmNielsen)