This page describes the steps necessary for installing a dl160 server.
On the server
Install external packages
As root:
create yum repository definitions (do not enable them):
# atrpms
echo '[atrpms]' > /etc/yum.repos.d/atrpms.repo
echo 'name=CentOS $releasever - $basearch - ATrpms' >> /etc/yum.repos.d/atrpms.repo
echo 'baseurl=http://dl.atrpms.net/el$releasever-$basearch/atrpms/stable' >> /etc/yum.repos.d/atrpms.repo
echo '#baseurl=http://mirrors.ircam.fr/pub/atrpms/el$releasever-$basearch/atrpms/stable' >> /etc/yum.repos.d/atrpms.repo
echo 'gpgkey=http://ATrpms.net/RPM-GPG-KEY.atrpms' >> /etc/yum.repos.d/atrpms.repo
echo 'gpgcheck=1' >> /etc/yum.repos.d/atrpms.repo
echo 'enabled=0' >> /etc/yum.repos.d/atrpms.repo
# epel
echo '[epel]' > /etc/yum.repos.d/epel.repo
echo 'name=CentOS $releasever - $basearch - EPEL' >> /etc/yum.repos.d/epel.repo
echo 'baseurl=http://download.fedora.redhat.com/pub/epel/$releasever/$basearch' >> /etc/yum.repos.d/epel.repo
echo 'gpgkey=http://download.fedora.redhat.com/pub/epel/RPM-GPG-KEY-EPEL' >> /etc/yum.repos.d/epel.repo
echo 'gpgcheck=1' >> /etc/yum.repos.d/epel.repo
echo 'enabled=0' >> /etc/yum.repos.d/epel.repo
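The repeated `echo` appends can equivalently be written as a single here-document, which is easier to read and keeps the file contents together. A minimal sketch, writing to a temporary file instead of /etc so it can be tried safely (on the server the target would be /etc/yum.repos.d/atrpms.repo):

```shell
# Sketch: create the atrpms repo definition with one heredoc instead of
# repeated echo >> appends. The temp-file target is for illustration only.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[atrpms]
name=CentOS $releasever - $basearch - ATrpms
baseurl=http://dl.atrpms.net/el$releasever-$basearch/atrpms/stable
gpgkey=http://ATrpms.net/RPM-GPG-KEY.atrpms
gpgcheck=1
enabled=0
EOF
grep -c '=' "$repo"   # quick sanity check: five key=value lines
rm -f "$repo"
```

The quoted `'EOF'` delimiter keeps `$releasever` and `$basearch` literal, exactly as the single-quoted `echo` commands do.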
install, as root:
yum install yum-utils  # /var directories must be created
yum search --enablerepo=atrpms arpack-devel
yum search --enablerepo=epel jmol
configure rpmbuild:
use the following ~rpmbuild/.rpmmacros:
%disttag el5.fys
%packager rpmbuild@fysik.dtu.dk
%distribution Fysik RPMS
%vendor Fysik RPMS <rpm@fysik.dtu.dk>
%_signature gpg
%_gpg_path ~/.gnupg
%_gpg_name Fysik RPMS
#%_topdir /home/camp/rpmbuild/AMD-Opteron
%_topdir /home/camp/rpmbuild/Intel-Nehalem
%_rpmdir %{_topdir}/RPMS
%_srcrpmdir %{_topdir}/SRPMS
%_svndir /home/camp/rpmbuild/rpmbuild
%_specdir %{_svndir}/SPECS
%_sourcedir %{_svndir}/SOURCES
%_rpmfilename %%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm
#%_tmppath %{_topdir}
%_tmppath /tmp/rpmbuild
%_builddir %{_tmppath}/BUILD
%niflheim 1
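To double-check which macro values are active, you can query rpm directly (`rpm --eval '%_topdir'`) or pull values out of the macros file. A minimal, self-contained sketch of the latter, using a temporary stand-in for ~rpmbuild/.rpmmacros:

```shell
# Sketch: extract the value of a macro (here %_topdir) from a .rpmmacros-style
# file. A temp copy is used so the example runs anywhere; point awk at the
# real ~rpmbuild/.rpmmacros on the server.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
%_topdir /home/camp/rpmbuild/Intel-Nehalem
%_tmppath /tmp/rpmbuild
EOF
awk '$1 == "%_topdir" { print $2 }' "$tmp"
rm -f "$tmp"
```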
as rpmbuild, create the directories:
mkdir -p ~/Intel-Nehalem/RPMS
mkdir -p ~/Intel-Nehalem/SRPMS
mkdir -p ~/Intel-Nehalem/BUILD
mkdir -p ~/Intel-Nehalem/SPECS    # needed only by openmpi
mkdir -p ~/Intel-Nehalem/SOURCES  # needed only by openmpi
mkdir -p /tmp/rpmbuild/BUILD
install official packages, as rpmbuild:
cd ~/Intel-Nehalem/RPMS
yumdownloader --resolve gcc-gfortran gcc43-c++ gcc43-gfortran blas-devel lapack-devel python-devel
yumdownloader --resolve gnuplot libXi-devel xorg-x11-fonts-100dpi pexpect tetex-latex tkinter qt-devel
yumdownloader --resolve openmpi openmpi-devel openmpi-libs compat-dapl libibverbs librdmacm openib
yum localinstall *  # as root
install atrpms packages, as rpmbuild (as of 16 Apr 2009, vtk-python is unavailable):
cd ~/Intel-Nehalem/RPMS
yumdownloader --resolve --enablerepo=atrpms vtk-python arpack-devel graphviz
wget http://ATrpms.net/RPM-GPG-KEY.atrpms
rpm --import RPM-GPG-KEY.atrpms  # as root
yum localinstall *  # as root
install the packages from epel, as rpmbuild:
cd ~/Intel-Nehalem/RPMS
yumdownloader --resolve --enablerepo=epel jmol
yumdownloader --resolve --enablerepo=epel environment-modules suitesparse-devel
wget http://download.fedora.redhat.com/pub/epel/RPM-GPG-KEY-EPEL
rpm --import RPM-GPG-KEY-EPEL  # as root
yum localinstall *  # as root
source /etc/profile.d/modules.sh
remove the default openmpi:
yum remove openmpi openmpi-libs
edit /etc/yum.conf so it contains:
exclude=netcdf-* netcdf3-* fftw-* fftw2-* fftw3-* python-numeric openmpi-*
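The exclude= patterns are shell-style globs matched against package names. A small self-contained sketch illustrating which names the line above would exclude (the `matches_exclude` helper is hypothetical, for illustration only):

```shell
# Sketch: yum's exclude= entries are globs. This hypothetical helper mirrors
# the exclude line above so you can see what it matches.
matches_exclude() {
    case "$1" in
        netcdf-*|netcdf3-*|fftw-*|fftw2-*|fftw3-*|python-numeric|openmpi-*)
            echo yes ;;
        *)
            echo no ;;
    esac
}
matches_exclude openmpi-libs     # yes: the custom openmpi replaces it
matches_exclude python-numeric   # yes: exact name match
matches_exclude gcc-gfortran     # no: not excluded
```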
It's time to build the custom RPMs.
As rpmbuild:
cd ~/rpmbuild/SPECS
Build a custom openmpi with torque support:
export rpmtopdir=${HOME}/Intel-Nehalem  # set this to the _topdir value from ~/.rpmmacros
wget http://www.open-mpi.org/software/ompi/v1.3/downloads/openmpi-1.3.2.tar.bz2 \
  -O ~/rpmbuild/SOURCES/openmpi-1.3.2.tar.bz2
sh ./buildrpm-1.3.2-1.gfortran.sh ../SOURCES/openmpi-1.3.2.tar.bz2 2>&1 | tee buildrpm-1.3.2-1.gfortran.sh.log.Intel-Nehalem
sh ./buildrpm-1.3.2-1.gfortran43.sh ../SOURCES/openmpi-1.3.2.tar.bz2 2>&1 | tee buildrpm-1.3.2-1.gfortran43.sh.log.Intel-Nehalem
sh ./buildrpm-1.3.2-1.pathscale.sh ../SOURCES/openmpi-1.3.2.tar.bz2 2>&1 | tee buildrpm-1.3.2-1.pathscale.sh.log.Intel-Nehalem
rpm -ivh ~/RPMS/*/openmpi-*.rpm
If you need scripts that contain ALL build/install/uninstall commands (global_install.sh and global_uninstall.sh), run the following each time an RPM is built successfully:
grep -v "#\!" install.sh >> ~/Intel-Nehalem/global_install.sh
cat uninstall.sh ~/Intel-Nehalem/global_uninstall.sh | grep -v "#\!" >> ~/Intel-Nehalem/global_uninstall.sh.tmp && \
  mv -f ~/Intel-Nehalem/global_uninstall.sh.tmp ~/Intel-Nehalem/global_uninstall.sh
# ignore the "cat: .../global_uninstall.sh: No such ..." error when running for the first time
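The `grep -v "#\!"` filters out shebang lines so the concatenated global script ends up with only one. A self-contained illustration of that step, using temporary files in place of the real install.sh and global_install.sh:

```shell
# Sketch: append an install script to a global script while stripping its
# shebang, as the grep -v "#\!" invocation above does. File names here are
# temp stand-ins for install.sh / global_install.sh.
inst=$(mktemp)
global=$(mktemp)
printf '#!/bin/sh\nrpm -ivh foo.rpm\n' > "$inst"
grep -v "#\!" "$inst" >> "$global"
cat "$global"   # only the rpm command remains; the shebang was dropped
rm -f "$inst" "$global"
```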
Note that global_uninstall.sh will not remove the built RPM files; it only uninstalls the packages.
Build the following for dacapo:
set the disttag variable for convenience:
export disttag="el5.fys"
install the icc/ifort compilers,
acml:
rpmbuild -bb --with compiler=gfortran --with version1=0 --with version2=1 --with modules --with default_version acml.spec
rpmbuild -bb --with compiler=pathscale --with version1=0 --with version2=1 --with modules --with default_version acml.spec
rpmbuild -bb --with compiler=gfortran43 --with version1=1 --with version2=0 --with modules --with default_version acml.spec
rpmbuild -bb --with compiler=pathscale --with version1=1 --with version2=0 --with modules --with default_version acml.spec
rpmbuild -bb --with compiler=gfortran43 --with version1=2 --with version2=0 --with modules --with default_version acml.spec
rpmbuild -bb --with compiler=pathscale --with version1=2 --with version2=0 --with modules --with default_version acml.spec
rpmbuild -bb --with compiler=ifort --with version1=2 --with version2=0 --with modules --with default_version acml.spec
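The seven rpmbuild invocations above differ only in the compiler and version flags, so they can be driven by a loop. A sketch; `echo` is used so it runs without rpmbuild installed (drop the `echo` for real builds):

```shell
# Sketch: drive the repeated acml rpmbuild commands from a list of
# "compiler version1 version2" triples taken from the commands above.
for combo in \
    "gfortran 0 1" "pathscale 0 1" \
    "gfortran43 1 0" "pathscale 1 0" \
    "gfortran43 2 0" "pathscale 2 0" "ifort 2 0"
do
    set -- $combo   # split the triple into $1 $2 $3
    echo rpmbuild -bb --with compiler=$1 --with version1=$2 --with version2=$3 \
        --with modules --with default_version acml.spec
done
```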
goto:
rpmbuild -bb --with compiler=gfortran --with modules=1 --with default_version=1 \
  --with prefix=/opt/goto/1.26/1.${disttag}.gfortran.smp goto.spec
rpmbuild -bb --with compiler=gfortran43 --with modules=1 --with default_version=1 \
  --with prefix=/opt/goto/1.26/1.${disttag}.gfortran43.smp goto.spec
rpmbuild -bb --with compiler=pathscale --with compiler_bindir=/opt/pathscale/bin --with compiler_libdir=/opt/pathscale/lib/3.2 \
  --with modules=1 --with default_version=1 --with prefix=/opt/goto/1.26/1.${disttag}.pathscale.smp goto.spec

Note: version 1.26 fails on Nehalem with:
../../../param.h:1195:21: error: division by zero in #if
campos-dacapo-pseudopotentials:
rpmbuild -bb --with modules --with default_version campos-dacapo-pseudopotentials.spec
RasMol:
rpmbuild -bb --with modules --with default_version --with prefix=/opt/RasMol/2.7.3/3.${disttag} RasMol.spec
cblas:
python campos_installer.py --machine='dulak-cluster' --create_scripts cblas
cp ~/RPMS/*/cblas-*.rpm /home/dulak-server/rpm/campos
numpy:
rpmbuild -bb --with cblas_prefix=/opt/acml/4.0.1/gfortran64/lib \
  --with blas=acml --with blas_version=4.0.1 --with blasdir=/opt/acml/4.0.1/gfortran64/lib \
  --with lapack=acml --with lapackdir=/opt/acml/4.0.1/gfortran64/lib --with modules=1 --with default_version=1 \
  --with prefix=/opt/numpy/1.3.0/1.${disttag}.gfortran.python2.4.acml.4.0.1.acml numpy.spec
rpmbuild -bb --with cblas_prefix=/usr/lib64 --with blas_version=3.0-37.el5 \
  --with modules=1 --with default_version=1 \
  --with prefix=/opt/numpy/1.3.0/1.${disttag}.gfortran.python2.4.blas.3.0-37.el5.lapack numpy.spec
gnuplot-py:
rpmbuild -bb --with modules --with default_version --with prefix=/opt/gnuplot-py/1.8.1/1.${disttag}.python2.4 gnuplot-py.spec
module load gnuplot-py
python-numeric (we must install 24.2 version, and we keep the default version):
cd
rpm -e --nodeps python-numeric
yumdownloader --resolve --disableexcludes=main python-numeric
cp python-numeric-*.rpm /home/dulak-server/rpm/external  # skip this step if not installing on "dulak-server"
cd ~/rpmbuild/SPECS
python campos_installer.py --machine='dulak-cluster' --create_scripts python-numeric
cp ~/RPMS/*/python-numeric-*.rpm /home/dulak-server/rpm/campos
Note (16 Apr 2009): Numeric's test.py currently fails with the following error, which we ignore:
glibc detected *** python: free(): invalid next size (normal): 0x09aee970 ***
If you use modules:
module load python-numeric
echo "module load python-numeric" >> ~/global_install.sh
otherwise, log out and log in again!
After installing python-numeric, run a very rough check:
python -c "import lapack_lite"
ldd `rpm -ql python-numeric | grep lapack_lite.so`
ldd `rpm -ql python-numeric | grep _dotblas.so`
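The `ldd` calls above are there to confirm the extension modules resolve all their shared-library dependencies. A small sketch of the same idea as a reusable check; the `check_linked` helper is hypothetical, and /bin/sh stands in for the lapack_lite.so path produced by the rpm query:

```shell
# Sketch: a hypothetical helper that flags unresolved shared libraries.
# ldd prints "not found" for any dependency it cannot resolve.
check_linked() {
    if ldd "$1" 2>/dev/null | grep -q 'not found'; then
        echo "MISSING libs in $1"
        return 1
    fi
    echo "OK"
}
check_linked /bin/sh   # stand-in target; use the .so paths from rpm -ql above
```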
and reinstall the default version:
rpm -ivh --oldpackage ~/python-numeric-*.rpm
ScientificPython:
python campos_installer.py --machine='dulak-cluster' --create_scripts ScientificPython
cp ~/RPMS/*/ScientificPython-*.rpm /home/dulak-server/rpm/campos
campos-ase2:
python campos_installer.py --machine='dulak-cluster' --create_scripts campos-ase2
cp ~/RPMS/*/campos-ase2-*.rpm /home/dulak-server/rpm/campos
campos-dacapo-python:
python campos_installer.py --machine='dulak-cluster' --create_scripts campos-dacapo-python
campos-dacapo:
python campos_installer.py --machine='dulak-cluster' --create_scripts --compiler=gfortran43 campos-dacapo
cp ~/RPMS/*/campos-dacapo-*.rpm /home/dulak-server/rpm/campos
Log out and log in again!
Build the following for gpaw:
campos-gpaw-setups:
python campos_installer.py --machine='dulak-cluster' --create_scripts campos-gpaw-setups
campos-ase3:
python campos_installer.py --machine='dulak-cluster' --create_scripts campos-ase3
cp ~/RPMS/*/campos-ase3-*.rpm /home/dulak-server/rpm/campos
campos-gpaw:
python campos_installer.py --machine='dulak-cluster' --create_scripts --compiler=gfortran43 campos-gpaw
cp ~/RPMS/*/campos-gpaw-*.rpm /home/dulak-server/rpm/campos
Log out and log in again!
Testing packages
Test the dacapo installation (as a normal user!).
If you use modules:
module load openmpi
module load campos-dacapo-pseudopotentials
module load python-numeric
module load campos-dacapo-python
module load ScientificPython
module load gnuplot-py
module load RasMol
module load campos-ase2
module load campos-dacapo
ulimit -s 65000  # dacapo needs a large stack
Test with (make sure that /scratch/$USER exists):
cp -r `rpm -ql campos-dacapo-python | grep "share/campos-dacapo-python$"` /tmp
cd /tmp/campos-dacapo-python/Tests
python test.py 2>&1 | tee test.log
It can take up to a day. Please consider disabling these "long" tests in test.py:
tests.remove('../Examples/Wannier-ethylene.py')
tests.remove('../Examples/Wannier-Pt4.py')
tests.remove('../Examples/Wannier-Ptwire.py')
tests.remove('../Examples/Wannier-Fe-bcc.py')
tests.remove('../Examples/transport_1dmodel.py')
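Rather than editing test.py by hand, the long tests can be commented out from the command line. A sketch operating on a temporary stand-in file (the `tests.append(...)` contents are illustrative, not the real test.py):

```shell
# Sketch: comment out a "long" test in a copy of test.py with sed.
# The temp file is a stand-in; point sed at the real test.py on the server.
f=$(mktemp)
cat > "$f" <<'EOF'
tests.append('../Examples/Wannier-ethylene.py')
tests.append('../Examples/CO-Wannier.py')
EOF
sed -i "s|^\(.*Wannier-ethylene.*\)$|# \1|" "$f"
grep -c '^#' "$f"   # one line is now commented out
rm -f "$f"
```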
Note that all vtk-related tests will fail.
Test the gpaw installation (as a normal user!):
If you use modules:
module load openmpi
module load campos-ase3
module load campos-gpaw-setups
module load campos-gpaw
Test with:
cp -r `rpm -ql campos-gpaw | grep "share/campos-gpaw/test$"` /tmp/test.gpaw.$$
cd /tmp/test.gpaw.*
python test.py 2>&1 | tee test.log
It takes about 20 minutes.
On "Golden Client"
Login, as root, to the "Golden Client":
ssh n001
Enable nfs mount of the server home directory - follow 'Enable nfs mount on the "Golden Client"' from configuring NFS. After this do:
cd /home/dulak-server/rpm/campos
rpm -ivh campos-dacapo-2*
If you get:
package example_package.el5.i386 is already installed
remove these packages with:
rpm -e --nodeps example_package
to allow the installation to proceed.
Make sure that both python-numeric versions are installed:
rpm -q python-numeric
The rpm -ivh command above will list any packages still needed to satisfy dacapo's dependencies. All of these packages should already be under /home/dulak-server/rpm. Remember to test the dacapo and gpaw installations on the "Golden Client" as well.
If you are installing a workstation only, your setup is ready for testing: go to benchmarking and maintenance.
If you are building a cluster, go back to installing and configuring systemimager.