Testing GPAW

Testing of GPAW is done by a nightly test suite consisting of many small, quick tests (run with pytest) and by a weekly set of larger tests.

Test suite with pytest

The test suite consists of a large number of small and quick tests found in the gpaw/test/ directory. The tests run nightly in serial and in parallel modes.

Running tests in serial mode

Use pytest to run the tests:

$ pytest --pyargs gpaw -v

To speed up the test suite, use pytest-xdist to run multiple tests at the same time in separate processes (note: each individual test is still run in serial mode):

$ pytest --pyargs gpaw -v -n <number-of-processes>

Please report errors to the gpaw-users mailing list so that we can fix them (see Mail List).

Running tests in parallel mode

In order to run the tests with MPI parallelization, do this:

$ mpiexec -n <number-of-processes> pytest --pyargs gpaw -v

The tests should pass with 1, 2, 4, and 8 parallel tasks.

Hint

If you observe issues (e.g. segmentation faults) when trying to run pytest, try this instead:

$ mpiexec -n <n> gpaw python -m pytest --pyargs gpaw -v

This should ensure that the correct environment is used.

Please also report parallel errors to the mailing list so that we can fix them (see Mail List).

Running a subset of tests

There are multiple options for running only a subset of the tests.

  1. Use markers to run only the tests with a given mark, for example the CI tests:

    $ pytest --pyargs gpaw -v -m ci
    
  2. Use a module path to run the tests in that module:

    $ pytest --pyargs gpaw.test.lcao -v
    
  3. Use a file or directory path to run the tests in that path:

    $ pytest /root/of/gpaw/git/clone/gpaw/test/lcao
    

Special fixtures and marks

Tests that should only run in serial can be marked like this:

import pytest

@pytest.mark.serial
def test_something():
    ...

There are three special GPAW fixtures:

gpaw.test.conftest.in_tmp_dir(request, tmp_path_factory)

Run test function in a temporary directory.

gpaw.test.conftest.add_cwd_to_setup_paths()

Temporarily add current working directory to setup_paths.

gpaw.test.conftest.gpw_files(request)

Reuse gpw-files.

Returns a dict mapping names to paths to gpw-files. The files are written to the pytest cache and can be cleared using pytest --cache-clear.

Example:

def test_something(gpw_files):
    calc = GPAW(gpw_files['h2_lcao'])
    ...

Possible systems are:

  • Bulk BCC-Li with 3x3x3 k-points: bcc_li_pw, bcc_li_fd, bcc_li_lcao.

  • O2 molecule: o2_pw.

  • H2 molecule: h2_pw, h2_fd, h2_lcao.

  • H2 molecule (not centered): h2_pw_0.

  • N2 molecule: n2_pw.

  • N molecule: n_pw.

  • Spin-polarized H atom: h_pw.

  • Polyethylene chain. One unit, 3 k-points, no symmetry: c2h4_pw_nosym. Three units: c6h12_pw.

  • Bulk BN (zincblende) with 2x2x2 k-points and 9 converged bands: bn_pw.

  • h-BN layer with 3x3x1 (gamma center) k-points and 26 converged bands: hbn_pw.

  • Graphene with 6x6x1 k-points: graphene_pw

  • I2Sb2 (Z2 topological insulator) with 6x6x1 k-points and no symmetries: i2sb2_pw_nosym

  • MoS2 with 6x6x1 k-points: mos2_pw and mos2_pw_nosym

  • MoS2 with 5x5x1 k-points: mos2_5x5_pw

  • NiCl2 with 6x6x1 k-points: nicl2_pw and nicl2_pw_evac

  • V2Br4 (AFM monolayer), LDA, 4x2x1 k-points, 28(+1) converged bands: v2br4_pw and v2br4_pw_nosym

  • Bulk Si, LDA, 2x2x2 k-points (gamma centered): si_pw

  • Bulk Si, LDA, 4x4x4 k-points, 8(+1) converged bands: fancy_si_pw and fancy_si_pw_nosym

  • Bulk SiC, LDA, 4x4x4 k-points, 8(+1) converged bands: sic_pw and sic_pw_spinpol

  • Bulk Fe, LDA, 4x4x4 k-points, 9(+1) converged bands: fe_pw and fe_pw_nosym

  • Bulk C, LDA, 2x2x2 k-points (gamma centered): c_pw

  • Bulk Co (HCP), 4x4x4 k-points, 12(+1) converged bands: co_pw and co_pw_nosym

  • Bulk SrVO3 (SC), 3x3x3 k-points, 20(+1) converged bands: srvo3_pw and srvo3_pw_nosym

  • Bulk Al, LDA, 4x4x4 k-points, 10(+1) converged bands: al_pw and al_pw_nosym

  • Bulk Al, LDA, 4x4x4 k-points, 4 converged bands: bse_al

  • Bulk Ag, LDA, 2x2x2 k-points, 6 converged bands, 2eV U on d-band: ag_pw

  • Bulk GaAs, LDA, 4x4x4 k-points, 8(+1) bands converged: gaas_pw and gaas_pw_nosym

  • Bulk P4, LDA, 4x4 k-points, 40 bands converged: p4_pw

  • Distorted bulk Fe, revTPSS: fe_pw_distorted

  • Distorted bulk Si, TPSS: si_pw_distorted

Files always include wave functions.

Check the conftest.py to see which gpw-files are available. Use a _wfs postfix to get a gpw-file that contains the wave functions.
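
The reuse behaviour of gpw_files can be pictured as a name-to-path factory that creates each file only on first request. Below is a standalone sketch of that idea (the names FileCache and make_h2 are made up for illustration; this is not GPAW's actual implementation):

```python
import os
import tempfile

class FileCache:
    """Sketch of the caching idea behind the gpw_files fixture:
    map names to files on disk, creating each file only once."""
    def __init__(self, directory):
        self.directory = directory
        self.recipes = {}  # name -> function that writes the file

    def register(self, name, make):
        self.recipes[name] = make

    def __getitem__(self, name):
        path = os.path.join(self.directory, name + '.gpw')
        if not os.path.exists(path):  # compute once, then reuse
            self.recipes[name](path)
        return path

cache = FileCache(tempfile.mkdtemp())
calls = []

def make_h2(path):
    calls.append(1)  # count how often the expensive step runs
    with open(path, 'w') as fd:
        fd.write('fake gpw contents')

cache.register('h2_lcao', make_h2)
path1 = cache['h2_lcao']
path2 = cache['h2_lcao']
print(len(calls))  # 1: the file was only created once
```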

gpaw.test.findpeak(x, y)[source]

Find peak.

>>> import numpy as np
>>> from gpaw.test import findpeak
>>> x = np.linspace(1, 5, 10)
>>> y = 1 - (x - np.pi)**2
>>> x0, y0 = findpeak(x, y)
>>> f'x0={x0:.6f}, y0={y0:.6f}'
'x0=3.141593, y0=1.000000'
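
The technique here is to fit a parabola through the points around the discrete maximum and return its vertex. A standalone numpy sketch of that idea (an illustration of the technique, not GPAW's actual implementation; it assumes the maximum is not at either end of the array):

```python
import numpy as np

def parabolic_peak(x, y):
    # Fit a parabola through the discrete maximum and its two
    # neighbours, then return the vertex of that parabola.
    i = int(np.argmax(y))
    a, b, c = np.polyfit(x[i - 1:i + 2], y[i - 1:i + 2], 2)
    x0 = -b / (2 * a)
    y0 = a * x0**2 + b * x0 + c
    return x0, y0

x = np.linspace(1, 5, 10)
y = 1 - (x - np.pi)**2
x0, y0 = parabolic_peak(x, y)
print(f'x0={x0:.6f}, y0={y0:.6f}')  # x0=3.141593, y0=1.000000
```

Because this sample y is exactly quadratic, the three-point fit recovers the true peak to machine precision.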

Adding new tests

A test script should fulfill a number of requirements:

  • It should be quick. Preferably not more than a few milliseconds. If the test takes several minutes or more, consider making the test a big test.

  • It should not depend on other scripts.

  • It should be possible to run it on 1, 2, 4, and 8 cores.

A test can produce standard output and files - it doesn’t have to clean up. Just add the in_tmp_dir fixture as an argument:

def test_something(in_tmp_dir):
    # make a mess ...
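
What the fixture does can be pictured with a plain context manager that changes into a temporary directory and back again. A standalone sketch of the idea (the name scratch_dir is made up; the real fixture lives in gpaw/test/conftest.py):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def scratch_dir():
    # Run the enclosed code inside a fresh temporary directory,
    # then return to the old working directory and clean up.
    old = os.getcwd()
    with tempfile.TemporaryDirectory() as tmp:
        os.chdir(tmp)
        try:
            yield tmp
        finally:
            os.chdir(old)

with scratch_dir():
    with open('mess.txt', 'w') as fd:
        fd.write('scratch output')
    made_a_mess = os.path.exists('mess.txt')

print(made_a_mess)                 # True: the file was created ...
print(os.path.exists('mess.txt'))  # ... and vanished with the directory
```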

Here is a parametrized test that uses pytest.approx() for comparing floating point numbers:

import pytest

@pytest.mark.parametrize('x', [1.0, 1.5, 2.0])
def test_sqr(x):
    assert x**2 == pytest.approx(x * x)
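
By default, pytest.approx uses a relative tolerance of 1e-6 and an absolute tolerance of 1e-12. For scalars, the comparison boils down to something like this sketch (the real implementation also handles sequences, inf and nan):

```python
def approx_eq(actual, expected, rel=1e-6, abs_tol=1e-12):
    # Simplified scalar version of the pytest.approx comparison.
    return abs(actual - expected) <= max(rel * abs(expected), abs_tol)

print(approx_eq(0.1 + 0.2, 0.3))  # True: equal within tolerance
print(approx_eq(1.0, 1.000002))   # False: relative error of 2e-6
```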

Big tests

The directories in gpaw/test/big/ and doc/tutorialsexercises/ contain longer and more realistic tests that we run every weekend. These are submitted to the queuing system of a large computer. The scripts in the doc folder are used both for testing GPAW and for generating up-to-date figures and CSV-files for inclusion in the documentation web-pages.

Adding new tests

To add a new test, create a script somewhere in the file hierarchy ending with agts.py (e.g. submit.agts.py or just agts.py). AGTS is short for Advanced GPAW Test System (or Another Great Time Sink). This script defines how a number of scripts should be submitted to Niflheim and how they depend on each other. Consider an example where one script, calculate.py, calculates something and saves a .gpw file, and another script, analyse.py, analyses this output. The submit script should then look something like this:

def workflow():
    from myqueue.workflow import run
    with run(script='calculate.py', cores=8, tmax='25m'):
        run(script='analyse.py')  # 1 core and 10 minutes

As shown, this script has to contain the definition of the function workflow. Start the workflow with:

$ mq workflow -p agts.py .

(see https://myqueue.readthedocs.io/ for more details).

Scripts that generate figures or test files for inclusion in the GPAW web-pages should start with a special # web-page: comment like this:

# web-page: fig1.png, table1.csv
...
# code that creates fig1.png and table1.csv
...
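
For example, a minimal script declaring a single CSV file could look like this (the table contents are purely illustrative, not real results):

```python
# web-page: table1.csv
import csv

# The comment above names the file(s) this script creates for the
# web-page build; the script itself just has to produce them.
with open('table1.csv', 'w', newline='') as fd:
    writer = csv.writer(fd)
    writer.writerow(['system', 'energy [eV]'])
    writer.writerow(['H2', -6.65])  # illustrative value only
```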

Code coverage

We use the coverage tool to generate a coverage report every night. It is not 100% accurate, because it does not include coverage from running the test suite in parallel. The Big tests and the scripts that build these web-pages are not included either, although they would add some extra coverage.