Testing GPAW

Testing of GPAW is done by a nightly test suite consisting of many small and quick tests and by a weekly set of larger tests.

“Quick” test suite

Warning

It’s not really quick - it will take almost an hour to run all the tests!

Use pytest and pytest-xdist to run the tests:

$ cd /root/of/gpaw/git/clone/
$ pytest -n <number-of-processes>

Hint

If you don’t have a git clone from which you can run pytest, but instead want to test an installed version of GPAW, then use:

$ pytest --pyargs gpaw -n ...

The test suite consists of a large number of small and quick tests found in the gpaw/test/ directory. The tests run nightly in serial and in parallel.
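You can also run only a subset of the tests by pointing pytest at a file or by using its -k keyword filter (the file name and keyword below are only illustrations):

$ pytest gpaw/test/test_something.py -v
$ pytest -k lcao -n 4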

In order to run the tests in parallel, do this:

$ mpiexec -n <number-of-processes> pytest -v

Please report errors to the gpaw-users mailing list so that we can fix them (see Mail List).

Special fixtures and marks

Tests that should only run in serial can be marked like this:

import pytest

@pytest.mark.serial
def test_something():
    ...

There are two special GPAW-fixtures:

gpaw.test.conftest.in_tmp_dir(request, tmp_path_factory)

Run test function in a temporary directory.

gpaw.test.conftest.gpw_files(request, tmp_path_factory)

Reuse gpw-files.

Returns a dict mapping names to paths to gpw-files. If you want to reuse gpw-files from an earlier pytest session then set the $GPW_TEST_FILES environment variable and the files will be written to that folder.

Example:

from gpaw import GPAW

def test_something(gpw_files):
    calc = GPAW(gpw_files['h2_lcao_wfs'])
    ...
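If you want to cache the gpw-files between pytest sessions, something like this should work (the folder name is just an example):

$ export GPW_TEST_FILES=~/gpaw-gpw-files
$ pytest -n 4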

Possible systems are:

  • Bulk BCC-Li with 3x3x3 k-points: bcc_li_pw, bcc_li_fd, bcc_li_lcao.

  • O2 molecule: o2_pw.

  • H2 molecule: h2_pw, h2_fd, h2_lcao.

  • H2 molecule (not centered): h2_pw_0.

  • Spin-polarized H atom: h_pw.

  • Polyethylene chain. One unit, 3 k-points, no symmetry: c2h4_pw_nosym. Three units: c6h12_pw.

  • Bulk TiO2 with 4x4x4 k-points: ti2o4_pw and ti2o4_pw_nosym.

Files with wave functions are also available (add _wfs to the names).

Check the conftest.py to see which gpw-files are available. Use a _wfs postfix to get a gpw-file that contains the wave functions.

gpaw.test.findpeak(x: numpy.ndarray, y: numpy.ndarray) → Tuple[float, float]

Find peak.

>>> import numpy as np
>>> from gpaw.test import findpeak
>>> x = np.linspace(1, 5, 10)
>>> y = 1 - (x - np.pi)**2
>>> x0, y0 = findpeak(x, y)
>>> f'x0={x0:.6f}, y0={y0:.6f}'
'x0=3.141593, y0=1.000000'

Adding new tests

A test script should fulfill a number of requirements:

  • It should be quick. Preferably not more than a few milliseconds. If the test takes several minutes or more, consider making the test a big test.

  • It should not depend on other scripts.

  • It should be possible to run it on 1, 2, 4, and 8 cores.
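If a test cannot meaningfully run on all of these core counts, it can use the serial mark shown above or guard itself explicitly. Here is a minimal sketch using gpaw.mpi.world (the limit of two cores is just an example):

import pytest
from gpaw.mpi import world

def test_needs_few_cores():
    # Skip when run on more cores than this particular test supports:
    if world.size > 2:
        pytest.skip('runs on 1 or 2 cores only')
    ...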

A test can produce standard output and files - it doesn’t have to clean up. Just add the in_tmp_dir fixture as an argument:

def test_something(in_tmp_dir):
    # make a mess ...
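For example, a test that writes a file could look like this minimal sketch (the file name is just an example):

from pathlib import Path

def test_writes_a_file(in_tmp_dir):
    # The test runs inside a fresh temporary directory, so leftover
    # files do not pollute the source tree:
    Path('output.txt').write_text('hello')
    assert Path('output.txt').read_text() == 'hello'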

Here is a parametrized test that uses pytest.approx() for comparing floating point numbers:

import pytest

@pytest.mark.parametrize('x', [1.0, 1.5, 2.0])
def test_sqr(x):
    assert x**2 == pytest.approx(x * x)

Big tests

The directories in gpaw/test/big/ and doc/tutorialsexercises/ contain longer and more realistic tests that we run every weekend. These are submitted to the queueing system of a large computer. The scripts in the doc folder are used both for testing GPAW and for generating up-to-date figures and csv-files for inclusion in the documentation web-pages.

Adding new tests

To add a new test, create a script somewhere in the file hierarchy with a name ending in agts.py (e.g. submit.agts.py or just agts.py). AGTS is short for Advanced GPAW Test System (or Another Great Time Sink). This script defines how a number of scripts should be submitted to Niflheim and how they depend on each other. Consider an example where one script, calculate.py, calculates something and saves a .gpw file and another script, analyse.py, analyses this output. Then the submit script should look something like this:

def workflow():
    from myqueue.workflow import run
    with run(script='calculate.py', cores=8, tmax='25m'):
        run(script='analyse.py')  # 1 core and 10 minutes

As shown, this script has to contain the definition of the function workflow. Start the workflow with mq workflow -p agts.py . (see https://myqueue.readthedocs.io/ for more details).
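For example, from the directory containing the agts.py file:

$ mq workflow -p agts.py .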

Scripts that generate figures or test files for inclusion in the GPAW web-pages should start with a special # web-page: comment like this:

# web-page: fig1.png, table1.csv
...
# code that creates fig1.png and table1.csv
...
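A minimal sketch of such a script, assuming matplotlib is available (the figure and data are just examples):

# web-page: fig1.png
import matplotlib.pyplot as plt
import numpy as np

# Create and save the figure named in the web-page comment above:
x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x))
plt.savefig('fig1.png')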

Code coverage

We use the coverage tool to generate a coverage report every night. It is not 100% accurate, because it does not include coverage from running our test suite in parallel. Also not included are the Big tests and the building of this web-page, which would add some extra coverage.
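To get a rough coverage report locally, the pytest-cov plugin can be used (this is not the exact nightly setup, just an example):

$ pytest --cov=gpaw --cov-report=html -n 4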