Commit c2c2891
test: Consolidate and update pytest options in pyproject.toml (#1773)

* Remove .coveragerc and consolidate pytest options into pyproject.toml.
* Apply the 'configuring pytest' recommendations from Scikit-HEP
  (c.f. https://scikit-hep.org/developer/pytest#configuring-pytest):
  - '-ra' adds a report after the pytest run with a summary of all tests except
    those that passed. From 'pytest --help':
    > -r chars: show extra test summary info as specified by chars: (f)ailed,
    > (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output,
    > (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default
    > (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE').
  - '--showlocals' prints local variables in tracebacks.
  - '--strict-markers' raises an error if a test uses an unregistered marker.
  - '--strict-config' raises an error if there is a mistake in the pytest config.
  - 'log_cli_level = "info"' reports INFO and above log messages on a failure.
  - 'filterwarnings = ["error"]' promotes all warnings to errors, while still
    allowing selected warnings to be ignored with the -W warning-control syntax
    (c.f. https://docs.python.org/dev/using/cmdline.html#cmdoption-W).
* Remove tests/__init__.py, as there is no reason to make the tests directory importable.
* Remove '-r sx' from pytest calls in CI jobs, as pyproject.toml now applies '-ra'.
* Use 'with pytest.warns' to assert expected warnings in tests
  (c.f. https://docs.pytest.org/en/7.0.x/how-to/capture-warnings.html#warns).
* Override the error on filterwarnings for the 'minimum supported dependencies' GHA
  workflow, as it tests for the oldest releases that work with the latest API, not
  the oldest releases that are warning free.
* Remove unused nbQA options for black from pyproject.toml.
  - Amends PR #1598
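As a minimal sketch of the 'with pytest.warns' pattern adopted here (the warning
class, message, and emit_deprecation() helper are illustrative, not taken from
the pyhf test suite):

    import warnings

    import pytest


    def emit_deprecation():
        # Hypothetical stand-in for an API under test that warns.
        warnings.warn("old_api is deprecated", DeprecationWarning, stacklevel=2)


    def test_expected_warning():
        # With filterwarnings = ["error"] this warning would otherwise fail the
        # test; pytest.warns both tolerates it and asserts that it was emitted.
        with pytest.warns(DeprecationWarning, match="old_api is deprecated"):
            emit_deprecation()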
Parent: b091985

11 files changed: +92 −87 lines

.coveragerc (−16)

This file was deleted.

.github/workflows/ci.yml (+4 −4)

@@ -49,7 +49,7 @@ jobs:
      - name: Test with pytest
        run: |
-         pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
+         pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py

      - name: Launch a tmate session if tests fail
        if: failure() && github.event_name == 'workflow_dispatch'
@@ -64,7 +64,7 @@ jobs:
      - name: Test Contrib module with pytest
        run: |
-         pytest -r sx tests/contrib --mpl --mpl-baseline-path tests/contrib/baseline
+         pytest tests/contrib --mpl --mpl-baseline-path tests/contrib/baseline

      - name: Report contrib coverage with Codecov
        if: github.event_name != 'schedule' && matrix.python-version == '3.9' && matrix.os == 'ubuntu-latest'
@@ -75,7 +75,7 @@ jobs:
      - name: Test docstring examples with doctest
        if: matrix.python-version == '3.9'
-       run: pytest -r sx src/ README.rst
+       run: pytest src/ README.rst

      - name: Report doctest coverage with Codecov
        if: github.event_name != 'schedule' && matrix.python-version == '3.9' && matrix.os == 'ubuntu-latest'
@@ -87,4 +87,4 @@ jobs:
      - name: Run benchmarks
        if: github.event_name == 'schedule' && matrix.python-version == '3.9'
        run: |
-         pytest -r sx --benchmark-sort=mean tests/benchmarks/test_benchmark.py
+         pytest --benchmark-sort=mean tests/benchmarks/test_benchmark.py

.github/workflows/dependencies-head.yml (+5 −5)

@@ -31,7 +31,7 @@ jobs:
      - name: Test with pytest
        run: |
-         pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
+         pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py

  scipy:
@@ -61,7 +61,7 @@ jobs:
      - name: Test with pytest
        run: |
-         pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
+         pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py

  iminuit:
@@ -87,7 +87,7 @@ jobs:
        python -m pip list
      - name: Test with pytest
        run: |
-         pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
+         pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py

  uproot4:
@@ -112,7 +112,7 @@ jobs:
        python -m pip list
      - name: Test with pytest
        run: |
-         pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
+         pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py

  pytest:
@@ -137,4 +137,4 @@ jobs:
        python -m pip list
      - name: Test with pytest
        run: |
-         pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
+         pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py

.github/workflows/lower-bound-requirements.yml (+5 −1)

@@ -34,5 +34,9 @@ jobs:
      - name: Test with pytest
        run: |
+         # Override the ini option for filterwarnings with an empty list to disable error on filterwarnings
+         # as testing for oldest releases that work with latest API, not the oldest releases that are warning
+         # free. Though still show warnings by setting warning control to 'default'.
+         export PYTHONWARNINGS='default'
          # Run on tests/ to skip doctests of src given examples are for latest APIs
-         pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py tests/
+         pytest --override-ini filterwarnings= --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py tests/
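For context, PYTHONWARNINGS='default' is equivalent to running python -W default:
each distinct warning is printed once instead of being raised. A minimal sketch
using only the standard library (not part of this diff):

    import warnings

    # Under -W default (or PYTHONWARNINGS='default') this prints a
    # DeprecationWarning once per call site; under -W error it would raise,
    # which is what filterwarnings = ["error"] does inside the pytest run.
    warnings.warn("oldest supported release in use", DeprecationWarning)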

.github/workflows/notebooks.yml (+1 −1)

@@ -27,4 +27,4 @@ jobs:
        python -m pip list
      - name: Test example notebooks
        run: |
-         pytest -r sx tests/test_notebooks.py
+         pytest tests/test_notebooks.py

.github/workflows/release_tests.yml (+1 −1)

@@ -40,7 +40,7 @@ jobs:
      - name: Canary test public API
        run: |
-         pytest -r sx tests/test_public_api.py
+         pytest tests/test_public_api.py

      - name: Verify requirements in codemeta.json
        run: |

pyproject.toml (+20 −10)

@@ -42,18 +42,19 @@ ignore = [
 minversion = "6.0"
 xfail_strict = true
 addopts = [
-    "--ignore=setup.py",
-    "--ignore=validation/",
-    "--ignore=binder/",
-    "--ignore=docs/",
+    "-ra",
     "--cov=pyhf",
-    "--cov-config=.coveragerc",
+    "--cov-branch",
+    "--showlocals",
+    "--strict-markers",
+    "--strict-config",
     "--cov-report=term-missing",
     "--cov-report=xml",
     "--cov-report=html",
     "--doctest-modules",
-    "--doctest-glob='*.rst'"
+    "--doctest-glob='*.rst'",
 ]
+log_cli_level = "info"
 testpaths = "tests"
 markers = [
     "fail_jax",
@@ -75,12 +76,21 @@ markers = [
     "skip_pytorch64",
     "skip_tensorflow",
 ]
-
-[tool.nbqa.config]
-black = "pyproject.toml"
+filterwarnings = [
+    "error",
+    'ignore:the imp module is deprecated:DeprecationWarning',  # tensorflow
+    'ignore:distutils Version classes are deprecated:DeprecationWarning',  # tensorflow-probability
+    'ignore:the `interpolation=` argument to percentile was renamed to `method=`, which has additional options:DeprecationWarning',  # Issue #1772
+    "ignore:The interpolation= argument to 'quantile' is deprecated. Use 'method=' instead:DeprecationWarning",  # Issue #1772
+    'ignore: Exception ignored in:pytest.PytestUnraisableExceptionWarning',  # FIXME: Exception ignored in: <_io.FileIO [closed]>
+    'ignore:invalid value encountered in true_divide:RuntimeWarning',  # FIXME
+    'ignore:invalid value encountered in add:RuntimeWarning',  # FIXME
+    "ignore:In future, it will be an error for 'np.bool_' scalars to be interpreted as an index:DeprecationWarning",  # FIXME: tests/test_tensor.py::test_pdf_eval[pytorch]
+    'ignore:Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with:UserWarning',  # FIXME: tests/test_optim.py::test_minimize[no_grad-scipy-pytorch-no_stitch]
+    'ignore:divide by zero encountered in true_divide:RuntimeWarning',  # FIXME: pytest tests/test_tensor.py::test_pdf_calculations[numpy]
+]

 [tool.nbqa.mutate]
-black = 1
 pyupgrade = 1

 [tool.nbqa.addopts]
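A small illustration of what filterwarnings = ["error"] means in practice
(hypothetical test, not from the pyhf suite). warnings.simplefilter("error")
mirrors the ini option locally so the sketch is self-contained; under the
project config the promotion happens globally, and any warning not matched by
an ignore entry fails the test:

    import warnings

    import pytest


    def test_warning_is_promoted_to_error():
        with warnings.catch_warnings():
            warnings.simplefilter("error")  # local stand-in for the ini option
            with pytest.raises(UserWarning, match="unexpected"):
                warnings.warn("unexpected behavior", UserWarning)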

tests/__init__.py (deleted)

Whitespace-only changes: the file was empty, and it was removed so the tests directory is no longer importable.

tests/test_export.py (+10 −12)

@@ -352,17 +352,14 @@ def test_export_sample_zerodata(mocker, spec):
     sampledata = [0.0] * len(samplespec['data'])

     mocker.patch('pyhf.writexml._ROOT_DATA_FILE')
-    # make sure no RuntimeWarning, https://stackoverflow.com/a/45671804
-    with pytest.warns(None) as record:
-        for modifierspec in samplespec['modifiers']:
-            pyhf.writexml.build_modifier(
-                {'measurements': [{'config': {'parameters': []}}]},
-                modifierspec,
-                channelname,
-                samplename,
-                sampledata,
-            )
-    assert not record.list
+    for modifierspec in samplespec['modifiers']:
+        pyhf.writexml.build_modifier(
+            {'measurements': [{'config': {'parameters': []}}]},
+            modifierspec,
+            channelname,
+            samplename,
+            sampledata,
+        )


 @pytest.mark.parametrize(
@@ -424,7 +421,8 @@ def test_integer_data(datadir, mocker):
     """
     Test that a spec with only integer data will be written correctly
     """
-    spec = json.load(open(datadir.join("workspace_integer_data.json")))
+    with open(datadir.join("workspace_integer_data.json")) as spec_file:
+        spec = json.load(spec_file)
     channel_spec = spec["channels"][0]
     mocker.patch("pyhf.writexml._ROOT_DATA_FILE")
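The switch to a context manager above is itself a warnings fix: a file object
created inside json.load(open(...)) is only closed at garbage collection, which
CPython can report as an "unclosed file" ResourceWarning that
filterwarnings = ["error"] would promote to a test failure. A minimal sketch
with a hypothetical path:

    import json

    # Anti-pattern: json.load(open("workspace.json")) leaves closing the file
    # to the garbage collector, risking ResourceWarning: unclosed file.
    # Fix: a context manager closes the file deterministically.
    with open("workspace.json") as spec_file:  # hypothetical path
        spec = json.load(spec_file)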

tests/test_optim.py (+8 −2)

@@ -4,6 +4,7 @@
 from pyhf.tensor.common import _TensorViewer
 import pytest
 from scipy.optimize import minimize, OptimizeResult
+from scipy.optimize import OptimizeWarning
 import iminuit
 import itertools
 import numpy as np
@@ -563,7 +564,8 @@ def test_solver_options_scipy(mocker):


 # Note: in this case, scipy won't usually raise errors for arbitrary options
-# so this test exists as a sanity reminder that scipy is not perfect
+# so this test exists as a sanity reminder that scipy is not perfect.
+# It does raise a scipy.optimize.OptimizeWarning though.
 def test_bad_solver_options_scipy(mocker):
     optimizer = pyhf.optimize.scipy_optimizer(
         solver_options={'arbitrary_option': 'foobar'}
@@ -573,7 +575,11 @@ def test_bad_solver_options_scipy(mocker):

     model = pyhf.simplemodels.uncorrelated_background([50.0], [100.0], [10.0])
     data = pyhf.tensorlib.astensor([125.0] + model.config.auxdata)
-    assert pyhf.infer.mle.fit(data, model).tolist()
+
+    with pytest.warns(
+        OptimizeWarning, match="Unknown solver options: arbitrary_option"
+    ):
+        assert pyhf.infer.mle.fit(data, model).tolist()


 def test_minuit_param_names(mocker):
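The OptimizeWarning asserted above is emitted by SciPy itself, not pyhf; the
solvers route unrecognized options through a check that warns rather than
errors. A standalone sketch of the same behavior (the quadratic objective is
arbitrary):

    import pytest
    from scipy.optimize import OptimizeWarning, minimize


    def test_unknown_solver_option_warns():
        with pytest.warns(OptimizeWarning, match="Unknown solver options"):
            minimize(
                lambda x: float((x[0] - 1.0) ** 2),
                x0=[0.0],
                options={"arbitrary_option": "foobar"},
            )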

tests/test_tensor.py (+38 −35)

@@ -274,37 +274,39 @@ def test_shape(backend):
 @pytest.mark.fail_pytorch64
 def test_pdf_calculations(backend):
     tb = pyhf.tensorlib
-    assert tb.tolist(tb.normal_cdf(tb.astensor([0.8]))) == pytest.approx(
-        [0.7881446014166034], 1e-07
-    )
-    assert tb.tolist(
-        tb.normal_logpdf(
-            tb.astensor([0, 0, 1, 1, 0, 0, 1, 1]),
-            tb.astensor([0, 1, 0, 1, 0, 1, 0, 1]),
-            tb.astensor([0, 0, 0, 0, 1, 1, 1, 1]),
-        )
-    ) == pytest.approx(
-        [
-            np.nan,
-            np.nan,
-            np.nan,
-            np.nan,
-            -0.91893853,
-            -1.41893853,
-            -1.41893853,
-            -0.91893853,
-        ],
-        nan_ok=True,
-    )
-    # Allow poisson(lambda=0) under limit Poisson(n = 0 | lambda -> 0) = 1
-    assert tb.tolist(
-        tb.poisson(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
-    ) == pytest.approx([1.0, 0.3678794503211975, 0.0, 0.3678794503211975])
-    assert tb.tolist(
-        tb.poisson_logpdf(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
-    ) == pytest.approx(
-        np.log([1.0, 0.3678794503211975, 0.0, 0.3678794503211975]).tolist()
-    )
+    # FIXME
+    with pytest.warns(RuntimeWarning, match="divide by zero encountered in log"):
+        assert tb.tolist(tb.normal_cdf(tb.astensor([0.8]))) == pytest.approx(
+            [0.7881446014166034], 1e-07
+        )
+        assert tb.tolist(
+            tb.normal_logpdf(
+                tb.astensor([0, 0, 1, 1, 0, 0, 1, 1]),
+                tb.astensor([0, 1, 0, 1, 0, 1, 0, 1]),
+                tb.astensor([0, 0, 0, 0, 1, 1, 1, 1]),
+            )
+        ) == pytest.approx(
+            [
+                np.nan,
+                np.nan,
+                np.nan,
+                np.nan,
+                -0.91893853,
+                -1.41893853,
+                -1.41893853,
+                -0.91893853,
+            ],
+            nan_ok=True,
+        )
+        # Allow poisson(lambda=0) under limit Poisson(n = 0 | lambda -> 0) = 1
+        assert tb.tolist(
+            tb.poisson(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
+        ) == pytest.approx([1.0, 0.3678794503211975, 0.0, 0.3678794503211975])
+        assert tb.tolist(
+            tb.poisson_logpdf(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
+        ) == pytest.approx(
+            np.log([1.0, 0.3678794503211975, 0.0, 0.3678794503211975]).tolist()
+        )

     # Ensure continuous approximation is valid
     assert tb.tolist(
@@ -343,11 +345,12 @@ def test_pdf_calculations_pytorch(backend):
     assert tb.tolist(
         tb.poisson(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
     ) == pytest.approx([1.0, 0.3678794503211975, 0.0, 0.3678794503211975])
-    assert tb.tolist(
-        tb.poisson_logpdf(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
-    ) == pytest.approx(
-        np.log([1.0, 0.3678794503211975, 0.0, 0.3678794503211975]).tolist()
-    )
+    with pytest.warns(RuntimeWarning, match="divide by zero encountered in log"):
+        assert tb.tolist(
+            tb.poisson_logpdf(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
+        ) == pytest.approx(
+            np.log([1.0, 0.3678794503211975, 0.0, 0.3678794503211975]).tolist()
+        )

     # Ensure continuous approximation is valid
     assert tb.tolist(
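The pytest.warns(RuntimeWarning, ...) wrappers above opt in to a NumPy warning
that the global error filter would otherwise turn into a failure. A standalone
sketch of the same pattern, independent of any pyhf backend:

    import numpy as np
    import pytest


    def test_log_of_zero_warns():
        # np.log(0.0) returns -inf and emits
        # "RuntimeWarning: divide by zero encountered in log".
        with pytest.warns(RuntimeWarning, match="divide by zero encountered in log"):
            assert np.log(0.0) == -np.inf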
