Scenario Failure: max_ndvi_composite #155

Open
github-actions bot opened this issue Apr 19, 2025 · 0 comments

Benchmark Failure: max_ndvi_composite

Scenario ID: max_ndvi_composite
Backend System: openeofed.dataspace.copernicus.eu
Failure Count: 1
Timestamp: 2025-04-19 02:14:52

Contact Information

Point of Contact:

| Name | Organization | Contact |
| --- | --- | --- |
| Jeroen Dries | VITO | Contact via VITO (VITO Website, GitHub) |

Process Graph

```json
{
  "maxndvi1": {
    "process_id": "max_ndvi_composite",
    "namespace": "https://raw.githubusercontent.com/ESA-APEx/apex_algorithms/refs/heads/main/algorithm_catalog/vito/max_ndvi_composite/openeo_udp/max_ndvi_composite.json",
    "arguments": {
      "spatial_extent": {
        "west": 5.07,
        "east": 5.09,
        "south": 51.21,
        "north": 51.23
      },
      "temporal_extent": [
        "2023-08-01",
        "2023-09-30"
      ],
      "bands": [
        "B04"
      ]
    },
    "result": true
  }
}
```
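For reference, the failing request is a single-node process graph that invokes the `max_ndvi_composite` UDP via its `namespace` URL. A minimal sketch (plain Python, no openEO connection needed) of how such a graph can be inspected before submission — the `find_result_node` helper is illustrative, not part of the openeo client:

```python
# The process graph from the failure report above, as a Python dict.
process_graph = {
    "maxndvi1": {
        "process_id": "max_ndvi_composite",
        "namespace": "https://raw.githubusercontent.com/ESA-APEx/apex_algorithms/refs/heads/main/algorithm_catalog/vito/max_ndvi_composite/openeo_udp/max_ndvi_composite.json",
        "arguments": {
            "spatial_extent": {"west": 5.07, "east": 5.09, "south": 51.21, "north": 51.23},
            "temporal_extent": ["2023-08-01", "2023-09-30"],
            "bands": ["B04"],
        },
        "result": True,
    }
}


def find_result_node(pg: dict) -> str:
    """Return the id of the node marked as the final result of the graph."""
    candidates = [node_id for node_id, node in pg.items() if node.get("result")]
    # An openEO process graph must have exactly one result node.
    assert len(candidates) == 1
    return candidates[0]


print(find_result_node(process_graph))  # maxndvi1
```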

Error Logs

scenario = BenchmarkScenario(id='max_ndvi_composite', description='max_ndvi example', backend='openeofed.dataspace.copernicus.eu'...14419841218!tests_test_benchmarks.py__test_run_benchmark_max_ndvi_composite_!actual/openEO.tif'}, reference_options={})
connection_factory = <function connection_factory.<locals>.get_connection at 0x7facf168b920>
tmp_path = PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0')
track_metric = <function track_metric.<locals>.append at 0x7facf168bba0>
upload_assets_on_fail = <function upload_assets_on_fail.<locals>.collect at 0x7facf168b880>
request = <FixtureRequest for <Function test_run_benchmark[max_ndvi_composite]>>

    @pytest.mark.parametrize(
        "scenario",
        [
            # Use scenario id as parameterization id to give nicer test names.
            pytest.param(uc, id=uc.id)
            for uc in get_benchmark_scenarios()
        ],
    )
    def test_run_benchmark(
        scenario: BenchmarkScenario,
        connection_factory,
        tmp_path: Path,
        track_metric,
        upload_assets_on_fail,
        request
    ):
        track_metric("scenario_id", scenario.id)
        # Check if a backend override has been provided via cli options.
        override_backend = request.config.getoption("--override-backend")
        backend = scenario.backend
        if override_backend:
            _log.info(f"Overriding backend URL with {override_backend!r}")
            backend = override_backend
    
        connection: openeo.Connection = connection_factory(url=backend)
    
        # TODO #14 scenario option to use synchronous instead of batch job mode?
        job = connection.create_job(
            process_graph=scenario.process_graph,
            title=f"APEx benchmark {scenario.id}",
            additional=scenario.job_options,
        )
        track_metric("job_id", job.job_id)
    
        # TODO: monitor timing and progress
        # TODO: abort excessively long batch jobs? https://github.com/Open-EO/openeo-python-client/issues/589
        job.start_and_wait()
    
        collect_metrics_from_job_metadata(job, track_metric=track_metric)
    
        results = job.get_results()
        collect_metrics_from_results_metadata(results, track_metric=track_metric)
    
        # Download actual results
        actual_dir = tmp_path / "actual"
        paths = results.download_files(target=actual_dir, include_stac_metadata=True)
        # Upload assets on failure
        upload_assets_on_fail(*paths)
    
        # Compare actual results with reference data
        reference_dir = download_reference_data(
            scenario=scenario, reference_dir=tmp_path / "reference"
        )
    
>       assert_job_results_allclose(
            actual=actual_dir,
            expected=reference_dir,
            tmp_path=tmp_path,
            rtol=scenario.reference_options.get("rtol", 1e-6),
            atol=scenario.reference_options.get("atol", 1e-6),
        )

tests/test_benchmarks.py:74: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

actual = PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0/actual')
expected = PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0/reference')

    def assert_job_results_allclose(
        actual: Union[BatchJob, JobResults, str, Path],
        expected: Union[BatchJob, JobResults, str, Path],
        *,
        rtol: float = _DEFAULT_RTOL,
        atol: float = _DEFAULT_ATOL,
        tmp_path: Optional[Path] = None,
    ):
        """
        Assert that two job results sets are equal (with tolerance).
    
        :param actual: actual job results, provided as :py:class:`~openeo.rest.job.BatchJob` object,
            :py:meth:`~openeo.rest.job.JobResults` object or path to directory with downloaded assets.
        :param expected: expected job results, provided as :py:class:`~openeo.rest.job.BatchJob` object,
            :py:meth:`~openeo.rest.job.JobResults` object or path to directory with downloaded assets.
        :param rtol: relative tolerance
        :param atol: absolute tolerance
        :param tmp_path: root temp path to download results if needed.
            It's recommended to pass pytest's `tmp_path` fixture here
        :raises AssertionError: if not equal within the given tolerance
    
        .. versionadded:: 0.31.0
    
        .. warning::
            This function is experimental and subject to change.
        """
        issues = _compare_job_results(actual, expected, rtol=rtol, atol=atol, tmp_path=tmp_path)
        if issues:
>           raise AssertionError("\n".join(issues))
E           AssertionError: Issues for metadata file 'job-results.json':
E           Differing 'derived_from' links (11 common, 2 only in actual, 2 only in expected):
E             only in actual: {'/eodata/Sentinel-2/MSI/L2A_N0500/2023/08/20/S2A_MSIL2A_20230820T103631_N0510_R008_T31UFS_20241023T045441.SAFE', '/eodata/Sentinel-2/MSI/L2A_N0500/2023/08/18/S2B_MSIL2A_20230818T104629_N0510_R051_T31UFS_20241024T233252.SAFE'}
E             only in expected: {'/eodata/Sentinel-2/MSI/L2A/2023/08/20/S2A_MSIL2A_20230820T103631_N0509_R008_T31UFS_20230820T170259.SAFE', '/eodata/Sentinel-2/MSI/L2A/2023/08/18/S2B_MSIL2A_20230818T104629_N0509_R051_T31UFS_20230818T140646.SAFE'}.
E           Issues for file 'openEO.tif':
E           Left and right DataArray objects are not close
E           Differing values:
E           L
E               array([[[ 885, 1118, ...,  388,  295],
E                       [2238, 2752, ...,  457,  229],
E                       ...,
E                       [ 187,  243, ...,  235,  283],
E                       [ 210,  212, ...,  221,  301]]], shape=(1, 227, 147), dtype=int16)
E           R
E               array([[[ 885, 1118, ...,  388,  295],
E                       [2238, 2752, ...,  457,  229],
E                       ...,
E                       [ 187,  243, ...,  235,  283],
E                       [ 210,  212, ...,  221,  301]]], shape=(1, 227, 147), dtype=int16)

/opt/hostedtoolcache/Python/3.12.10/x64/lib/python3.12/site-packages/openeo/testing/results.py:386: AssertionError
----------------------------- Captured stdout call -----------------------------
0:00:00 Job 'cdse-j-2504190210524cc6949356a4f203502e': send 'start'
0:00:13 Job 'cdse-j-2504190210524cc6949356a4f203502e': queued (progress 0%)
0:00:19 Job 'cdse-j-2504190210524cc6949356a4f203502e': queued (progress 0%)
0:00:25 Job 'cdse-j-2504190210524cc6949356a4f203502e': queued (progress 0%)
0:00:34 Job 'cdse-j-2504190210524cc6949356a4f203502e': queued (progress 0%)
0:00:44 Job 'cdse-j-2504190210524cc6949356a4f203502e': queued (progress 0%)
0:00:56 Job 'cdse-j-2504190210524cc6949356a4f203502e': queued (progress 0%)
0:01:12 Job 'cdse-j-2504190210524cc6949356a4f203502e': queued (progress 0%)
0:01:31 Job 'cdse-j-2504190210524cc6949356a4f203502e': running (progress N/A)
0:01:55 Job 'cdse-j-2504190210524cc6949356a4f203502e': running (progress N/A)
0:02:26 Job 'cdse-j-2504190210524cc6949356a4f203502e': running (progress N/A)
0:03:04 Job 'cdse-j-2504190210524cc6949356a4f203502e': running (progress N/A)
0:03:51 Job 'cdse-j-2504190210524cc6949356a4f203502e': finished (progress 100%)
------------------------------ Captured log call -------------------------------
INFO     conftest:conftest.py:125 Connecting to 'openeofed.dataspace.copernicus.eu'
INFO     openeo.config:config.py:193 Loaded openEO client config from sources: []
INFO     conftest:conftest.py:138 Checking for auth_env_var='OPENEO_AUTH_CLIENT_CREDENTIALS_CDSEFED' to drive auth against url='openeofed.dataspace.copernicus.eu'.
INFO     conftest:conftest.py:142 Extracted provider_id='CDSE' client_id='openeo-apex-benchmarks-service-account' from auth_env_var='OPENEO_AUTH_CLIENT_CREDENTIALS_CDSEFED'
INFO     openeo.rest.connection:connection.py:232 Found OIDC providers: ['CDSE']
INFO     openeo.rest.auth.oidc:oidc.py:404 Doing 'client_credentials' token request 'https://identity.dataspace.copernicus.eu/auth/realms/CDSE/protocol/openid-connect/token' with post data fields ['grant_type', 'client_id', 'client_secret', 'scope'] (client_id 'openeo-apex-benchmarks-service-account')
INFO     openeo.rest.connection:connection.py:329 Obtained tokens: ['access_token', 'id_token']
INFO     openeo.rest.job:job.py:404 Downloading Job result asset 'openEO.tif' from https://openeo.dataspace.copernicus.eu/openeo/1.1/jobs/j-2504190210524cc6949356a4f203502e/results/assets/NmE3N2ZjZDEtOWMwOC00NmU5LWI4NzUtNTRmYjk5OWFiMjAw/367a193b930a45ba284ea2ef7deccb2f/openEO.tif?expires=1745633685 to /home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0/actual/openEO.tif
INFO     apex_algorithm_qa_tools.scenarios:util.py:341 Downloading reference data for scenario.id='max_ndvi_composite' to reference_dir=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0/reference'): start 2025-04-19 02:14:46.479403
INFO     apex_algorithm_qa_tools.scenarios:util.py:341 Downloading source='https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-14419841218!tests_test_benchmarks.py__test_run_benchmark_max_ndvi_composite_!actual/job-results.json' to path=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0/reference/job-results.json'): start 2025-04-19 02:14:46.479708
INFO     apex_algorithm_qa_tools.scenarios:util.py:347 Downloading source='https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-14419841218!tests_test_benchmarks.py__test_run_benchmark_max_ndvi_composite_!actual/job-results.json' to path=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0/reference/job-results.json'): end 2025-04-19 02:14:47.161222, elapsed 0:00:00.681514
INFO     apex_algorithm_qa_tools.scenarios:util.py:341 Downloading source='https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-14419841218!tests_test_benchmarks.py__test_run_benchmark_max_ndvi_composite_!actual/openEO.tif' to path=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0/reference/openEO.tif'): start 2025-04-19 02:14:47.161535
INFO     apex_algorithm_qa_tools.scenarios:util.py:347 Downloading source='https://s3.waw3-1.cloudferro.com/apex-benchmarks/gh-14419841218!tests_test_benchmarks.py__test_run_benchmark_max_ndvi_composite_!actual/openEO.tif' to path=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0/reference/openEO.tif'): end 2025-04-19 02:14:47.775113, elapsed 0:00:00.613578
INFO     apex_algorithm_qa_tools.scenarios:util.py:347 Downloading reference data for scenario.id='max_ndvi_composite' to reference_dir=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0/reference'): end 2025-04-19 02:14:47.775339, elapsed 0:00:01.295936
INFO     openeo.testing.results:results.py:298 Comparing job results: PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0/actual') vs PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_max_ndvi_co0/reference')
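The metadata mismatch reported above reduces to a set difference over the `derived_from` links in `job-results.json`: the actual job consumed reprocessed `L2A_N0500` products (baseline N0510), while the reference was built against the original `L2A` products (baseline N0509) for the same acquisitions. A rough sketch of that comparison, using two of the mismatched product paths from the report (the helper below is illustrative, not the actual openeo implementation):

```python
def diff_derived_from(actual_links: set, expected_links: set) -> dict:
    """Report derived_from links that appear on only one side of the
    comparison, as in the 'Differing derived_from links' message above."""
    return {
        "common": actual_links & expected_links,
        "only_in_actual": actual_links - expected_links,
        "only_in_expected": expected_links - actual_links,
    }


# Same S2A acquisition (2023-08-20, tile T31UFS), different processing
# baseline: N0510 (reprocessed, actual) vs N0509 (reference).
actual = {
    "/eodata/Sentinel-2/MSI/L2A_N0500/2023/08/20/S2A_MSIL2A_20230820T103631_N0510_R008_T31UFS_20241023T045441.SAFE",
}
expected = {
    "/eodata/Sentinel-2/MSI/L2A/2023/08/20/S2A_MSIL2A_20230820T103631_N0509_R008_T31UFS_20230820T170259.SAFE",
}

diff = diff_derived_from(actual, expected)
print(len(diff["only_in_actual"]), len(diff["only_in_expected"]))  # 1 1
```

This suggests the failure is a reference-data staleness issue (the backend now serves reprocessed source products) rather than an algorithm regression, although the pixel-level `openEO.tif` differences would still need to be checked against the tolerances (`rtol`/`atol` of 1e-6).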