Conversation

@peterfpeterson peterfpeterson commented Apr 3, 2025

The goal of this work is to make progress towards being able to perform a clean build of FrameworkTests and download only the data that is needed. It is not a goal of this work to run all tests registered in ctest by building FrameworkTests. All of the missing files were already in Data/DocTest or Data/SystemTest.

This will help developers build and run tests with fewer build units.

There is no associated issue.

Further detail of work

Tests in ctest that do not fully pass (55 individual tests):

  • python.SimpleAPI
  • python.api
  • python.plots
  • python.algorithms
  • python.WorkflowAlgorithms
  • PythonInterfaceCppTest
  • PSISINQTest
  • MantidQtWidgetsCommonTest
  • mantidqt_qt5
  • python.MuonQt5
  • workbench
  • python.scripts
  • python.Diffraction

To test:

In your build tree (adjust the -j argument appropriately):

cmake .
ninja clean # clean out everything
ninja FrameworkTests
ctest -j 4 -R KernelTest
ctest -j 4 -R APITest
ctest -j 4 -R CrystalTest
ctest -j 4 -R CurveFittingTest
ctest -j 4 -R AlgorithmsTest
ctest -j 4 -R DataHandlingTest

A blanket ctest -j 4 will fail: this work focuses on tests in Framework, and several tests registered with ctest are for the GUI.
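Since ctest's -R option takes a regular expression rather than a literal name, the six suite-by-suite runs above can be collapsed into a single invocation. This is a sketch, not verified against an actual build tree; the grep line only illustrates which suite names the pattern selects (grep -E uses the same extended-regex syntax as ctest -R):

```shell
# Single combined run of the six Framework suites listed above:
#   ctest -j 4 -R '(Kernel|API|Crystal|CurveFitting|Algorithms|DataHandling)Test'
# Illustrate the selection: only names matching the pattern are printed,
# so GUI-side entries such as workbench or python.scripts are excluded.
pattern='(Kernel|API|Crystal|CurveFitting|Algorithms|DataHandling)Test'
printf '%s\n' KernelTest APITest CrystalTest workbench python.scripts \
  | grep -E "$pattern"
```

Note that -R matches substrings by default; anchor the pattern (^...$) if other registered tests happen to contain one of these suite names.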


Reviewer

Please comment on the points listed below (full description).
Your comments will be used as part of the gatekeeper process, so please comment clearly on what you have checked during your review. If changes are made to the PR during the review process then your final comment will be the most important for gatekeepers. In this comment you should make it clear why any earlier review is still valid, or confirm that all requested changes have been addressed.

Code Review

  • Is the code of an acceptable quality?
  • Does the code conform to the coding standards?
  • Are the unit tests small and test the class in isolation?
  • If there is GUI work does it follow the GUI standards?
  • If there are changes in the release notes then do they describe the changes appropriately?
  • Do the release notes conform to the release notes guide?

Functional Tests

  • Do the changes function as described? Add comments below that describe the tests performed.
  • Do the changes handle unexpected situations, e.g. bad input?
  • Has the relevant (user and developer) documentation been added/updated?

Does everything look good? Mark the review as Approve. A member of @mantidproject/gatekeepers will take care of it.

Gatekeeper

If you need to request changes to a PR then please add a comment and set the review status to "Request changes". This will stop the PR from showing up in the list for other gatekeepers.


sf1919 commented Apr 4, 2025

These appear to be genuine test failures.


sf1919 commented May 1, 2025

As this is failing tests and further commits are expected, we won't re-run any tests on the new Linux nodes.
