Restore reverted change: Abins refactor replacing SData classes with Euphonic features #39669
Conversation
To begin migration from SData to euphonic data types, allow existing SData to produce a Spectrum1DCollection. Then we can start modifying code that _uses_ SData to use this datatype instead. So far we modify the _atom_number_s method which creates output workspaces from the Abins calculation. Spectrum2DCollection is not implemented yet, so initially we leave the old implementation in place for 2D cases.
- Although Spectrum1DCollection provides a .select() method, we ended up using a chain of filters here because it can't handle numerical comparisons (a rough sketch follows below). Perhaps this would be a bit cleaner if select() accepted our filter functions.
- The Spectrum1DCollection group_by() method throws an error if the key is in the top level of metadata (i.e. because it applies to all rows). It would be good to clean that up upstream too.
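A minimal sketch of that filter-chain idea, assuming plain Python predicates applied to per-line metadata; the names (`filter_rows`, `line_metadata`, the metadata keys) are hypothetical rather than the actual Abins or Euphonic API:

```python
import numpy as np

def filter_rows(line_metadata, y_rows, *predicates):
    """Keep rows whose per-line metadata satisfies every predicate."""
    keep = [i for i, md in enumerate(line_metadata)
            if all(pred(md) for pred in predicates)]
    return [line_metadata[i] for i in keep], y_rows[keep]

line_metadata = [{"atom_index": 0, "quantum_order": 1},
                 {"atom_index": 0, "quantum_order": 2},
                 {"atom_index": 1, "quantum_order": 1}]
y_rows = np.arange(9.0).reshape(3, 3)

# A numerical comparison that a key/value select() cannot express directly:
selected_metadata, selected_rows = filter_rows(
    line_metadata, y_rows, lambda md: md["quantum_order"] <= 1)
```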
This is not very thoroughly tested here, and should be moved upstream once everything looks happy on the Abins side.
- Wrap Spectrum1DCollection to deal with the group_by() issue where metadata is the same for all items (and so not in line_data)
- Implement Spectrum2DCollection
- Have SData.get_spectrum_collection() dispatch to 1D or 2D as appropriate
- Tweak workspace writers to use either SpectrumNCollection class (1D or 2D)
As the relevant information is available in spectra, this allows us to drop a redundant parameter and reduce the number of objects passed around
- Now the Algorithm classes never deal with SData at all
- SData still used internally and cached
Move the threshold-check logic to module-level functions, and leave the existing method as a wrapper; then we should be able to delete the class later. Also remove some unnecessary parameters/logic that made the return value optional: we can just ignore the return value if we don't care about it.
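A minimal sketch of that wrapper arrangement; `check_thresholds`, `SDataLike` and the attribute names are placeholders, not the actual Abins code or signature:

```python
import numpy as np

def check_thresholds(intensities, threshold=1e-8):
    """Module-level check: boolean mask of values above the threshold."""
    return np.asarray(intensities) > threshold


class SDataLike:  # stand-in for the existing class
    def __init__(self, intensities):
        self._intensities = np.asarray(intensities)

    def check_thresholds(self, threshold=1e-8):
        # The existing method becomes a thin wrapper, so callers keep working
        # until the class itself can be removed.
        return check_thresholds(self._intensities, threshold)
```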
Bin checks on Spectrum2DCollection need adapting to interpret 3-D z_data array correctly. This implementation is a bit inefficient as it needlessly instantiates a Spectrum2D; leave more verbose/efficient implementation for upstream.
This functionality is also available in Euphonic, and it would make sense to remove this entirely from Abins and use it there. However:
- this will need a bit of tinkering to work with Spectrum2DCollection, and doing it upstream would avoid a lot of duplication
- it would be better to apply kinematic constraints (and instrument broadening) after spectra have been filtered and summed into output groups
So let's defer that for now.
- Push SData a bit deeper into the chain of functions
- Stop using SDataByAngle; instead use angle metadata in SpectrumCollection objects
- Patch SpectrumCollection so that group_by() and select() work as expected (i.e. run successfully) when some keys are at the top level of metadata (i.e. common to all lines); a sketch of this follows below
- Apply broadening to SpectrumCollection objects. For now we keep using the Abins implementation of broadening so that tests pass; this should be replaced with the Euphonic implementation later.
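A hedged sketch of that patch on a bare metadata dictionary: top-level keys (common to all lines) are merged into each `line_data` entry before grouping or selecting. `flatten_metadata` is a hypothetical helper, not the actual wrapper used here:

```python
def flatten_metadata(metadata):
    """Merge top-level keys into every entry of metadata['line_data']."""
    common = {k: v for k, v in metadata.items() if k != "line_data"}
    return [dict(common, **line) for line in metadata.get("line_data", [{}])]

metadata = {"angle": 45.0, "line_data": [{"atom_index": 0}, {"atom_index": 1}]}
print(flatten_metadata(metadata))
# [{'angle': 45.0, 'atom_index': 0}, {'angle': 45.0, 'atom_index': 1}]
```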
This is a significant step towards removing SData entirely. For compatibility at this point, Spectrum objects mostly store the bin centres rather than bin edges. When SData is gone it would be nicer to move to storing bin edges. (Centres can be computed from edges, but edges can only be guessed from centres...)
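To illustrate the edges/centres asymmetry (plain numpy, not Abins code): centres follow exactly from edges, while recovering edges from centres needs an assumption about the outer bins.

```python
import numpy as np

edges = np.linspace(0.0, 10.0, 11)        # 11 edges define 10 bins
centres = (edges[:-1] + edges[1:]) / 2    # exact

# Going the other way means guessing the outer edges, e.g. assuming the
# first/last bins are as wide as their neighbours:
guessed_edges = np.concatenate((
    [centres[0] - (centres[1] - centres[0]) / 2],
    (centres[:-1] + centres[1:]) / 2,
    [centres[-1] + (centres[-1] - centres[-2]) / 2],
))
```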
Officially the Spectrum metadata should not include floats, because they can't be reliably compared for equality.
Continue dissolving SData: apply DW to output spectra, drop now-unused method from class
- Create a new autoconvolution routine that works with a spectrum collection (a rough sketch follows below)
- Integrate downsampling into the autoconvolution loop over atoms; this could save a bit of memory, or benefit from parallelism over atoms. It's not clear if the benefit is large at this point, but it doesn't add much complexity either. YAGNI?
- Apply q2 order corrections, converting 1D initial spectra to 2D maps for quantum orders with isotropic Debye-Waller correction (i.e. all orders above fundamental)
- Drop _broaden_sdata as this is no longer needed
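A rough sketch of the autoconvolution idea on plain numpy arrays, just to fix the concept: each higher quantum order is obtained by convolving the previous order with the fundamental and truncating back to the original energy grid. The real routine operates on spectrum collections and applies the Debye-Waller corrections described above; the function name and the downsampling handling here are illustrative only.

```python
import numpy as np

def autoconvolve(fundamental, max_order, downsample=1):
    """Yield (order, spectrum) pairs by repeated self-convolution."""
    spectrum = np.asarray(fundamental, dtype=float)
    current = spectrum.copy()
    for order in range(1, max_order + 1):
        if order > 1:
            # Order n is order (n-1) convolved with the fundamental,
            # truncated to the original number of bins.
            current = np.convolve(current, spectrum)[: spectrum.size]
        # Downsampling inside the loop avoids holding full-resolution
        # spectra for every order at once.
        yield order, current[::downsample]
```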
The Spectrum1DCollection/Spectrum2DCollection classes allow arbitrary metadata, so there is no need for a technical distinction between the class that collects over atoms and quantum orders and one that collects over angles.
For now, fall back via SData if there is no isotropic calculation.
- Rather than go through the data transformation on arrays of zeros, we can return an almost-empty SpectrumCollection immediately.
- That finally eliminates SData from this module! There is definitely room for optimisation as autoconvolution has become more expensive, but the order-1 stuff seems to be running in similar time. It would be a good idea to do some profiling later.
- As part of rebasing this, restored some missing units to spowdersemiempiricalcalculator and fixed AbinsSDataTest. Sorry, it should be in another commit, but this is a big job and would benefit from some squashing later.
Finally, the cathartic part of the refactor where I get to delete 500 lines of code at once
This was breaking a doctest
Remove innermost loop of Spectrum creation by assembling y_data array directly.
The .from_spectra() constructor has a lot of overhead as it checks the types and numerical agreement of axes; if we are comparing Collections this should only be checked once. If this tweak works well it should move upstream to euphonic. Here we refer to a private method on the 1D collection from the 2D collection: really this can live on a parent class.
Not seeing a big impact on test timings here, but this _should_ scale better as we add more atoms.
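A minimal sketch of the optimisation described in the notes above: rather than building one Spectrum1D per atom and re-validating the shared axis in from_spectra(), stack the intensity rows into a single y_data array. Array names and shapes here are illustrative:

```python
import numpy as np

n_atoms, n_bins = 4, 100
frequencies = np.linspace(0.0, 500.0, n_bins)
per_atom_intensities = [np.random.default_rng(i).random(n_bins)
                        for i in range(n_atoms)]

# One allocation and one consistency check instead of n_atoms Spectrum1D
# objects plus a from_spectra() call:
y_data = np.vstack(per_atom_intensities)
assert y_data.shape == (n_atoms, n_bins)
```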
- The euphonic SpectrumCollection select() method doesn't work when all lines in the collection match the criterion, because such metadata is moved into the top-level metadata rather than the per-line section which is checked. The key is therefore treated as missing.
- Compounding this, the data container cannot be entirely empty, so an error is raised if no matches are found.
- More reasonably, the `from_spectra()` constructor cannot operate on an empty list, as this means no (common) axis information would be available. In this commit we avoid calling it when no data is available.
Spectrum.x_data (etc.) properties are quite expensive to call as they trigger a call to ureg to build Units and call Quantity.to() to ensure the output unit is consistent with the stored unit name. Try to avoid calling them inside a loop! In practice this means using private attributes a bit more; not ideal, but we pin the Euphonic version and I maintain the package, so for now it is safe.

Avoid calling `* ureg` where we know we have an array and can build Quantity directly. This saves on some type-checking.

Implement an "unsafe" constructor for Spectrum1DCollection from a series of Spectrum1D when we already know that the axes will be consistent. Again this saves a lot of time spent on consistency checks.
This iterator makes it more practical to iterate over the content of a spectrum collection without (slowly) creating new Spectrum objects. Profiling shows a dramatic impact in the places changed so far.
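A toy illustration of that kind of iterator, assuming private x/y arrays like those mentioned above; `TinyCollection` stands in for the real spectrum collection and is not Euphonic code:

```python
import numpy as np

class TinyCollection:
    def __init__(self, x_data, y_data, line_metadata):
        self._x_data = np.asarray(x_data)
        self._y_data = np.asarray(y_data)
        self._line_metadata = line_metadata

    def iter_rows(self):
        """Yield (metadata, y_row) pairs without building per-line objects
        or triggering unit conversions."""
        for metadata, row in zip(self._line_metadata, self._y_data):
            yield metadata, row

collection = TinyCollection(np.arange(5), np.ones((3, 5)),
                            [{"atom_index": i} for i in range(3)])
total = sum(row.sum() for _, row in collection.iter_rows())
```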
Currently multiprocessing interacts poorly with systemtests on platforms other than linux: we end up with nested multiprocessing and the tests hang indefinitely.
Aborted systemtest: as expected, tweaking the multiprocessing backend caused it to hang on Linux as well.
Performance check: full run of Abins for TOSCA using gamma-point phonon modes for a 250-atom amorphous Ge cell (calculated with a MLIP foundation model), including enumerated quantum order 2 and convolution up to order 10, running on a MacBook Pro M3 (i.e. 12-core ARM). Nightly from May (i.e. pre-refactor): 50s.

Basically, the places where parallelism has been implemented so far are no longer a bottleneck, due to other performance improvements in Euphonic. There is likely a good performance gain to be had by parallelising other parts of Abins, but it's ok to strip them out here for simplicity and stability.
Windows
Thanks for spotting this tricky bug. The changes remove the multiprocessing code that caused the hanging issue previously. Could you please revert the changes in run-tests, then do a full build_packages_from_branch to make sure it passes on all operating systems before merging into main.
This reverts commit 36b203a.
Replaced with one build instead of running separate ones
I have rebuilt the packaging because it failed in the first run: https://builds.mantidproject.org/job/build_packages_from_branch/1219/
Build passed successfully. Thanks for the fix
It wasn't easy making sense of the log output on the failed run! Looks like 3/4 platforms built and tested fine and the other one barely started?
Just a couple of code comments on tests and imports:
For cleaner and safer (parallel) systemtests, run each Abins systemtest case with its own cache directory and clean up using tempfile mechanisms.
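For reference, a minimal sketch of that arrangement using only the standard library; the class name and test are placeholders rather than the actual systemtest code:

```python
import tempfile
import unittest

class AbinsCacheTestCase(unittest.TestCase):
    def setUp(self):
        # Each test case gets its own scratch cache directory...
        self._cache_dir = tempfile.TemporaryDirectory(prefix="abins-cache-")
        # ...which is removed automatically when the test finishes.
        self.addCleanup(self._cache_dir.cleanup)

    def test_cache_dir_is_isolated(self):
        self.assertTrue(self._cache_dir.name)
```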
This looks good to me now, thanks!
This gets the new pre-commit configuration and dependency changes. As a result, it also grabs a bunch of work for abins to get past merge conflicts:
* #39968
* #39665
* #39502
* #39669
* #39739
* #39881

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Mohammed Almakki <44416400+MohamedAlmaki@users.noreply.github.com>
Co-authored-by: Adam J. Jackson <a.j.jackson@physics.org>
Description of work
This reinstates #37350, which was reverted in #39665.
This was done in order to stop breakage of package build tests on non-Linux platforms.
It seems this is caused by an incompatibility between multiprocessing "spawn" and the systemtest code, which also spawns Python subprocesses. This mechanism is not supposed to be nested. On Linux the default mechanism is "fork" which seems to work in parallel systemtest, but from Python 3.14 this default will also change to "spawn" and start failing.
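A hedged sketch of what this implies (not the actual Abins code): a helper that could use a "spawn" pool, and the plain serial map that replaces it here, since nesting spawned interpreters inside the systemtest subprocesses is where the hangs were observed.

```python
from multiprocessing import get_context

def square(x):
    return x * x

def mapped(func, items, use_multiprocessing=False):
    if use_multiprocessing:
        # "spawn" starts fresh interpreters; running this inside a process
        # that was itself launched as a Python subprocess (as the systemtest
        # runner does) is the nesting that led to hangs.
        with get_context("spawn").Pool(processes=2) as pool:
            return pool.map(func, items)
    return list(map(func, items))  # serial path used after this change

if __name__ == "__main__":
    print(mapped(square, range(5)))
```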
Summary of work
- The branch from #37350 has been rebased back onto `main`
- The conditions for hanging test suites were investigated
- Multiprocessing code is removed from Abins and replaced by serial `map` operations
- The performance impact seems to be small, due to optimisations in Euphonic which came after the parallel operations were originally introduced to the branch

Purpose of work
See #37350
Further detail of work
It's not great that multiprocessing is incompatible with the systemtest suite. In this case we can manage without, but there is a good chance we will want to use it in future. Multithreading is probably ok, but these are not equivalent tools.
To test:
Everything is covered by CI. In addition to the default stuff we need to check `build_packages_from_branch` on non-Linux platforms as this was breaking.

This shouldn't require a release note because #37350 didn't.
Reviewer
Please comment on the points listed below (full description).
Your comments will be used as part of the gatekeeper process, so please comment clearly on what you have checked during your review. If changes are made to the PR during the review process then your final comment will be the most important for gatekeepers. In this comment you should make it clear why any earlier review is still valid, or confirm that all requested changes have been addressed.
Code Review
Functional Tests
Does everything look good? Mark the review as Approve. A member of @mantidproject/gatekeepers will take care of it.

Gatekeeper
If you need to request changes to a PR then please add a comment and set the review status to "Request changes". This will stop the PR from showing up in the list for other gatekeepers.