Release 2.0.4 #510
Conversation
* made initial backend functions for adapter subsetting; still need to make the squeeze function and link it to the front end
* added subsample functionality; to do: add it to the testing procedures
* made the take function and ran the linter
* changed the name of the subsampling function
* changed documentation to be consistent with external notation rather than internal shorthand
* small formatting change to documentation
* changed subsample to have sample size and axis in the constructor
* moved transforms in adapter.py so they're in alphabetical order like the other transforms
* changed random_subsample to a map transform rather than a filter transform
* updated documentation with the new naming convention
* added arguments of take to the constructor
* added a feature to specify a percentage of the data to subsample rather than only integer input
* changed subsample in adapter.py to allow a float as input for the sample size
* renamed subsample_array and associated classes/functions to RandomSubsample and random_subsample, respectively
* included a TypeError to force users to subsample only one dataset at a time
* ran linter
* reran formatter
* clean up random subsample transform and docs
* clean up take transform and docs
* nitpick clean-up
* skip shape check for subsampled adapter transform inverse
* fix serialization of new transforms
* skip randomly subsampled key in serialization consistency check

---------

Co-authored-by: LarsKue <lars@kuehmichel.de>
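The subsampling behavior described above can be illustrated with a minimal NumPy sketch. This is a hypothetical stand-alone function, not the actual `RandomSubsample` adapter API; it shows the two accepted input types (integer sample size or float fraction) along a chosen axis:

```python
import numpy as np

def random_subsample(data: np.ndarray, sample_size, axis: int = -1, rng=None) -> np.ndarray:
    """Randomly subsample `data` along `axis`.

    `sample_size` may be an int (number of elements to keep) or a
    float in (0, 1] (fraction of elements to keep).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = data.shape[axis]
    if isinstance(sample_size, float):
        if not 0.0 < sample_size <= 1.0:
            raise ValueError("fractional sample_size must be in (0, 1]")
        sample_size = max(1, int(n * sample_size))
    # draw indices without replacement, then gather along the axis
    idx = rng.choice(n, size=sample_size, replace=False)
    return np.take(data, idx, axis=axis)

x = np.arange(20).reshape(4, 5)
y = random_subsample(x, 0.6, axis=1)  # keep int(5 * 0.6) = 3 of 5 columns
```

Note that subsampling one axis of one array at a time keeps the semantics simple, which matches the PR's decision to raise a `TypeError` when users try to subsample several datasets at once.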
* Remove old rounds data set, add documentation, and augmentation options to data sets
* Enable augmentation to parts of the data or the whole data
* Improve doc
* Enable augmentations in workflow
* Fix silly type check and improve readability of for loop
* Bring back num_batches
y = (x - mu) / sigma
log p(x) = log p(y) - log(sigma)
Fixed Jacobian computation of standardize transform
The two lines were switched, leading to performance degradation.
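The underlying change-of-variables identity for standardization, y = (x - mu) / sigma with p(x) = p(y) / sigma, can be checked numerically. The sketch below is plain NumPy, not the BayesFlow implementation, using a hand-rolled normal log-density:

```python
import numpy as np

mu, sigma = 1.5, 2.0

def log_normal_pdf(x, loc=0.0, scale=1.0):
    # log-density of a univariate normal distribution
    z = (x - loc) / scale
    return -0.5 * z**2 - 0.5 * np.log(2.0 * np.pi) - np.log(scale)

# if x ~ N(mu, sigma^2), then y = (x - mu) / sigma ~ N(0, 1)
x = 3.0
y = (x - mu) / sigma
lhs = log_normal_pdf(x, loc=mu, scale=sigma)  # log p(x)
rhs = log_normal_pdf(y) - np.log(sigma)       # log p(y) - log(sigma)
```

Getting the sign or placement of the `log(sigma)` term wrong silently distorts the learned density, which is consistent with the performance degradation the commit message reports.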
This commit contains the following changes (see PR #408 for discussions):
- DiffusionModel following the formalism in Kingma et al. (2023) [1]
- Stochastic sampler to solve SDEs
- Tests for the diffusion model

[1] https://arxiv.org/abs/2303.00848

---------

Co-authored-by: arrjon <jonas.arruda@uni-bonn.de>
Co-authored-by: Jonas Arruda <69197639+arrjon@users.noreply.github.com>
Co-authored-by: LarsKue <lars@kuehmichel.de>
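A stochastic sampler for SDEs typically discretizes dX = f(X, t) dt + g(X, t) dW. As a rough, generic illustration (an Euler-Maruyama sketch on a toy Ornstein-Uhlenbeck process, not the DiffusionModel sampler added in this PR):

```python
import numpy as np

def euler_maruyama(f, g, x0, t0, t1, n_steps, rng=None):
    """Integrate dX = f(X, t) dt + g(X, t) dW with the Euler-Maruyama scheme."""
    rng = np.random.default_rng() if rng is None else rng
    dt = (t1 - t0) / n_steps
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(n_steps):
        # Brownian increment with variance dt
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)
        x = x + f(x, t) * dt + g(x, t) * dw
        t += dt
    return x

# Toy Ornstein-Uhlenbeck process dX = -X dt + 0.5 dW,
# whose stationary distribution is N(0, 0.5^2 / 2) = N(0, 0.125)
samples = euler_maruyama(
    f=lambda x, t: -x,    # drift
    g=lambda x, t: 0.5,   # diffusion
    x0=np.ones(10_000), t0=0.0, t1=5.0, n_steps=500,
    rng=np.random.default_rng(0),
)
```

The discretization error of such schemes accumulates with the number of steps, which is relevant to the cycle-consistency test discussion further below.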
- From the table in the `bayesflow.networks` module overview, one cannot tell which network belongs to which group. This commit adds short labels to indicate inference networks (IN) and summary networks (SN).
* Add classes and transforms to simplify multimodal training:
  - Add class `MultimodalSummaryNetwork` to combine multiple summary networks, each for one modality.
  - Add transforms `Group` and `Ungroup` to gather the multimodal inputs in one variable (usually "summary_variables").
  - Add tests for new behavior.
* [no ci] add tutorial notebook for multimodal data
* [no ci] add missing training argument
* rename MultimodalSummaryNetwork to FusionNetwork
* [no ci] clarify that the network implements late fusion
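Late fusion, as implemented by the renamed `FusionNetwork`, means summarizing each modality with its own network and combining the summaries afterwards. A minimal NumPy sketch of the idea, where the per-modality summary functions are hypothetical stand-ins for trained summary networks:

```python
import numpy as np

# Hypothetical per-modality summaries (stand-ins for summary networks)
def summarize_images(x):  # (batch, H, W) -> (batch, 2)
    return np.stack([x.mean(axis=(1, 2)), x.std(axis=(1, 2))], axis=-1)

def summarize_series(x):  # (batch, T) -> (batch, 2)
    return np.stack([x.mean(axis=1), x.max(axis=1)], axis=-1)

def fuse(inputs):
    """Late fusion: summarize each modality separately, then concatenate."""
    summaries = [net(x) for net, x in inputs]
    return np.concatenate(summaries, axis=-1)

batch = 8
fused = fuse([
    (summarize_images, np.random.rand(batch, 16, 16)),
    (summarize_series, np.random.rand(batch, 50)),
])
```

Grouping the modalities into one variable (the role of the `Group` transform) is what lets the fused summary be consumed downstream as a single "summary_variables" entry.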
* add tests for find_network
A very basic transform; just the inverse of `expand_dims`.
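Since the transform is described as the inverse of `expand_dims`, the forward/inverse pair can be sketched with NumPy (a simplified stand-in for the adapter transform, not its actual interface):

```python
import numpy as np

def expand_dims(x: np.ndarray, axis: int) -> np.ndarray:
    # forward: insert a length-1 axis
    return np.expand_dims(x, axis)

def squeeze(x: np.ndarray, axis: int) -> np.ndarray:
    # inverse: remove a length-1 axis (raises if the axis is not length 1)
    return np.squeeze(x, axis=axis)

x = np.zeros((3, 4))
roundtrip = squeeze(expand_dims(x, axis=1), axis=1)
```

Passing an explicit `axis` to `np.squeeze` keeps the round trip safe: only the axis that was inserted is removed, and other incidental length-1 axes are left alone.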
Allow Python version 3.12 after successful CI run: https://github.com/bayesflow-org/bayesflow/actions/runs/14988542031
…erialization in keras (#493)

* add custom sequential to fix #491
* revert using Sequential in classifier_two_sample_test.py
* Add docstring to custom Sequential

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix copilot docstring
* remove mlp override methods

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Make docs optional dependencies compatible with python 3.10
* Add NNPE adapter
* Add NNPE adapter tests
* Only apply NNPE during training
* Integrate stage differentiation into tests
* Improve test coverage
* Fix inverse and add to tests
* Adjust class name and add docstring to forward method
* Enable compatibility with #486 by adjusting scales automatically
* Add dimensionwise noise application
* Update exception handling
* Fix tests
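A key point in the list above is that the adapter applies noise only during training. As a simplified illustration of such stage-gated noise injection (a hypothetical sketch; the actual NNPE adapter uses a different noise model with automatically adjusted, dimensionwise scales):

```python
import numpy as np

def add_training_noise(x: np.ndarray, scale: float, stage: str, rng=None) -> np.ndarray:
    """Add Gaussian noise to `x` only during the training stage.

    At any other stage (e.g. "inference" or "validation") the data
    passes through unchanged, so the perturbation acts purely as a
    training-time regularizer.
    """
    if stage != "training":
        return x
    rng = np.random.default_rng() if rng is None else rng
    return x + rng.normal(scale=scale, size=x.shape)
```

Gating on the stage rather than on a global flag keeps the adapter stateless, so the same adapter instance can serve training and inference pipelines.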
@stefanradev93 I cannot reproduce the failure locally; can you? I think skipping this specific test for the diffusion models (e.g., via a marker) would be acceptable, as we do not expect cycle consistency to hold strictly for the number of steps we want to put in here, and the errors can accumulate in more or less arbitrary ways. We might want to use (at least somewhat) trained models in the future, where we can expect more consistent results.
The error is stochastic, and I think you are right. We should not rely on this test for now. It is still a bit strange that this happens only for JAX.
I left a detailed review. See individual comments.
The implementation is a simple wrapper leveraging the batching capabilities of `rejection_sample`.
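Batched rejection sampling of this kind can be sketched as follows. This is a hypothetical NumPy helper with an assumed signature, not the BayesFlow `rejection_sample` API; it keeps redrawing only the batch slots whose candidates were rejected:

```python
import numpy as np

def rejection_sample(batch_shape, proposal, accept, rng=None, max_iters=1000):
    """Draw samples from `proposal` until `accept` holds for every slot.

    `proposal(shape, rng)` returns candidate samples of the given shape;
    `accept(x)` returns a boolean mask of the same shape.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = proposal(batch_shape, rng)
    pending = ~accept(out)  # slots still awaiting an accepted sample
    for _ in range(max_iters):
        if not pending.any():
            return out
        candidates = proposal(batch_shape, rng)
        ok = accept(candidates) & pending
        out = np.where(ok, candidates, out)  # fill accepted slots
        pending &= ~ok
    raise RuntimeError("some slots never produced an accepted sample")

# Example: standard normals truncated to [-1, 1]
x = rejection_sample(
    (2, 1000),
    proposal=lambda shape, rng: rng.normal(size=shape),
    accept=lambda v: np.abs(v) <= 1.0,
    rng=np.random.default_rng(0),
)
```

Resampling whole batches and masking is wasteful per draw but vectorizes well, which is presumably why a thin wrapper over a batched primitive suffices here.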
- the test is unstable for untrained diffusion models, as the network's output is not sufficiently smooth for the step size we use
- remove the diffusion_model marker
Thanks everyone for the massive work in contributing these features and also for cleaning up the parts that I commented on so quickly!
Rename `approximator.summaries` to `summarize`, with deprecation
Summary
This PR introduces the next set of improvements and qualitative changes for the upcoming v2.0.4 release.
Major Changes

Approximators

- … Approximation classes, eliminating the need for stateful adapters

Bug Fixes

- `pairplot` rendering issues