Release 2.0.4 #510


Merged · 77 commits · Jun 17, 2025

Changes shown from 58 of 77 commits

Commits
b4d0a72
Subset arrays (#411)
eodole May 6, 2025
4140afe
[no ci] docs: start of user guide - draft intro, gen models
vpratz May 7, 2025
728b130
[no ci] add draft for data processing section
vpratz May 7, 2025
d461d60
[no ci] user guide: add stub on summary/inference networks
vpratz May 7, 2025
21690fd
[no ci] user guide: add stub on additional topics
vpratz May 7, 2025
6ecd258
[no ci] add early stage disclaimer to user guide
vpratz May 8, 2025
6990d08
pin dependencies in docs, fixes snowballstemmer error
vpratz May 8, 2025
a4d58c9
fix: correct check for "no accepted samples" in rejection_sample
vpratz May 8, 2025
76500df
Stabilize MultivariateNormalScore by constraining initialization in P…
han-ol May 8, 2025
55d3536
Augmentation (#470)
stefanradev93 May 8, 2025
b28ae2c
Fixed log det jac computation of standardize transform
han-ol May 9, 2025
0ed0e88
Merge pull request #471 from han-ol/fix-standardze-log-det-jac
han-ol May 9, 2025
265f667
Fix fill_triangular_matrix
han-ol May 9, 2025
ccf9ca0
Deal with inference_network.log_prob to return dict (as PointInferenc…
han-ol May 9, 2025
e2c8304
Add diffusion model implementation (#408)
vpratz May 12, 2025
fef8b90
[no ci] Merge remote-tracking branch 'upstream/dev' into docs-user-guide
vpratz May 12, 2025
e13f7c4
[no ci] networks docstrings: summary/inference network indicator (#462)
vpratz May 12, 2025
35cd671
`ModelComparisonSimulator`: handle different outputs from individual …
Kucharssim May 13, 2025
afef095
Add classes and transforms to simplify multimodal training (#473)
vpratz May 14, 2025
92426d6
allow dispatch of summary/inference network from type
vpratz May 20, 2025
25b73d3
Add squeeze transform
vpratz May 20, 2025
56ddd99
[no ci] fix examples in ExpandDims docstring
vpratz May 20, 2025
98a6bca
squeeze: adapt example, add comment for changing batch dims
vpratz May 20, 2025
3c0bcc4
Merge remote-tracking branch 'upstream/dev' into feat-squeeze-transform
vpratz May 20, 2025
4781e2e
Permit Python version 3.12 (#474)
vpratz May 21, 2025
cd2c212
Change order in readme and reference new book [skip ci]
stefanradev93 May 26, 2025
12c72bc
make docs optional dependencies compatible with python 3.10
daniel-habermann May 26, 2025
1fe8c94
Add a custom `Sequential` network to avoid issues with building and s…
LarsKue May 26, 2025
e13c944
Merge pull request #495 from daniel-habermann/fix-doc-dependencies
vpratz May 27, 2025
361fa45
Add Nnpe adapter class (#488)
elseml May 27, 2025
38186ec
Align diffusion model with other inference networks and remove deprec…
stefanradev93 May 29, 2025
13295ee
add replace nan adapter (#459)
arrjon May 29, 2025
619d14b
[no ci] docs: add basic likelihood estimation example
vpratz May 29, 2025
1ea451b
make metrics serializable
vpratz May 30, 2025
09fa9c6
Remove layer norm; add epsilon to std dev for stability of pos def link
han-ol May 30, 2025
c1fcc83
add end-to-end test for fusion network
vpratz Jun 1, 2025
01aadf1
fix: ensure that build is called in FusionNetwork
vpratz Jun 1, 2025
996a700
Correctly track train / validation losses (#485)
LarsKue Jun 1, 2025
677bacb
Add shuffle parameter to datasets
elseml Jun 1, 2025
928fbce
fix: correct vjp/jvp calls in FreeFormFlow
vpratz Jun 2, 2025
8c3301f
test: add basic compute_metrics test for inference networks
vpratz Jun 2, 2025
fc98328
[no ci] extend point approximator tests
vpratz Jun 2, 2025
0205c3b
[no ci] skip unstable MVN sample test again
vpratz Jun 2, 2025
20ecbfc
update README with more specific install instructions
vpratz Jun 2, 2025
82d088c
fix FreeFormFlow: remove superfluous index form signature change
vpratz Jun 3, 2025
dc02245
[no ci] FreeFormFlow MLP defaults: set dropout to 0
vpratz Jun 3, 2025
6acc628
Better pairplots (#505)
jerrymhuang Jun 4, 2025
449a79a
[no ci] Formatting: escaped space only in raw strings
vpratz Jun 5, 2025
2e6dba5
[no ci] fix typo in error message, model comparison approximator
vpratz Jun 7, 2025
2c90547
[no ci] fix: size_of could not handle basic int/float
vpratz Jun 7, 2025
ac0461a
add tests for model comparison approximator
vpratz Jun 7, 2025
735969c
Generalize sample shape to arbitrary N-D arrays
stefanradev93 Jun 9, 2025
68c6c92
[WIP] Move standardization into approximators and make adapter statel…
stefanradev93 Jun 12, 2025
37fd0ed
Replace deprecation with FutureWarning
stefanradev93 Jun 12, 2025
c230c8e
Adjust filename for LV
stefanradev93 Jun 13, 2025
ca9e245
Fix types for subnets
stefanradev93 Jun 13, 2025
a611f70
[no ci] minor fixes to RandomSubsample transform
vpratz Jun 14, 2025
ff2db55
[no ci] remove subnet deprecation in cont-time CM
vpratz Jun 14, 2025
7071f1a
Merge branch 'main' into dev
stefanradev93 Jun 14, 2025
d42fe51
Remove empty file [no ci]
stefanradev93 Jun 14, 2025
c44333d
Revert layer type for coupling flow [skip ci]
stefanradev93 Jun 14, 2025
9018ce6
remove failing import due to removed find_noise_schedule.py [no ci]
vpratz Jun 14, 2025
b43f1cc
Add utility function for batched simulations (#511)
vpratz Jun 16, 2025
a8e9a72
Restore PositiveDefinite link with deprecation warning
han-ol Jun 16, 2025
990df1e
skip cycle consistency test for diffusion models
vpratz Jun 16, 2025
1d4f646
Implement changes to NNPE adapter for #510 (#514)
elseml Jun 16, 2025
664f00a
[no ci] remove unnecessary serializable decorator on rmse
vpratz Jun 16, 2025
5659773
fix type hint in squeeze [no ci]
vpratz Jun 16, 2025
9030803
reintroduce comment in jax approximator [no ci]
vpratz Jun 16, 2025
1cd8ffb
remove unnecessary getattr calls [no ci]
vpratz Jun 16, 2025
3c9797d
Rename local variable transformation_type
han-ol Jun 16, 2025
f7fe860
fix error type in diffusion model [no ci]
vpratz Jun 16, 2025
212af40
remove non-functional per_training_step from plots.loss
vpratz Jun 16, 2025
676b0cd
Update doc [skip ci]
stefanradev93 Jun 17, 2025
329ebe7
rename approximator.summaries to summarize with deprecation
vpratz Jun 17, 2025
00f0a89
address remaining comments
LarsKue Jun 17, 2025
057f3fd
Merge pull request #516 from bayesflow-org/rename-summaries
vpratz Jun 17, 2025
3 changes: 3 additions & 0 deletions .gitignore
@@ -39,3 +39,6 @@ docs/

# MacOS
.DS_Store

# Rproj
.Rproj.user
79 changes: 42 additions & 37 deletions README.md
@@ -49,45 +49,12 @@ neural networks for parameter estimation, model comparison, and model validation
when working with intractable simulators whose behavior as a whole is too
complex to be described analytically.

## Getting Started

Using the high-level interface is easy, as demonstrated by the minimal working example below:

```python
import bayesflow as bf

workflow = bf.BasicWorkflow(
inference_network=bf.networks.CouplingFlow(),
summary_network=bf.networks.TimeSeriesNetwork(),
inference_variables=["parameters"],
summary_variables=["observables"],
simulator=bf.simulators.SIR()
)

history = workflow.fit_online(epochs=15, batch_size=32, num_batches_per_epoch=200)

diagnostics = workflow.plot_default_diagnostics(test_data=300)
```

For an in-depth exposition, check out our walkthrough notebooks below.

1. [Linear regression starter example](examples/Linear_Regression_Starter.ipynb)
2. [From ABC to BayesFlow](examples/From_ABC_to_BayesFlow.ipynb)
3. [Two moons starter example](examples/Two_Moons_Starter.ipynb)
4. [Rapid iteration with point estimators](examples/Lotka_Volterra_Point_Estimation_and_Expert_Stats.ipynb)
5. [SIR model with custom summary network](examples/SIR_Posterior_Estimation.ipynb)
6. [Bayesian experimental design](examples/Bayesian_Experimental_Design.ipynb)
7. [Simple model comparison example](examples/One_Sample_TTest.ipynb)
8. [Moving from BayesFlow v1.1 to v2.0](examples/From_BayesFlow_1.1_to_2.0.ipynb)

More tutorials are always welcome! Please consider making a pull request if you have a cool application that you want to contribute.

## Install

You can install the latest stable version from PyPI using:
We currently support Python 3.10 to 3.12. You can install the latest stable version from PyPI using:

```bash
pip install bayesflow
pip install "bayesflow>=2.0"
```

If you want the latest features, you can install from source:
Expand Down Expand Up @@ -132,9 +99,47 @@ export KERAS_BACKEND=jax

This way, you also don't have to manually set the backend every time you are starting Python to use BayesFlow.

**Caution:** Some development environments (e.g., VSCode or PyCharm) can silently overwrite environment variables. If you have set your backend as an environment variable and you still get Keras-related import errors when loading BayesFlow, these IDE shenanigans might be the culprit. Try setting the Keras backend in your Python script via `import os; os.environ["KERAS_BACKEND"] = "<YOUR-BACKEND>"`.
## Getting Started

Using the high-level interface is easy, as demonstrated by the minimal working example below:

```python
import bayesflow as bf

workflow = bf.BasicWorkflow(
inference_network=bf.networks.CouplingFlow(),
summary_network=bf.networks.TimeSeriesNetwork(),
inference_variables=["parameters"],
summary_variables=["observables"],
simulator=bf.simulators.SIR()
)

history = workflow.fit_online(epochs=15, batch_size=32, num_batches_per_epoch=200)

diagnostics = workflow.plot_default_diagnostics(test_data=300)
```

For an in-depth exposition, check out our expanding list of resources below.

### Books

Many examples from [Bayesian Cognitive Modeling: A Practical Course](https://bayesmodels.com/) by Lee & Wagenmakers (2013), implemented in [BayesFlow](https://kucharssim.github.io/bayesflow-cognitive-modeling-book/).

### Tutorial notebooks

1. [Linear regression starter example](examples/Linear_Regression_Starter.ipynb)
2. [From ABC to BayesFlow](examples/From_ABC_to_BayesFlow.ipynb)
3. [Two moons starter example](examples/Two_Moons_Starter.ipynb)
4. [Rapid iteration with point estimators](examples/Lotka_Volterra_Point_Estimation.ipynb)
5. [SIR model with custom summary network](examples/SIR_Posterior_Estimation.ipynb)
6. [Bayesian experimental design](examples/Bayesian_Experimental_Design.ipynb)
7. [Simple model comparison example](examples/One_Sample_TTest.ipynb)
8. [Likelihood estimation](examples/Likelihood_Estimation.ipynb)
9. [Moving from BayesFlow v1.1 to v2.0](examples/From_BayesFlow_1.1_to_2.0.ipynb)

More tutorials are always welcome! Please consider making a pull request if you have a cool application that you want to contribute.

### From Source
## Contributing

If you want to contribute to BayesFlow, we recommend installing it from source, see [CONTRIBUTING.md](CONTRIBUTING.md) for more details.

199 changes: 198 additions & 1 deletion bayesflow/adapters/adapter.py
@@ -14,17 +14,24 @@
Drop,
ExpandDims,
FilterTransform,
Group,
Keep,
Log,
MapTransform,
NNPE,
NumpyTransform,
OneHot,
Rename,
SerializableCustomTransform,
Squeeze,
Sqrt,
Standardize,
ToArray,
Transform,
Ungroup,
RandomSubsample,
Take,
NanToNum,
)
from .transforms.filter_transform import Predicate

@@ -598,6 +605,52 @@ def expand_dims(self, keys: str | Sequence[str], *, axis: int | tuple):
self.transforms.append(transform)
return self

def group(self, keys: Sequence[str], into: str, *, prefix: str = ""):
"""Append a :py:class:`~transforms.Group` transform to the adapter.

Groups the given variables as a dictionary in the key `into`. As most transforms do
not support nested structures, this should usually be the last transform in the adapter.

Parameters
----------
keys : Sequence of str
The names of the variables to group together.
into : str
The name of the variable to store the grouped variables in.
prefix : str, optional
An optional common prefix of the variable names before grouping, which will be removed after grouping.

Raises
------
ValueError
If a prefix is specified, but a provided key does not start with the prefix.
"""
if isinstance(keys, str):
keys = [keys]

transform = Group(keys=keys, into=into, prefix=prefix)
self.transforms.append(transform)
return self

def ungroup(self, key: str, *, prefix: str = ""):
"""Append an :py:class:`~transforms.Ungroup` transform to the adapter.

Ungroups the variables in `key` from a dictionary into individual entries. Most transforms do
not support nested structures, so this can be used to flatten a nested structure.
The nesting can be re-established after the transforms using the :py:meth:`group` method.

Parameters
----------
key : str
The name of the variable to ungroup. The corresponding variable has to be a dictionary.
prefix : str, optional
An optional common prefix that will be added to the ungrouped variable names. This can be necessary
to avoid duplicate names.
"""
transform = Ungroup(key=key, prefix=prefix)
self.transforms.append(transform)
return self
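The round-trip behavior of `group` and `ungroup` can be sketched on plain dictionaries. This is a standalone illustration of the intended data movement, not the actual BayesFlow implementation; the helper functions and the example keys are made up for demonstration.

```python
def group(data: dict, keys: list[str], into: str, prefix: str = "") -> dict:
    """Collect `keys` into a nested dict under `into`, stripping `prefix`."""
    grouped = {k: v for k, v in data.items() if k not in keys}
    inner = {}
    for key in keys:
        if prefix and not key.startswith(prefix):
            raise ValueError(f"key '{key}' does not start with prefix '{prefix}'")
        inner[key[len(prefix):]] = data[key]
    grouped[into] = inner
    return grouped


def ungroup(data: dict, key: str, prefix: str = "") -> dict:
    """Flatten the nested dict at `key` back into top-level entries."""
    flat = {k: v for k, v in data.items() if k != key}
    for inner_key, value in data[key].items():
        flat[prefix + inner_key] = value
    return flat


batch = {"x_a": 1, "x_b": 2, "y": 3}
nested = group(batch, keys=["x_a", "x_b"], into="x", prefix="x_")
# nested == {"y": 3, "x": {"a": 1, "b": 2}}
restored = ungroup(nested, key="x", prefix="x_")
# restored == {"y": 3, "x_a": 1, "x_b": 2}
```

Note how the `prefix` passed to `ungroup` re-establishes the original names, so grouping and ungrouping with the same prefix are inverses of each other.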

def keep(self, keys: str | Sequence[str]):
"""Append a :py:class:`~transforms.Keep` transform to the adapter.

@@ -648,6 +701,43 @@ def map_dtype(self, keys: str | Sequence[str], to_dtype: str):
self.transforms.append(transform)
return self

def nnpe(
self,
keys: str | Sequence[str],
*,
spike_scale: float | None = None,
slab_scale: float | None = None,
per_dimension: bool = True,
seed: int | None = None,
):
"""Append an :py:class:`~transforms.NNPE` transform to the adapter.

Parameters
----------
keys : str or Sequence of str
The names of the variables to transform.
spike_scale : float or np.ndarray or None, default=None
The scale of the spike (Normal) distribution. Automatically determined if None.
slab_scale : float or np.ndarray or None, default=None
The scale of the slab (Cauchy) distribution. Automatically determined if None.
per_dimension : bool, default=True
If True, noise is applied per dimension of the last axis of the input data.
If False, noise is applied globally.
seed : int or None
The seed for the random number generator. If None, a random seed is used.
"""
if isinstance(keys, str):
keys = [keys]

transform = MapTransform(
{
key: NNPE(spike_scale=spike_scale, slab_scale=slab_scale, per_dimension=per_dimension, seed=seed)
for key in keys
}
)
self.transforms.append(transform)
return self

def one_hot(self, keys: str | Sequence[str], num_classes: int):
"""Append a :py:class:`~transforms.OneHot` transform to the adapter.

@@ -665,6 +755,28 @@ def one_hot(self, keys: str | Sequence[str], num_classes: int):
self.transforms.append(transform)
return self

def random_subsample(self, key: str, *, sample_size: int | float, axis: int = -1):
"""
Append a :py:class:`~transforms.RandomSubsample` transform to the adapter.

Parameters
----------
key : str
The name of the variable to subsample.
sample_size : int or float
The number of samples to draw, or a fraction between 0 and 1 of the total number of samples to draw.
axis : int, optional
Which axis to draw samples over. The last axis is used by default.
"""

if not isinstance(key, str):
raise TypeError("Can only subsample one batch entry at a time.")

transform = MapTransform({key: RandomSubsample(sample_size=sample_size, axis=axis)})

self.transforms.append(transform)
return self

def rename(self, from_key: str, to_key: str):
"""Append a :py:class:`~transforms.Rename` transform to the adapter.

@@ -708,6 +820,24 @@ def split(self, key: str, *, into: Sequence[str], indices_or_sections: int | Seq

return self

def squeeze(self, keys: str | Sequence[str], *, axis: int | tuple):
"""Append a :py:class:`~transforms.Squeeze` transform to the adapter.

Parameters
----------
keys : str or Sequence of str
The names of the variables to squeeze.
axis : int or tuple
The axis to squeeze. As the number of batch dimensions might change, we advise using negative
numbers (i.e., indexing from the end instead of the start).
"""
if isinstance(keys, str):
keys = [keys]

transform = MapTransform({key: Squeeze(axis=axis) for key in keys})
self.transforms.append(transform)
return self
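The squeeze semantics follow numpy: a size-1 axis is removed. A quick sketch of why the negative-axis advice matters — a negative index keeps pointing at the same trailing axis even if a batch dimension is later prepended:

```python
import numpy as np

x = np.zeros((32, 1, 5))
y = np.squeeze(x, axis=-2)  # removes the size-1 axis; y.shape == (32, 5)
```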

def sqrt(self, keys: str | Sequence[str]):
"""Append an :py:class:`~transforms.Sqrt` transform to the adapter.

@@ -741,7 +871,7 @@ def standardize(
Names of variables to include in the transform.
exclude : str or Sequence of str, optional
Names of variables to exclude from the transform.
**kwargs : dict
**kwargs :
Additional keyword arguments passed to the transform.
"""
transform = FilterTransform(
@@ -754,6 +884,42 @@
self.transforms.append(transform)
return self

def take(
self,
include: str | Sequence[str] = None,
*,
indices: Sequence[int],
axis: int = -1,
predicate: Predicate = None,
exclude: str | Sequence[str] = None,
):
"""
Append a :py:class:`~transforms.Take` transform to the adapter.

Parameters
----------
include : str or Sequence of str, optional
Names of variables to include in the transform.
indices : Sequence of int
Which indices to take from the data.
axis : int, optional
Which axis to take from. The last axis is used by default.
predicate : Predicate, optional
Function that indicates which variables should be transformed.
exclude : str or Sequence of str, optional
Names of variables to exclude from the transform.
"""
transform = FilterTransform(
transform_constructor=Take,
predicate=predicate,
include=include,
exclude=exclude,
indices=indices,
axis=axis,
)
self.transforms.append(transform)
return self
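At its core, the Take transform keeps only the listed indices along one axis, i.e. `np.take`. A minimal sketch, assuming `indices` is a plain sequence of ints:

```python
import numpy as np

x = np.arange(12).reshape(3, 4)
kept = np.take(x, indices=[0, 2], axis=-1)  # keep columns 0 and 2
# kept == [[0, 2], [4, 6], [8, 10]]
```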

def to_array(
self,
include: str | Sequence[str] = None,
@@ -791,3 +957,34 @@ def to_dict(self):
transform = ToDict()
self.transforms.append(transform)
return self

def nan_to_num(
self,
keys: str | Sequence[str],
default_value: float = 0.0,
return_mask: bool = False,
mask_prefix: str = "mask",
):
"""
Append a :py:class:`~transforms.NanToNum` transform to the adapter.

Parameters
----------
keys : str or sequence of str
The names of the variables to clean / mask.
default_value : float
Value to substitute wherever data is NaN. Defaults to 0.0.
return_mask : bool
If True, encode a binary missingness mask alongside the data. Defaults to False.
mask_prefix : str
Prefix for the mask key in the output dictionary. Defaults to 'mask'. If the mask key already exists,
a ValueError is raised to avoid overwriting existing masks.
"""
if isinstance(keys, str):
keys = [keys]

for key in keys:
self.transforms.append(
NanToNum(key=key, default_value=default_value, return_mask=return_mask, mask_prefix=mask_prefix)
)
return self
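The NanToNum behavior can be sketched in numpy: NaNs are replaced by a default value, and an optional binary mask records which entries were observed. The mask convention shown here (1 = observed, 0 = missing) is an assumption for illustration; check the transform itself for the authoritative encoding.

```python
import numpy as np


def nan_to_num(x: np.ndarray, default_value: float = 0.0,
               return_mask: bool = False):
    """Replace NaNs and optionally return an observed-data mask (sketch)."""
    mask = (~np.isnan(x)).astype(np.float32)          # 1 where observed
    cleaned = np.where(np.isnan(x), default_value, x)  # substitute NaNs
    return (cleaned, mask) if return_mask else cleaned


x = np.array([1.0, np.nan, 3.0])
cleaned, mask = nan_to_num(x, default_value=0.0, return_mask=True)
# cleaned == [1.0, 0.0, 3.0], mask == [1.0, 0.0, 1.0]
```

Passing the mask alongside the cleaned data lets a downstream network distinguish a true zero from an imputed one.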
7 changes: 7 additions & 0 deletions bayesflow/adapters/transforms/__init__.py
@@ -8,21 +8,28 @@
from .elementwise_transform import ElementwiseTransform
from .expand_dims import ExpandDims
from .filter_transform import FilterTransform
from .group import Group
from .keep import Keep
from .log import Log
from .map_transform import MapTransform
from .nnpe import NNPE
from .numpy_transform import NumpyTransform
from .one_hot import OneHot
from .rename import Rename
from .scale import Scale
from .serializable_custom_transform import SerializableCustomTransform
from .shift import Shift
from .split import Split
from .squeeze import Squeeze
from .sqrt import Sqrt
from .standardize import Standardize
from .to_array import ToArray
from .to_dict import ToDict
from .transform import Transform
from .random_subsample import RandomSubsample
from .take import Take
from .ungroup import Ungroup
from .nan_to_num import NanToNum

from ...utils._docs import _add_imports_to_all
