175 changes: 124 additions & 51 deletions README.md
[![GitHub license](https://img.shields.io/github/license/sbi-dev/sbi)](https://github.com/sbi-dev/sbi/blob/master/LICENSE.txt)
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02505/status.svg)](https://doi.org/10.21105/joss.02505)

## `sbi`: Simulation-Based Inference

[Getting Started](https://sbi-dev.github.io/sbi/latest/tutorials/00_getting_started/) |
[Documentation](https://sbi-dev.github.io/sbi/)

`sbi` is a Python package for simulation-based inference, designed to meet the needs of
both researchers and practitioners. Whether you need fine-grained control or an
easy-to-use interface, `sbi` has you covered.

With `sbi`, you can perform simulation-based inference (SBI) using a Bayesian approach:
Given a simulator that models a real-world process, SBI estimates the full posterior
distribution over the simulator’s parameters based on observed data. This distribution
indicates the most likely parameter values while additionally quantifying uncertainty
and revealing potential interactions between parameters.
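
In terms of Bayes' rule, with simulator parameters $\theta$ and an observation $x_o$,
this is the distribution

$$p(\theta \mid x_o) \propto p(x_o \mid \theta)\,p(\theta),$$

where the likelihood $p(x_o \mid \theta)$ is defined only implicitly through the
simulator.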

### Key Features of `sbi`

`sbi` offers a blend of flexibility and ease of use:

- **Low-Level Interfaces**: For those who require maximum control over the inference
process, `sbi` provides low-level interfaces that allow you to fine-tune many aspects
of your workflow.
- **High-Level Interfaces**: If you prefer simplicity and efficiency, `sbi` also offers
high-level interfaces that enable quick and easy implementation of complex inference
tasks.

In addition, `sbi` supports a wide range of state-of-the-art inference algorithms (see
below for a list of implemented methods):

- **Amortized Methods**: These methods enable the reuse of posterior estimators across
multiple observations without the need to retrain.
- **Sequential Methods**: These methods focus on individual observations, optimizing the
  number of simulations required (a sketch follows this list).
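
For illustration, here is the sequential pattern in outline: a sketch, assuming that
`prior`, a batched `simulator`, and an observation `x_o` are already defined
(placeholder names, not part of the `sbi` API):

```python
from sbi.inference import SNPE

# Multi-round NPE: each round draws new simulations from the current proposal,
# focusing the training data on the observation x_o.
inference = SNPE(prior=prior)
proposal = prior
for _ in range(2):
    theta = proposal.sample((500,))
    x = simulator(theta)  # placeholder: your simulator, vectorized over parameters
    inference.append_simulations(theta, x, proposal=proposal).train()
    posterior = inference.build_posterior()
    proposal = posterior.set_default_x(x_o)
```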

Beyond inference, `sbi` also provides:

- **Validation Tools**: Built-in methods to validate and verify the accuracy of your
inferred posteriors.
- **Plotting and Analysis Tools**: Comprehensive functions for visualizing and analyzing
  results, helping you interpret the posterior distributions with ease (see the example
  below).
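
For example, posterior samples can be visualized with the built-in `pairplot`; a
sketch, assuming `samples` is a tensor of posterior samples:

```python
from sbi.analysis import pairplot

# 1D marginals on the diagonal, pairwise 2D marginals off the diagonal.
fig, axes = pairplot(samples, figsize=(6, 6))
```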

Getting started with `sbi` is straightforward, requiring only a few lines of code:

```python
from sbi.inference import SNPE
# Given: parameters theta and corresponding simulations x
inference = SNPE(prior=prior)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()
```
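
Once trained, the posterior can be queried for a specific observation. A minimal
sketch, assuming an observation `x_o` shaped like a single simulation output:

```python
# Draw posterior samples conditioned on x_o and evaluate their log-probability.
samples = posterior.sample((1000,), x=x_o)
log_probs = posterior.log_prob(samples, x=x_o)
```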

### Installation

`sbi` requires Python 3.9 or higher. While a GPU isn't necessary, it can improve
performance in some cases. We recommend using a virtual environment with
[`conda`](https://docs.conda.io/en/latest/miniconda.html) for an easy setup.

To install `sbi`, follow these steps:

1. **Create a Conda Environment** (if using Conda):

   ```bash
   conda create -n sbi_env python=3.9 && conda activate sbi_env
   ```

2. **Install `sbi`**: Independent of whether you are using `conda` or not, `sbi` can be
installed using `pip`:

   ```commandline
   pip install sbi
   ```

3. **Test the installation**:
Open a Python prompt and run

   ```python
   from sbi.examples.minimal import simple
   posterior = simple()
   print(posterior)
   ```
## Tutorials

If you're new to `sbi`, we recommend starting with our [Getting
Started](https://sbi-dev.github.io/sbi/latest/tutorials/00_getting_started/) tutorial.

You can also access and run these tutorials directly in your browser by opening a
[Codespace](https://docs.github.com/en/codespaces/overview). To do so, click the green
“Code” button on the GitHub repository and select “Open with Codespaces.” This provides
a fully functional environment where you can explore `sbi` through Jupyter notebooks.

## Inference Algorithms

The following inference algorithms are currently available. You can find instructions on
how to run each of these methods
[here](https://sbi-dev.github.io/sbi/latest/tutorials/16_implemented_methods/).

### Neural Posterior Estimation: amortized (NPE) and sequential (SNPE)

* [`(S)NPE_A`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.npe.npe_a.NPE_A)
(including amortized single-round `NPE`) from Papamakarios G and Murray I [_Fast
ε-free Inference of Simulation Models with Bayesian Conditional Density
Estimation_](https://proceedings.neurips.cc/paper/2016/hash/6aca97005c68f1206823815f66102863-Abstract.html)
(NeurIPS 2016).

* [`(S)NPE_C`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.npe.npe_c.NPE_C)
or `APT` from Greenberg D, Nonnenmacher M, and Macke J [_Automatic Posterior
Transformation for likelihood-free inference_](https://arxiv.org/abs/1905.07488) (ICML
2019).

* `TSNPE` from Deistler M, Goncalves P, and Macke J [_Truncated proposals for scalable
and hassle-free simulation-based inference_](https://arxiv.org/abs/2210.04815)
(NeurIPS 2022).

* [`FMPE`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.fmpe.fmpe.FMPE)
from Wildberger, J., Dax, M., Buchholz, S., Green, S., Macke, J. H., & Schölkopf, B.
[_Flow matching for scalable simulation-based
inference_](https://proceedings.neurips.cc/paper_files/paper/2023/hash/3663ae53ec078860bb0b9c6606e092a0-Abstract-Conference.html)
(NeurIPS 2023).

* [`NPSE`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.npse.npse.NPSE) from
Geffner, T., Papamakarios, G., & Mnih, A. [_Compositional score modeling for
simulation-based inference_](https://proceedings.mlr.press/v202/geffner23a.html)
(ICML 2023).

### Neural Likelihood Estimation: amortized (NLE) and sequential (SNLE)

* [`(S)NLE`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nle.nle_a.NLE_A)
or just `SNL` from Papamakarios G, Sterratt DC, and Murray I [_Sequential Neural
Likelihood_](https://arxiv.org/abs/1805.07226) (AISTATS 2019).

### Neural Ratio Estimation: amortized (NRE) and sequential (SNRE)

* [`(S)NRE_A`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nre.nre_a.NRE_A)
or `AALR` from Hermans J, Begy V, and Louppe G. [_Likelihood-free Inference with
Amortized Approximate Likelihood Ratios_](https://arxiv.org/abs/1903.04057) (ICML
2020).

* [`(S)NRE_B`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nre.nre_b.NRE_B)
or `SRE` from Durkan C, Murray I, and Papamakarios G. [_On Contrastive Learning for
Likelihood-free Inference_](https://arxiv.org/abs/2002.03712) (ICML 2020).

* [`(S)NRE_C`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nre.nre_c.NRE_C)
or `NRE-C` from Miller BK, Weniger C, Forré P. [_Contrastive Neural Ratio
Estimation_](https://arxiv.org/abs/2210.06170) (NeurIPS 2022).

* [`BNRE`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nre.bnre.BNRE) from
Delaunoy A, Hermans J, Rozet F, Wehenkel A, and Louppe G. [_Towards Reliable
Simulation-Based Inference with Balanced Neural Ratio
Estimation_](https://arxiv.org/abs/2208.13624) (NeurIPS 2022).

### Neural Variational Inference, amortized (NVI) and sequential (SNVI)

* [`SNVI`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.posteriors.vi_posterior)
from Glöckler M, Deistler M, Macke J, [_Variational methods for simulation-based
inference_](https://openreview.net/forum?id=kZ0UYdhqkNY) (ICLR 2022).

### Mixed Neural Likelihood Estimation (MNLE)

* [`MNLE`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nle.mnle.MNLE) from
Boelts J, Lueckmann JM, Gao R, Macke J, [_Flexible and efficient simulation-based
inference for models of decision-making_](https://elifesciences.org/articles/77220)
(eLife 2022).

## Feedback and Contributions

We welcome any feedback on how `sbi` is working for your inference problems (see
[Discussions](https://github.com/sbi-dev/sbi/discussions)) and are happy to receive bug
reports, pull requests, and other feedback (see
[contribute](https://sbi-dev.github.io/sbi/latest/contribute/)). We wish to maintain a positive
community; please read our [Code of Conduct](CODE_OF_CONDUCT.md).

## Acknowledgments
Expand All @@ -121,15 +189,18 @@ Durkan's `lfi`. `sbi` runs as a community project. See also

`sbi` has been supported by the German Federal Ministry of Education and Research (BMBF)
through project ADIMEM (FKZ 01IS18052 A-D), project SiMaLeSAM (FKZ 01IS21055A) and the
Tübingen AI Center (FKZ 01IS18039A). Since 2024, `sbi` has been supported by the
appliedAI Institute for Europe.

## License

[Apache License Version 2.0 (Apache-2.0)](https://www.apache.org/licenses/LICENSE-2.0)

## Citation

If you use `sbi`, consider citing the [sbi software
paper](https://doi.org/10.21105/joss.02505), in addition to the original research
articles describing the specific sbi-algorithm(s) you are using.

```latex
@article{tejero-cantero2020sbi,
  doi = {10.21105/joss.02505},
  year = {2020},
  journal = {Journal of Open Source Software},
  title = {sbi: A toolkit for simulation-based inference},
}
```

The above citation refers to the original version of the `sbi` project and has a
persistent DOI. Additionally, new releases of `sbi` are citable via
[Zenodo](https://zenodo.org/record/3993098), where we create a new DOI for every
release.
11 changes: 5 additions & 6 deletions docs/docs/faq/question_01_leakage.md
This approach will make sampling slower, but samples will not "leak".

- resort to single-round SNPE and (if necessary) increase your simulation budget.

- if your prior is either Gaussian (`torch.distributions.MultivariateNormal`) or Uniform
  (`sbi.utils.BoxUniform`), you can avoid leakage by using a mixture density network as
  density estimator, i.e., by setting `density_estimator='mdn'` when creating the `SNPE`
  inference object (see the sketch after this list). When running inference, there
  should be a print statement "Using SNPE-C with non-atomic loss".

- use a different algorithm, e.g., SNRE and SNLE. Note, however, that these algorithms
can have different issues and potential pitfalls.
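
A minimal sketch of the MDN option, assuming `prior`, `theta`, and `x` are already
defined:

```python
from sbi.inference import SNPE

# A mixture density network avoids leakage for Gaussian or uniform priors,
# since SNPE-C can then use its non-atomic loss.
inference = SNPE(prior=prior, density_estimator="mdn")
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()
```
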
2 changes: 1 addition & 1 deletion docs/docs/faq/question_02_nans.md
In cases where a very large fraction of simulations return `NaN` or `inf`,
discarding many simulations can be wasteful. There are two options to deal with
this: Either you use the `RestrictionEstimator` to learn regions in parameter
space that do not produce `NaN` or `inf`, see
[here](https://sbi-dev.github.io/sbi/latest/tutorials/06_restriction_estimator/).
Alternatively, you can manually substitute the 'invalid' values with a
reasonable replacement. For example, at the end of your simulation code, you
search for invalid entries and replace them with a floating point number, as in the
sketch below.
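
Here `run_model` is a placeholder for your actual model call, and the replacement
value `-99.0` is an assumption you would adapt to your data:

```python
import torch

def simulator(theta):
    x = run_model(theta)  # placeholder for your actual simulation code
    # Replace NaN/inf entries with a fixed, clearly out-of-range value.
    return torch.nan_to_num(x, nan=-99.0, posinf=-99.0, neginf=-99.0)
```
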
9 changes: 7 additions & 2 deletions docs/docs/faq/question_03_pickling_error.md

You can also write your own code to parallelize simulations with whatever
multiprocessing framework you prefer. You can then simulate your data outside of
`sbi` and pass the simulated data using `.append_simulations`:

```python
# Given pre-simulated theta and x
trainer = SNPE(prior)
trainer.append_simulations(theta, x).train()
```
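
For the parallelization itself, one possible approach is `joblib`; in this sketch,
`my_simulator` and `thetas` are placeholders for your simulator and a batch of
parameters:

```python
import torch
from joblib import Parallel, delayed

# Run simulations in parallel processes, then stack the results into one tensor.
xs = Parallel(n_jobs=-1)(delayed(my_simulator)(theta) for theta in thetas)
x = torch.stack(xs)
```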

## Some more background

5 changes: 1 addition & 4 deletions docs/docs/faq/question_06_resume_training.md
# Can I stop neural network training and resume it later?

Many clusters have a time limit, and `sbi` might exceed this limit. You can
circumvent this problem by stopping and resuming training:

```python
inference = SNPE(prior=prior)
inference = inference.append_simulations(theta, x)
# Train within the job's time budget, then stop.
inference.train(max_num_epochs=300)
# Later, resume from the stored training state.
inference.train(resume_training=True)
```

2 changes: 1 addition & 1 deletion docs/docs/index.md
Then, check out our material:
- :rocket: [__Tutorials and Examples__](tutorials/index.md)
<br/><br/>
*Various examples illustrating how to<br/> [get
started](tutorials/00_getting_started.md) or use the `sbi` package.*

- :building_construction: [__Reference API__](reference/index.md)
<br/><br/>
2 changes: 1 addition & 1 deletion docs/docs/tutorials/index.md
## Introduction

<div class="grid cards" markdown>
- [Getting started](00_getting_started.md)
- [Amortized inference](01_gaussian_amortized.md)
- [More flexibility for training and sampling](18_training_interface.md)
- [Implemented algorithms](16_implemented_methods.md)
4 changes: 2 additions & 2 deletions sbi/inference/trainers/base.py
def infer(
The scope of this function is limited to the most essential features of sbi. For
more flexibility (e.g. multi-round inference, different density estimators) please
use the flexible interface described here:
https://sbi-dev.github.io/sbi/latest/tutorials/02_multiround_inference/

Args:
simulator: A function that takes parameters $\theta$ and maps them to
warn(
"We discourage the use the simple interface in more complicated settings. "
"Have a look into the flexible interface, e.g. in our tutorial "
"(https://sbi-dev.github.io/sbi/tutorial/02_flexible_interface/).",
"(https://sbi-dev.github.io/sbi/latest/tutorials/00_getting_started).",
stacklevel=2,
)
# Set variables to empty dicts to be able to pass them