From f48cdb974115c62a988de5b6aea4e36d149cc60b Mon Sep 17 00:00:00 2001 From: Jan Boelts Date: Tue, 27 Aug 2024 10:41:41 +0200 Subject: [PATCH 1/3] docs: refactor readme. --- README.md | 170 ++++++++++++++++++++++++++++++++++++++---------------- 1 file changed, 121 insertions(+), 49 deletions(-) diff --git a/README.md b/README.md index ebbae2c23..d2f9c708c 100644 --- a/README.md +++ b/README.md @@ -5,48 +5,80 @@ [![GitHub license](https://img.shields.io/github/license/sbi-dev/sbi)](https://github.com/sbi-dev/sbi/blob/master/LICENSE.txt) [![DOI](https://joss.theoj.org/papers/10.21105/joss.02505/status.svg)](https://doi.org/10.21105/joss.02505) -## sbi: simulation-based inference +## `sbi`: Simulation-Based Inference -[Getting Started](https://sbi-dev.github.io/sbi/latest/tutorial/00_getting_started/) | [Documentation](https://sbi-dev.github.io/sbi/) +[Getting Started](https://sbi-dev.github.io/sbi/latest/tutorial/00_getting_started/) | +[Documentation](https://sbi-dev.github.io/sbi/) -`sbi` is a PyTorch package for simulation-based inference. Simulation-based inference is -the process of finding parameters of a simulator from observations. +`sbi` is a Python package for simulation-based inference, designed to meet the needs of +both researchers and practitioners. Whether you need fine-grained control or an +easy-to-use interface, `sbi` has you covered. -`sbi` takes a Bayesian approach and returns a full posterior distribution over the -parameters of the simulator, conditional on the observations. The package implements a -variety of inference algorithms, including _amortized_ and _sequential_ methods. -Amortized methods return a posterior that can be applied to many different observations -without retraining; sequential methods focus the inference on one particular observation -to be more simulation-efficient. See below for an overview of implemented methods. 
+With `sbi`, you can perform simulation-based inference (SBI) using a Bayesian approach: +Given a simulator that models a real-world process, SBI estimates the full posterior +distribution over the simulator’s parameters based on observed data. This distribution +indicates the most likely parameter values while additionally quantifying uncertainty +and revealing potential interactions between parameters. -`sbi` offers a simple interface for posterior inference in a few lines of code +### Key Features of `sbi` + +`sbi` offers a blend of flexibility and ease of use: + +- **Low-Level Interfaces**: For those who require maximum control over the inference + process, `sbi` provides low-level interfaces that allow you to fine-tune many aspects + of your work. +- **High-Level Interfaces**: If you prefer simplicity and efficiency, `sbi` also offers + high-level interfaces that enable quick and easy implementation of complex inference + tasks. + +In addition, `sbi` supports a wide range of state-of-the-art inference algorithms (see +below for a list of implemented methods): + +- **Amortized Methods**: These methods enable the reuse of posterior estimators across + multiple observations without the need to retrain. +- **Sequential Methods**: These methods focus on individual observations, optimizing the + number of simulations required. + +Beyond inference, `sbi` also provides: + +- **Validation Tools**: Built-in methods to validate and verify the accuracy of your + inferred posteriors. +- **Plotting and Analysis Tools**: Comprehensive functions for visualizing and analyzing + results, helping you interpret the posterior distributions with ease. 
+ +Getting started with `sbi` is straightforward, requiring only a few lines of code: ```python from sbi.inference import SNPE -# import your simulator, define your prior over the parameters -# sample parameters theta and observations x +# Given: parameters theta and corresponding simulations x inference = SNPE(prior=prior) -_ = inference.append_simulations(theta, x).train() +inference.append_simulations(theta, x).train() posterior = inference.build_posterior() ``` -## Installation +### Installation -`sbi` requires Python 3.8 or higher. A GPU is not required, but can lead to speed-up in some cases. We recommend to use a [`conda`](https://docs.conda.io/en/latest/miniconda.html) virtual -environment ([Miniconda installation instructions](https://docs.conda.io/en/latest/miniconda.html)). If `conda` is installed on the system, an environment for installing `sbi` can be created as follows: +`sbi` requires Python 3.9 or higher. While a GPU isn't necessary, it can improve +performance in some cases. We recommend using a virtual environment with +[`conda`](https://docs.conda.io/en/latest/miniconda.html) for an easy setup. -```commandline -# Create an environment for sbi (indicate Python 3.8 or higher); activate it -$ conda create -n sbi_env python=3.10 && conda activate sbi_env -``` +To install `sbi`, follow these steps: -Independent of whether you are using `conda` or not, `sbi` can be installed using `pip`: +1. **Create a Conda Environment** (if using Conda): -```commandline -pip install sbi -``` + ```bash + conda create -n sbi_env python=3.9 && conda activate sbi_env + ``` + +2. **Install `sbi`**: Independent of whether you are using `conda` or not, `sbi` can be + installed using `pip`: -To test the installation, drop into a python prompt and run + ```commandline + pip install sbi + ``` + +3. 
**Test the installation**: +Open a Python prompt and run ```python from sbi.examples.minimal import simple @@ -59,48 +91,83 @@ print(posterior) If you're new to `sbi`, we recommend starting with our [Getting Started](https://sbi-dev.github.io/sbi/latest/tutorial/00_getting_started/) tutorial. -You can easily access and run these tutorials by opening a -[Codespace](https://docs.github.com/en/codespaces/overview) on this repo. To do -so, click on the green "Code" button and select "Open with Codespaces". This will -provide you with a fully functional environment where you can run the tutorials as -Jupyter notebooks. +You can also access and run these tutorials directly in your browser by opening +[Codespace](https://docs.github.com/en/codespaces/overview). To do so, click the green +“Code” button on the GitHub repository and select “Open with Codespaces.” This provides +a fully functional environment where you can explore sbi through Jupyter notebooks. ## Inference Algorithms -The following inference algorithms are currently available. You can find instructions on how to run each of these methods [here](https://sbi-dev.github.io/sbi/latest/tutorial/16_implemented_methods/). +The following inference algorithms are currently available. You can find instructions on +how to run each of these methods +[here](https://sbi-dev.github.io/sbi/latest/tutorial/16_implemented_methods/). ### Neural Posterior Estimation: amortized (NPE) and sequential (SNPE) -* [`SNPE_A`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snpe.snpe_a.SNPE_A) (including amortized single-round `NPE`) from Papamakarios G and Murray I [_Fast ε-free Inference of Simulation Models with Bayesian Conditional Density Estimation_](https://proceedings.neurips.cc/paper/2016/hash/6aca97005c68f1206823815f66102863-Abstract.html) (NeurIPS 2016). 
+* [`SNPE_A`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snpe.snpe_a.SNPE_A) + (including amortized single-round `NPE`) from Papamakarios G and Murray I [_Fast + ε-free Inference of Simulation Models with Bayesian Conditional Density + Estimation_](https://proceedings.neurips.cc/paper/2016/hash/6aca97005c68f1206823815f66102863-Abstract.html) + (NeurIPS 2016). + +* [`SNPE_C`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snpe.snpe_c.SNPE_C) + or `APT` from Greenberg D, Nonnenmacher M, and Macke J [_Automatic Posterior + Transformation for likelihood-free inference_](https://arxiv.org/abs/1905.07488) (ICML + 2019). + +* `TSNPE` from Deistler M, Goncalves P, and Macke J [_Truncated proposals for scalable + and hassle-free simulation-based inference_](https://arxiv.org/abs/2210.04815) + (NeurIPS 2022). -* [`SNPE_C`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snpe.snpe_c.SNPE_C) or `APT` from Greenberg D, Nonnenmacher M, and Macke J [_Automatic - Posterior Transformation for likelihood-free - inference_](https://arxiv.org/abs/1905.07488) (ICML 2019). +* [`FMPE`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.fmpe.fmpe_base.FMPE) + from Wildberger, J., Dax, M., Buchholz, S., Green, S., Macke, J. H., & Schölkopf, B. + [_Flow matching for scalable simulation-based inference_] + (https://proceedings.neurips.cc/paper_files/paper/2023/hash/3663ae53ec078860bb0b9c6606e092a0-Abstract-Conference.html). + (NeurIPS 2023). -* `TSNPE` from Deistler M, Goncalves P, and Macke J [_Truncated proposals for scalable and hassle-free simulation-based inference_](https://arxiv.org/abs/2210.04815) (NeurIPS 2022). +* [`NPSE`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.npse.npse.NPSE) from + Sharrock, L., Simons, J., Liu, S., & Beaumont, M. [_Sequential neural score + estimation: Likelihood-free inference with conditional score based diffusion + models._](https://arxiv.org/abs/2210.04872v3). 
(ICML 2024)

### Neural Likelihood Estimation: amortized (NLE) and sequential (SNLE)

-* [`SNLE_A`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snle.snle_a.SNLE_A) or just `SNL` from Papamakarios G, Sterrat DC and Murray I [_Sequential
-  Neural Likelihood_](https://arxiv.org/abs/1805.07226) (AISTATS 2019).
+* [`SNLE_A`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snle.snle_a.SNLE_A)
+  or just `SNL` from Papamakarios G, Sterratt DC and Murray I [_Sequential Neural
+  Likelihood_](https://arxiv.org/abs/1805.07226) (AISTATS 2019).

### Neural Ratio Estimation: amortized (NRE) and sequential (SNRE)

-* [`(S)NRE_A`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.snre_a.SNRE_A) or `AALR` from Hermans J, Begy V, and Louppe G. [_Likelihood-free Inference with Amortized Approximate Likelihood Ratios_](https://arxiv.org/abs/1903.04057) (ICML 2020).
+* [`(S)NRE_A`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.snre_a.SNRE_A)
+  or `AALR` from Hermans J, Begy V, and Louppe G. [_Likelihood-free Inference with
+  Amortized Approximate Likelihood Ratios_](https://arxiv.org/abs/1903.04057) (ICML
+  2020).

-* [`(S)NRE_B`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.snre_b.SNRE_B) or `SRE` from Durkan C, Murray I, and Papamakarios G. [_On Contrastive Learning for Likelihood-free Inference_](https://arxiv.org/abs/2002.03712) (ICML 2020).
+* [`(S)NRE_B`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.snre_b.SNRE_B)
+  or `SRE` from Durkan C, Murray I, and Papamakarios G. [_On Contrastive Learning for
+  Likelihood-free Inference_](https://arxiv.org/abs/2002.03712) (ICML 2020).

-* [`BNRE`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.bnre.BNRE) from Delaunoy A, Hermans J, Rozet F, Wehenkel A, and Louppe G. [_Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation_](https://arxiv.org/abs/2208.13624) (NeurIPS 2022).
+* [`BNRE`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.bnre.BNRE) from + Delaunoy A, Hermans J, Rozet F, Wehenkel A, and Louppe G. [_Towards Reliable + Simulation-Based Inference with Balanced Neural Ratio + Estimation_](https://arxiv.org/abs/2208.13624) (NeurIPS 2022). -* [`(S)NRE_C`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.snre_c.SNRE_C) or `NRE-C` from Miller BK, Weniger C, Forré P. [_Contrastive Neural Ratio Estimation_](https://arxiv.org/abs/2210.06170) (NeurIPS 2022). +* [`(S)NRE_C`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.snre_c.SNRE_C) + or `NRE-C` from Miller BK, Weniger C, Forré P. [_Contrastive Neural Ratio + Estimation_](https://arxiv.org/abs/2210.06170) (NeurIPS 2022). ### Neural Variational Inference, amortized (NVI) and sequential (SNVI) -* [`SNVI`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.posteriors.vi_posterior) from Glöckler M, Deistler M, Macke J, [_Variational methods for simulation-based inference_](https://openreview.net/forum?id=kZ0UYdhqkNY) (ICLR 2022). +* [`SNVI`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.posteriors.vi_posterior) + from Glöckler M, Deistler M, Macke J, [_Variational methods for simulation-based + inference_](https://openreview.net/forum?id=kZ0UYdhqkNY) (ICLR 2022). ### Mixed Neural Likelihood Estimation (MNLE) -* [`MNLE`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snle.mnle.MNLE) from Boelts J, Lueckmann JM, Gao R, Macke J, [_Flexible and efficient simulation-based inference for models of decision-making_](https://elifesciences.org/articles/77220) (eLife 2022). +* [`MNLE`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snle.mnle.MNLE) from + Boelts J, Lueckmann JM, Gao R, Macke J, [_Flexible and efficient simulation-based + inference for models of decision-making_](https://elifesciences.org/articles/77220) + (eLife 2022). ## Feedback and Contributions @@ -121,7 +188,8 @@ Durkan's `lfi`. `sbi` runs as a community project. 
See also `sbi` has been supported by the German Federal Ministry of Education and Research (BMBF) through project ADIMEM (FKZ 01IS18052 A-D), project SiMaLeSAM (FKZ 01IS21055A) and the -Tübingen AI Center (FKZ 01IS18039A). +Tübingen AI Center (FKZ 01IS18039A). Since 2024, `sbi` is supported by the appliedAI +Institute for Europe. ## License @@ -129,7 +197,9 @@ Tübingen AI Center (FKZ 01IS18039A). ## Citation -If you use `sbi` consider citing the [sbi software paper](https://doi.org/10.21105/joss.02505), in addition to the original research articles describing the specific sbi-algorithm(s) you are using. +If you use `sbi` consider citing the [sbi software +paper](https://doi.org/10.21105/joss.02505), in addition to the original research +articles describing the specific sbi-algorithm(s) you are using. ```latex @article{tejero-cantero2020sbi, @@ -146,5 +216,7 @@ If you use `sbi` consider citing the [sbi software paper](https://doi.org/10.211 } ``` -The above citation refers to the original version of the `sbi` project and has a persistent DOI. -Additionally, new releases of `sbi` are citable via [Zenodo](https://zenodo.org/record/3993098), where we create a new DOI for every release. +The above citation refers to the original version of the `sbi` project and has a +persistent DOI. Additionally, new releases of `sbi` are citable via +[Zenodo](https://zenodo.org/record/3993098), where we create a new DOI for every +release. From 9b3194d602a3608c6e93a05fe2c508d58a765a3a Mon Sep 17 00:00:00 2001 From: Jan Boelts Date: Tue, 27 Aug 2024 12:01:25 +0200 Subject: [PATCH 2/3] fix: update readme and faq to new tutorials folder. 
--- README.md | 15 +++++++-------- docs/docs/faq/question_01_leakage.md | 11 +++++------ docs/docs/faq/question_02_nans.md | 2 +- docs/docs/faq/question_03_pickling_error.md | 9 +++++++-- docs/docs/faq/question_06_resume_training.md | 5 +---- docs/docs/index.md | 2 +- docs/docs/tutorials/index.md | 2 +- sbi/inference/trainers/base.py | 4 ++-- ...ed_flexible.ipynb => 00_getting_started.ipynb} | 4 ++-- tutorials/01_gaussian_amortized.ipynb | 2 +- tutorials/04_embedding_networks.ipynb | 2 +- tutorials/05_conditional_distributions.ipynb | 4 ++-- tutorials/18_training_interface.ipynb | 2 +- tutorials/Example_01_DecisionMakingModel.ipynb | 2 +- 14 files changed, 33 insertions(+), 33 deletions(-) rename tutorials/{00_getting_started_flexible.ipynb => 00_getting_started.ipynb} (99%) diff --git a/README.md b/README.md index d2f9c708c..0a6e9961d 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,7 @@ ## `sbi`: Simulation-Based Inference -[Getting Started](https://sbi-dev.github.io/sbi/latest/tutorial/00_getting_started/) | +[Getting Started](https://sbi-dev.github.io/sbi/latest/tutorials/00_getting_started/) | [Documentation](https://sbi-dev.github.io/sbi/) `sbi` is a Python package for simulation-based inference, designed to meet the needs of @@ -26,7 +26,7 @@ and revealing potential interactions between parameters. - **Low-Level Interfaces**: For those who require maximum control over the inference process, `sbi` provides low-level interfaces that allow you to fine-tune many aspects - of your work. + of your workflow. - **High-Level Interfaces**: If you prefer simplicity and efficiency, `sbi` also offers high-level interfaces that enable quick and easy implementation of complex inference tasks. @@ -89,18 +89,18 @@ print(posterior) ## Tutorials If you're new to `sbi`, we recommend starting with our [Getting -Started](https://sbi-dev.github.io/sbi/latest/tutorial/00_getting_started/) tutorial. 
+Started](https://sbi-dev.github.io/sbi/latest/tutorials/00_getting_started/) tutorial. You can also access and run these tutorials directly in your browser by opening [Codespace](https://docs.github.com/en/codespaces/overview). To do so, click the green “Code” button on the GitHub repository and select “Open with Codespaces.” This provides -a fully functional environment where you can explore sbi through Jupyter notebooks. +a fully functional environment where you can explore `sbi` through Jupyter notebooks. ## Inference Algorithms The following inference algorithms are currently available. You can find instructions on how to run each of these methods -[here](https://sbi-dev.github.io/sbi/latest/tutorial/16_implemented_methods/). +[here](https://sbi-dev.github.io/sbi/latest/tutorials/16_implemented_methods/). ### Neural Posterior Estimation: amortized (NPE) and sequential (SNPE) @@ -126,9 +126,8 @@ how to run each of these methods (NeurIPS 2023). * [`NPSE`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.npse.npse.NPSE) from - Sharrock, L., Simons, J., Liu, S., & Beaumont, M. [_Sequential neural score - estimation: Likelihood-free inference with conditional score based diffusion - models._](https://arxiv.org/abs/2210.04872v3). (ICML 2024) + Geffner, T., Papamakarios, G., & Mnih, A. [_Compositional score modeling + for simulation-based inference_]. (ICML 2023) ### Neural Likelihood Estimation: amortized (NLE) and sequential (SNLE) diff --git a/docs/docs/faq/question_01_leakage.md b/docs/docs/faq/question_01_leakage.md index cce931b39..f08bb852c 100644 --- a/docs/docs/faq/question_01_leakage.md +++ b/docs/docs/faq/question_01_leakage.md @@ -18,12 +18,11 @@ This approach will make sampling slower, but samples will not "leak". - resort to single-round SNPE and (if necessary) increase your simulation budget. 
-- if your prior is either Gaussian (torch.distributions.MultivariateNormal) or -Uniform (sbi.utils.BoxUniform), you can avoid leakage by using a mixture density -network as density estimator. I.e., using the [flexible -interface](https://sbi-dev.github.io/sbi/tutorial/03_flexible_interface/), set -`density_estimator='mdn'`. When running inference, there should be a print -statement "Using SNPE-C with non-atomic loss". +- if your prior is either Gaussian (torch.distributions.MultivariateNormal) or Uniform +(sbi.utils.BoxUniform), you can avoid leakage by using a mixture density network as +density estimator. I.e., set `density_estimator='mdn'` when creating the `SNPE` +inference object. When running inference, there should be a print statement "Using +SNPE-C with non-atomic loss". - use a different algorithm, e.g., SNRE and SNLE. Note, however, that these algorithms can have different issues and potential pitfalls. diff --git a/docs/docs/faq/question_02_nans.md b/docs/docs/faq/question_02_nans.md index f65a25a5e..84ba902c6 100644 --- a/docs/docs/faq/question_02_nans.md +++ b/docs/docs/faq/question_02_nans.md @@ -8,7 +8,7 @@ In cases where a very large fraction of simulations return `NaN` or `inf`, discarding many simulations can be wasteful. There are two options to deal with this: Either you use the `RestrictionEstimator` to learn regions in parameter space that do not produce `NaN` or `inf`, see -[here](https://sbi-dev.github.io/sbi/tutorial/08_restriction_estimator/). +[here](https://sbi-dev.github.io/sbi/latest/tutorials/06_restriction_estimator/). Alternatively, you can manually substitute the 'invalid' values with a reasonable replacement. For example, at the end of your simulation code, you search for invalid entries and replace them with a floating point number. 
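The manual substitution strategy described in the FAQ above can be sketched as follows. This is an illustrative helper, not part of `sbi`; the replacement value `-1.0` is an arbitrary placeholder you would choose for your own simulator. In `sbi` itself, simulator outputs are typically `torch.Tensor`s, where `torch.isfinite` plays the role of `math.isfinite`:

```python
import math

def replace_invalid(xs, replacement=-1.0):
    """Replace NaN/inf simulator outputs with a fixed float.

    Apply this at the end of your simulation code so that no
    'invalid' entries reach training.
    """
    return [x if math.isfinite(x) else replacement for x in xs]

print(replace_invalid([1.0, float("nan"), float("inf"), 2.0]))
# [1.0, -1.0, -1.0, 2.0]
```

Note that a constant replacement value changes the effective simulator, so the learned posterior reflects the substituted outputs; the `RestrictionEstimator` mentioned above avoids this by excluding the offending parameter regions instead.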
diff --git a/docs/docs/faq/question_03_pickling_error.md b/docs/docs/faq/question_03_pickling_error.md index 2147965cd..49419a4c9 100644 --- a/docs/docs/faq/question_03_pickling_error.md +++ b/docs/docs/faq/question_03_pickling_error.md @@ -40,8 +40,13 @@ def my_simulator(parameters): You can also write your own code to parallelize simulations with whatever multiprocessing framework you prefer. You can then simulate your data outside of -`sbi` and pass the simulated data as shown in the [flexible -interface](https://sbi-dev.github.io/sbi/tutorial/02_flexible_interface/): +`sbi` and pass the simulated data using `.append_simulations`: + +```python +# Given pre-simulated theta and x +trainer = SNPE(prior) +trainer.append_simulations(theta, x).train() +``` ## Some more background diff --git a/docs/docs/faq/question_06_resume_training.md b/docs/docs/faq/question_06_resume_training.md index 5522b6930..0b62cb3b9 100644 --- a/docs/docs/faq/question_06_resume_training.md +++ b/docs/docs/faq/question_06_resume_training.md @@ -2,10 +2,7 @@ # Can I stop neural network training and resume it later? Many clusters have a time limit, and `sbi` might exceed this limit. You can -circumvent this problem by using the [flexible -interface](https://sbi-dev.github.io/sbi/tutorial/02_flexible_interface/). After -simulations are finished, `sbi` trains a neural network. If this process takes -too long, you can stop training and resume it later. The syntax is: +circumvent this problem by stopping and resuming training: ```python inference = SNPE(prior=prior) diff --git a/docs/docs/index.md b/docs/docs/index.md index 9c7c1d5d0..9dac6f849 100644 --- a/docs/docs/index.md +++ b/docs/docs/index.md @@ -45,7 +45,7 @@ Then, check out our material: - :rocket: [__Tutorials and Examples__](tutorials/index.md)

*Various examples illustrating how to
[get - started](tutorials/00_getting_started_flexible.md) or use the `sbi` package.* + started](tutorials/00_getting_started.md) or use the `sbi` package.* - :building_construction: [__Reference API__](reference/index.md)

diff --git a/docs/docs/tutorials/index.md b/docs/docs/tutorials/index.md index 8efe49c90..ef0609c6c 100644 --- a/docs/docs/tutorials/index.md +++ b/docs/docs/tutorials/index.md @@ -15,7 +15,7 @@ inference. ## Introduction
-- [Getting started](00_getting_started_flexible.md) +- [Getting started](00_getting_started.md) - [Amortized inference](01_gaussian_amortized.md) - [More flexibility for training and sampling](18_training_interface.md) - [Implemented algorithms](16_implemented_methods.md) diff --git a/sbi/inference/trainers/base.py b/sbi/inference/trainers/base.py index b4ca20f99..70351cf1c 100644 --- a/sbi/inference/trainers/base.py +++ b/sbi/inference/trainers/base.py @@ -57,7 +57,7 @@ def infer( The scope of this function is limited to the most essential features of sbi. For more flexibility (e.g. multi-round inference, different density estimators) please use the flexible interface described here: - https://sbi-dev.github.io/sbi/tutorial/02_flexible_interface/ + https://sbi-dev.github.io/sbi/latest/tutorials/02_multiround_inference/ Args: simulator: A function that takes parameters $\theta$ and maps them to @@ -98,7 +98,7 @@ def infer( warn( "We discourage the use the simple interface in more complicated settings. " "Have a look into the flexible interface, e.g. in our tutorial " - "(https://sbi-dev.github.io/sbi/tutorial/02_flexible_interface/).", + "(https://sbi-dev.github.io/sbi/latest/tutorials/00_getting_started).", stacklevel=2, ) # Set variables to empty dicts to be able to pass them diff --git a/tutorials/00_getting_started_flexible.ipynb b/tutorials/00_getting_started.ipynb similarity index 99% rename from tutorials/00_getting_started_flexible.ipynb rename to tutorials/00_getting_started.ipynb index 8dae1001d..40bdbc29d 100644 --- a/tutorials/00_getting_started_flexible.ipynb +++ b/tutorials/00_getting_started.ipynb @@ -11,7 +11,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Note, you can find the original version of this notebook at [/tutorials/00_getting_started_flexible.ipynb](https://github.com/sbi-dev/sbi/blob/main/tutorials/00_getting_started_flexible.ipynb) in the `sbi` repository." 
+ "Note, you can find the original version of this notebook at [/tutorials/00_getting_started.ipynb](https://github.com/sbi-dev/sbi/blob/main/tutorials/00_getting_started.ipynb) in the `sbi` repository." ] }, { @@ -137,7 +137,7 @@ "source": [ "> Note: In the `sbi` toolbox, NPE is run by using the `SNPE` (Sequential NPE) class for only one iteration of simulation and training. \n", "\n", - "> Note: This is where you could specify an alternative inference object such as (S)NRE for ratio estimation or (S)NLE for likelihood estimation. Here, you can see [all implemented methods](https://sbi-dev.github.io/sbi/tutorial/16_implemented_methods/)." + "> Note: This is where you could specify an alternative inference object such as (S)NRE for ratio estimation or (S)NLE for likelihood estimation. Here, you can see [all implemented methods](https://sbi-dev.github.io/sbi/latest/tutorials/16_implemented_methods/)." ] }, { diff --git a/tutorials/01_gaussian_amortized.ipynb b/tutorials/01_gaussian_amortized.ipynb index 504891cd9..0d2392070 100644 --- a/tutorials/01_gaussian_amortized.ipynb +++ b/tutorials/01_gaussian_amortized.ipynb @@ -20,7 +20,7 @@ "source": [ "In this tutorial, we introduce **amortization** that is the capability to evaluate the posterior for different observations without having to re-run inference.\n", "\n", - "We will demonstrate how `sbi` can infer an amortized posterior for the illustrative linear Gaussian example introduced in [Getting Started](https://sbi-dev.github.io/sbi/tutorial/00_getting_started_flexible/), that takes in 3 parameters ($\\theta$). " + "We will demonstrate how `sbi` can infer an amortized posterior for the illustrative linear Gaussian example introduced in [Getting Started](https://sbi-dev.github.io/sbi/latest/tutorials/00_getting_started/), that takes in 3 parameters ($\\theta$). 
" ] }, { diff --git a/tutorials/04_embedding_networks.ipynb b/tutorials/04_embedding_networks.ipynb index f66ba5238..9e4a116e2 100644 --- a/tutorials/04_embedding_networks.ipynb +++ b/tutorials/04_embedding_networks.ipynb @@ -250,7 +250,7 @@ "\n", "- Fully-connected multi-layer perceptron\n", "- Convolutional neural network (1D and 2D convolutions)\n", - "- Permutation-invariant neural network (for trial-based data, see [here](https://sbi-dev.github.io/sbi/tutorial/14_iid_data_and_permutation_invariant_embeddings/))\n", + "- Permutation-invariant neural network (for trial-based data, see [here](https://sbi-dev.github.io/sbi/latest/tutorials/12_iid_data_and_permutation_invariant_embeddings/))\n", "\n", "In the example considered here, the most appropriate `embedding_net` would be a CNN for two-dimensional images. We can setup it as per:\n" ] diff --git a/tutorials/05_conditional_distributions.ipynb b/tutorials/05_conditional_distributions.ipynb index 6f2a318be..d1854015d 100644 --- a/tutorials/05_conditional_distributions.ipynb +++ b/tutorials/05_conditional_distributions.ipynb @@ -26204,7 +26204,7 @@ "source": [ "## Sampling conditional distributions\n", "\n", - "So far, we have demonstrated how one can plot 2D conditional distributions with `conditional_pairplot()` and how one can compute the pairwise conditional correlation coefficient with `conditional_corrcoeff()`. In some cases, it can be useful to keep a subset of parameters fixed and to vary **more than two** parameters. This can be done by sampling the conditonal posterior $p(\\theta_i | \\theta_{j \\neq i}, x_o)$. As of `sbi` `v0.18.0`, this functionality requires using the [sampler interface](https://sbi-dev.github.io/sbi/tutorial/11_sampler_interface/). In this tutorial, we demonstrate this functionality on a linear gaussian simulator with four parameters. We would like to fix the forth parameter to $\\theta_4=0.2$ and sample the first three parameters given that value, i.e. 
we want to sample $p(\\theta_1, \\theta_2, \\theta_3 | \\theta_4 = 0.2, x_o)$. For an application in neuroscience, see [Deistler, Gonçalves, Macke, 2021](https://www.biorxiv.org/content/10.1101/2021.07.30.454484v4.abstract).\n" + "So far, we have demonstrated how one can plot 2D conditional distributions with `conditional_pairplot()` and how one can compute the pairwise conditional correlation coefficient with `conditional_corrcoeff()`. In some cases, it can be useful to keep a subset of parameters fixed and to vary **more than two** parameters. This can be done by sampling the conditonal posterior $p(\\theta_i | \\theta_{j \\neq i}, x_o)$. As of `sbi` `v0.18.0`, this functionality requires using the [sampler interface](https://sbi-dev.github.io/sbi/latest/tutorials/09_sampler_interface/). In this tutorial, we demonstrate this functionality on a linear gaussian simulator with four parameters. We would like to fix the forth parameter to $\\theta_4=0.2$ and sample the first three parameters given that value, i.e. we want to sample $p(\\theta_1, \\theta_2, \\theta_3 | \\theta_4 = 0.2, x_o)$. For an application in neuroscience, see [Deistler, Gonçalves, Macke, 2021](https://www.biorxiv.org/content/10.1101/2021.07.30.454484v4.abstract).\n" ] }, { @@ -26280,7 +26280,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now we want to build the conditional potential (please read throught the tutorial [09_sampler_interface](https://www.mackelab.org/sbi/tutorial/09_sampler_interface/) for an explanation of potential functions). For this, we have to pass a `condition`. In our case, we want to condition the forth parameter on $\\theta_4=0.2$. Regardless of how many parameters one wants to condition on, in `sbi`, one has to pass a `condition` value for all parameters. The first three values will simply be ignored. 
We can tell the algorithm which parameters should be kept fixed and which ones should be sampled with the argument `dims_to_sample`.\n" + "Now we want to build the conditional potential (please read through the tutorial [09_sampler_interface](https://sbi-dev.github.io/sbi/dev/tutorials/09_sampler_interface/) for an explanation of potential functions). For this, we have to pass a `condition`. In our case, we want to condition the forth parameter on $\\theta_4=0.2$. Regardless of how many parameters one wants to condition on, in `sbi`, one has to pass a `condition` value for all parameters. The first three values will simply be ignored. We can tell the algorithm which parameters should be kept fixed and which ones should be sampled with the argument `dims_to_sample`.\n" ] }, { diff --git a/tutorials/18_training_interface.ipynb b/tutorials/18_training_interface.ipynb index e2408b89b..928027b34 100644 --- a/tutorials/18_training_interface.ipynb +++ b/tutorials/18_training_interface.ipynb @@ -240,7 +240,7 @@ "id": "3f1cab68-4196-47ee-a4c2-3ed2cd8a454b", "metadata": {}, "source": [ - "You can also wrap the `DensityEstimator` as a `DirectPosterior`. The `DirectPosterior` is also returned by `inference.build_posterior` and you have already learned how to use it in the [introduction tutorial](https://sbi-dev.github.io/sbi/dev/tutorials/00_getting_started_flexible/) and the [amortization tutotrial](https://sbi-dev.github.io/sbi/dev/tutorials/01_gaussian_amortized/). It adds the following functionality over the raw `DensityEstimator`:\n", + "You can also wrap the `DensityEstimator` as a `DirectPosterior`. The `DirectPosterior` is also returned by `inference.build_posterior` and you have already learned how to use it in the [introduction tutorial](https://sbi-dev.github.io/sbi/dev/tutorials/00_getting_started/) and the [amortization tutotrial](https://sbi-dev.github.io/sbi/dev/tutorials/01_gaussian_amortized/). 
It adds the following functionality over the raw `DensityEstimator`:\n", "\n", "- automatically reject samples outside of the prior bounds \n", "- compute the Maximum-a-posteriori (MAP) estimate\n", diff --git a/tutorials/Example_01_DecisionMakingModel.ipynb b/tutorials/Example_01_DecisionMakingModel.ipynb index f05a7df20..0eedfeaca 100644 --- a/tutorials/Example_01_DecisionMakingModel.ipynb +++ b/tutorials/Example_01_DecisionMakingModel.ipynb @@ -7,7 +7,7 @@ "# SBI for decision-making models\n", "\n", "In [a previous\n", - "tutorial](https://sbi-dev.github.io/sbi/tutorial/14_iid_data_and_permutation_invariant_embedding.md),\n", + "tutorial](https://sbi-dev.github.io/sbi/latest/tutorials/12_iid_data_and_permutation_invariant_embeddings.md),\n", "we showed how to use SBI with trial-based iid data. Such scenarios can arise,\n", "for example, in models of perceptual decision making. In addition to trial-based\n", "iid data points, these models often come with mixed data types and varying\n", From bdd535ce42845f1e8caba31b18361f6ea0768eab Mon Sep 17 00:00:00 2001 From: Jan Boelts Date: Wed, 28 Aug 2024 13:45:32 +0200 Subject: [PATCH 3/3] fix algorithm names and ref links --- README.md | 38 ++++++++++++++++++++------------------ 1 file changed, 20 insertions(+), 18 deletions(-) diff --git a/README.md b/README.md index 0a6e9961d..de53aff37 100644 --- a/README.md +++ b/README.md @@ -104,13 +104,13 @@ how to run each of these methods ### Neural Posterior Estimation: amortized (NPE) and sequential (SNPE) -* [`SNPE_A`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snpe.snpe_a.SNPE_A) +* [`(S)NPE_A`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.npe.npe_a.NPE_A) (including amortized single-round `NPE`) from Papamakarios G and Murray I [_Fast ε-free Inference of Simulation Models with Bayesian Conditional Density Estimation_](https://proceedings.neurips.cc/paper/2016/hash/6aca97005c68f1206823815f66102863-Abstract.html) (NeurIPS 2016). 
-* [`SNPE_C`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snpe.snpe_c.SNPE_C) +* [`(S)NPE_C`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.npe.npe_c.NPE_C) or `APT` from Greenberg D, Nonnenmacher M, and Macke J [_Automatic Posterior Transformation for likelihood-free inference_](https://arxiv.org/abs/1905.07488) (ICML 2019). @@ -119,51 +119,53 @@ and hassle-free simulation-based inference_](https://arxiv.org/abs/2210.04815) (NeurIPS 2022). -* [`FMPE`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.fmpe.fmpe_base.FMPE) +* [`FMPE`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.fmpe.fmpe.FMPE) from Wildberger, J., Dax, M., Buchholz, S., Green, S., Macke, J. H., & Schölkopf, B. [_Flow matching for scalable simulation-based inference_] - (https://proceedings.neurips.cc/paper_files/paper/2023/hash/3663ae53ec078860bb0b9c6606e092a0-Abstract-Conference.html). + [_Flow matching for scalable simulation-based + inference_](https://proceedings.neurips.cc/paper_files/paper/2023/hash/3663ae53ec078860bb0b9c6606e092a0-Abstract-Conference.html) (NeurIPS 2023). -* [`NPSE`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.npse.npse.NPSE) from Geffner, T., Papamakarios, G., & Mnih, A. [_Compositional score modeling for simulation-based inference_]. (ICML 2023) +* [`NPSE`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.npse.npse.NPSE) from Geffner, T., Papamakarios, G., & Mnih, A. [_Compositional score modeling for simulation-based inference_](https://proceedings.mlr.press/v202/geffner23a.html)
+ (ICML 2023). ### Neural Likelihood Estimation: amortized (NLE) and sequential (SNLE) -* [`SNLE_A`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snle.snle_a.SNLE_A) +* [`(S)NLE`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nle.nle_a.NLE_A) or just `SNL` from Papamakarios G, Sterratt DC and Murray I [_Sequential Neural Likelihood_](https://arxiv.org/abs/1805.07226) (AISTATS 2019). ### Neural Ratio Estimation: amortized (NRE) and sequential (SNRE) -* [`(S)NRE_A`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.snre_a.SNRE_A) +* [`(S)NRE_A`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nre.nre_a.NRE_A) or `AALR` from Hermans J, Begy V, and Louppe G. [_Likelihood-free Inference with Amortized Approximate Likelihood Ratios_](https://arxiv.org/abs/1903.04057) (ICML 2020). -* [`(S)NRE_B`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.snre_b.SNRE_B) +* [`(S)NRE_B`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nre.nre_b.NRE_B) or `SRE` from Durkan C, Murray I, and Papamakarios G. [_On Contrastive Learning for Likelihood-free Inference_](https://arxiv.org/abs/2002.03712) (ICML 2020). -* [`BNRE`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.bnre.BNRE) from +* [`(S)NRE_C`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nre.nre_c.NRE_C) or `NRE-C` from Miller BK, Weniger C, Forré P. [_Contrastive Neural Ratio Estimation_](https://arxiv.org/abs/2210.06170) (NeurIPS 2022). + +* [`BNRE`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nre.bnre.BNRE) from Delaunoy A, Hermans J, Rozet F, Wehenkel A, and Louppe G. [_Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation_](https://arxiv.org/abs/2208.13624) (NeurIPS 2022). -* [`(S)NRE_C`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snre.snre_c.SNRE_C) or `NRE-C` from Miller BK, Weniger C, Forré P.
[_Contrastive Neural Ratio - Estimation_](https://arxiv.org/abs/2210.06170) (NeurIPS 2022). ### Neural Variational Inference, amortized (NVI) and sequential (SNVI) -* [`SNVI`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.posteriors.vi_posterior) +* [`SNVI`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.posteriors.vi_posterior) from Glöckler M, Deistler M, Macke J, [_Variational methods for simulation-based inference_](https://openreview.net/forum?id=kZ0UYdhqkNY) (ICLR 2022). ### Mixed Neural Likelihood Estimation (MNLE) -* [`MNLE`](https://sbi-dev.github.io/sbi/reference/#sbi.inference.snle.mnle.MNLE) from +* [`MNLE`](https://sbi-dev.github.io/sbi/latest/reference/#sbi.inference.trainers.nle.mnle.MNLE) from Boelts J, Lueckmann JM, Gao R, Macke J, [_Flexible and efficient simulation-based inference for models of decision-making_](https://elifesciences.org/articles/77220) (eLife 2022). @@ -173,7 +175,7 @@ how to run each of these methods We welcome any feedback on how `sbi` is working for your inference problems (see [Discussions](https://github.com/sbi-dev/sbi/discussions)) and are happy to receive bug reports, pull requests, and other feedback (see -[contribute](http://sbi-dev.github.io/sbi/contribute/)). We wish to maintain a positive +[contribute](https://sbi-dev.github.io/sbi/latest/contribute/)). We wish to maintain a positive community; please read our [Code of Conduct](CODE_OF_CONDUCT.md). ## Acknowledgments
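As a self-contained sketch of the conditioning discussed in the tutorial hunks above: for a Gaussian posterior over four parameters, the conditional $p(\theta_1, \theta_2, \theta_3 | \theta_4 = 0.2, x_o)$ has closed-form moments. The NumPy example below (toy covariance numbers, deliberately independent of the `sbi` API) illustrates what passing a `condition` and `dims_to_sample` accomplishes:

```python
import numpy as np

# Toy 4-parameter Gaussian standing in for the posterior p(theta | x_o);
# in the tutorial this role is played by the neural posterior estimate.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
cov = A @ A.T + 4.0 * np.eye(4)  # symmetric positive definite covariance
mean = np.array([0.5, -0.2, 1.0, 0.1])

# Condition on theta_4 = 0.2. For a Gaussian, the conditional over the
# remaining dimensions is again Gaussian with closed-form mean/covariance.
i = np.arange(3)          # dims to sample (theta_1..3)
j = np.array([3])         # dim held fixed (theta_4)
cond_value = np.array([0.2])

S_ii = cov[np.ix_(i, i)]
S_ij = cov[np.ix_(i, j)]
S_jj = cov[np.ix_(j, j)]
gain = S_ij @ np.linalg.inv(S_jj)
cond_mean = mean[i] + gain @ (cond_value - mean[j])
cond_cov = S_ii - gain @ S_ij.T   # Schur complement

samples = rng.multivariate_normal(cond_mean, cond_cov, size=1000)
print(samples.shape)  # (1000, 3)
```

For a neural posterior estimate no such closed form exists, which is why the tutorial instead builds a conditional potential and samples it via the sampler interface.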