
Commit 056b18c

fix: update readme and faq to new tutorials folder.

1 parent d1bf8cf

13 files changed (+29, -27 lines)

README.md

Lines changed: 3 additions & 3 deletions
@@ -7,7 +7,7 @@

 ## `sbi`: Simulation-Based Inference

-[Getting Started](https://sbi-dev.github.io/sbi/latest/tutorial/00_getting_started/) |
+[Getting Started](https://sbi-dev.github.io/sbi/latest/tutorials/00_getting_started/) |
 [Documentation](https://sbi-dev.github.io/sbi/)

 `sbi` is a Python package for simulation-based inference, designed to meet the needs of
@@ -89,7 +89,7 @@ print(posterior)
 ## Tutorials

 If you're new to `sbi`, we recommend starting with our [Getting
-Started](https://sbi-dev.github.io/sbi/latest/tutorial/00_getting_started/) tutorial.
+Started](https://sbi-dev.github.io/sbi/latest/tutorials/00_getting_started/) tutorial.

 You can also access and run these tutorials directly in your browser by opening
 [Codespace](https://docs.github.com/en/codespaces/overview). To do so, click the green
@@ -100,7 +100,7 @@ a fully functional environment where you can explore sbi through Jupyter noteboo

 The following inference algorithms are currently available. You can find instructions on
 how to run each of these methods
-[here](https://sbi-dev.github.io/sbi/latest/tutorial/16_implemented_methods/).
+[here](https://sbi-dev.github.io/sbi/latest/tutorials/16_implemented_methods/).

 ### Neural Posterior Estimation: amortized (NPE) and sequential (SNPE)

docs/docs/faq/question_01_leakage.md

Lines changed: 5 additions & 6 deletions
@@ -18,12 +18,11 @@ This approach will make sampling slower, but samples will not "leak".

 - resort to single-round SNPE and (if necessary) increase your simulation budget.

-- if your prior is either Gaussian (torch.distributions.MultivariateNormal) or
-  Uniform (sbi.utils.BoxUniform), you can avoid leakage by using a mixture density
-  network as density estimator. I.e., using the [flexible
-  interface](https://sbi-dev.github.io/sbi/tutorial/03_flexible_interface/), set
-  `density_estimator='mdn'`. When running inference, there should be a print
-  statement "Using SNPE-C with non-atomic loss".
+- if your prior is either Gaussian (torch.distributions.MultivariateNormal) or Uniform
+  (sbi.utils.BoxUniform), you can avoid leakage by using a mixture density network as
+  density estimator. I.e., set `density_estimator='mdn'` when creating the `SNPE`
+  inference object. When running inference, there should be a print statement "Using
+  SNPE-C with non-atomic loss".

 - use a different algorithm, e.g., SNRE and SNLE. Note, however, that these algorithms
   can have different issues and potential pitfalls.

docs/docs/faq/question_02_nans.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ In cases where a very large fraction of simulations return `NaN` or `inf`,
 discarding many simulations can be wasteful. There are two options to deal with
 this: Either you use the `RestrictionEstimator` to learn regions in parameter
 space that do not produce `NaN` or `inf`, see
-[here](https://sbi-dev.github.io/sbi/tutorial/08_restriction_estimator/).
+[here](https://sbi-dev.github.io/sbi/latest/tutorials/06_restriction_estimator/).
 Alternatively, you can manually substitute the 'invalid' values with a
 reasonable replacement. For example, at the end of your simulation code, you
 search for invalid entries and replace them with a floating point number.

docs/docs/faq/question_03_pickling_error.md

Lines changed: 7 additions & 2 deletions
@@ -40,8 +40,13 @@ def my_simulator(parameters):

 You can also write your own code to parallelize simulations with whatever
 multiprocessing framework you prefer. You can then simulate your data outside of
-`sbi` and pass the simulated data as shown in the [flexible
-interface](https://sbi-dev.github.io/sbi/tutorial/02_flexible_interface/):
+`sbi` and pass the simulated data using `.append_simulations`:
+
+```python
+# Given pre-simulated theta and x
+trainer = SNPE(prior)
+trainer.append_simulations(theta, x).train()
+```

 ## Some more background

docs/docs/faq/question_06_resume_training.md

Lines changed: 1 addition & 4 deletions
@@ -2,10 +2,7 @@
 # Can I stop neural network training and resume it later?

 Many clusters have a time limit, and `sbi` might exceed this limit. You can
-circumvent this problem by using the [flexible
-interface](https://sbi-dev.github.io/sbi/tutorial/02_flexible_interface/). After
-simulations are finished, `sbi` trains a neural network. If this process takes
-too long, you can stop training and resume it later. The syntax is:
+circumvent this problem by stopping and resuming training:

 ```python
 inference = SNPE(prior=prior)

docs/docs/index.md

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ Then, check out our material:
 - :rocket: [__Tutorials and Examples__](tutorials/index.md)
   <br/><br/>
   *Various examples illustrating how to<br/> [get
-  started](tutorials/00_getting_started_flexible.md) or use the `sbi` package.*
+  started](tutorials/00_getting_started.md) or use the `sbi` package.*

 - :building_construction: [__Reference API__](reference/index.md)
   <br/><br/>

sbi/inference/base.py

Lines changed: 2 additions & 2 deletions
@@ -57,7 +57,7 @@ def infer(
     The scope of this function is limited to the most essential features of sbi. For
     more flexibility (e.g. multi-round inference, different density estimators) please
     use the flexible interface described here:
-    https://sbi-dev.github.io/sbi/tutorial/02_flexible_interface/
+    https://sbi-dev.github.io/sbi/latest/tutorials/02_multiround_inference/

     Args:
         simulator: A function that takes parameters $\theta$ and maps them to
@@ -98,7 +98,7 @@ def infer(
        warn(
            "We discourage the use the simple interface in more complicated settings. "
            "Have a look into the flexible interface, e.g. in our tutorial "
-           "(https://sbi-dev.github.io/sbi/tutorial/02_flexible_interface/).",
+           "(https://sbi-dev.github.io/sbi/latest/tutorials/00_getting_started).",
            stacklevel=2,
        )
    # Set variables to empty dicts to be able to pass them

tutorials/00_getting_started_flexible.ipynb renamed to tutorials/00_getting_started.ipynb

Lines changed: 3 additions & 2 deletions
@@ -11,7 +11,8 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Note, you can find the original version of this notebook at [/tutorials/00_getting_started_flexible.ipynb](https://github.com/sbi-dev/sbi/blob/main/tutorials/00_getting_started_flexible.ipynb) in the `sbi` repository."
+    "Note, you can find the original version of this notebook at [/tutorials/00_getting_started\n",
+    ".ipynb](https://github.com/sbi-dev/sbi/blob/main/tutorials/00_getting_started.ipynb) in the `sbi` repository."
    ]
   },
   {
@@ -137,7 +138,7 @@
    "source": [
     "> Note: In the `sbi` toolbox, NPE is run by using the `SNPE` (Sequential NPE) class for only one iteration of simulation and training. \n",
     "\n",
-    "> Note: This is where you could specify an alternative inference object such as (S)NRE for ratio estimation or (S)NLE for likelihood estimation. Here, you can see [all implemented methods](https://sbi-dev.github.io/sbi/tutorial/16_implemented_methods/)."
+    "> Note: This is where you could specify an alternative inference object such as (S)NRE for ratio estimation or (S)NLE for likelihood estimation. Here, you can see [all implemented methods](https://sbi-dev.github.io/sbi/latest/tutorials/16_implemented_methods/)."
    ]
   },
   {

tutorials/01_gaussian_amortized.ipynb

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@
    "source": [
     "In this tutorial, we introduce **amortization** that is the capability to evaluate the posterior for different observations without having to re-run inference.\n",
     "\n",
-    "We will demonstrate how `sbi` can infer an amortized posterior for the illustrative linear Gaussian example introduced in [Getting Started](https://sbi-dev.github.io/sbi/tutorial/00_getting_started_flexible/), that takes in 3 parameters ($\theta$). "
+    "We will demonstrate how `sbi` can infer an amortized posterior for the illustrative linear Gaussian example introduced in [Getting Started](https://sbi-dev.github.io/sbi/latest/tutorials/00_getting_started/), that takes in 3 parameters ($\theta$). "
    ]
   },
   {

tutorials/04_embedding_networks.ipynb

Lines changed: 1 addition & 1 deletion
@@ -250,7 +250,7 @@
     "\n",
     "- Fully-connected multi-layer perceptron\n",
     "- Convolutional neural network (1D and 2D convolutions)\n",
-    "- Permutation-invariant neural network (for trial-based data, see [here](https://sbi-dev.github.io/sbi/tutorial/14_iid_data_and_permutation_invariant_embeddings/))\n",
+    "- Permutation-invariant neural network (for trial-based data, see [here](https://sbi-dev.github.io/sbi/latest/tutorials/12_iid_data_and_permutation_invariant_embeddings/))\n",
     "\n",
     "In the example considered here, the most appropriate `embedding_net` would be a CNN for two-dimensional images. We can setup it as per:\n"
    ]

0 commit comments
