Commit 796801c

Rename SN(P)E -> N(P)E in all documentation

1 parent ef54f65 commit 796801c

7 files changed: +49 −31 lines changed
Lines changed: 25 additions & 7 deletions

````diff
@@ -1,22 +1,40 @@
 # What should I do when my 'posterior samples are outside the prior support' in SNPE?
 
-When working with **multi-round** SNPE, you might have experienced the following
-warning:
+When working with **multi-round** NPE (i.e., SNPE), you might have experienced the
+following warning:
 
 ```python
 Only x% posterior samples are within the prior support. It may take a long time to
 collect the remaining 10000 samples. Consider interrupting (Ctrl-C) and switching to
 'sample_with_mcmc=True'.
 ```
 
-The reason for this issue is described in more detail
-[here](https://arxiv.org/abs/2002.03712) and
+The reason for this issue is described in more detail
+[here](https://arxiv.org/abs/2210.04815),
+[here](https://arxiv.org/abs/2002.03712), and
 [here](https://arxiv.org/abs/1905.07488). The following fixes are possible:
 
+- use truncated proposals for SNPE (TSNPE)
+  ```python
+  from sbi.inference import NPE
+  from sbi.utils import RestrictedPrior, get_density_thresholder
+
+  inference = NPE(prior)
+  proposal = prior
+  for _ in range(num_rounds):
+      theta = proposal.sample((num_sims,))
+      x = simulator(theta)
+      _ = inference.append_simulations(theta, x).train(force_first_round_loss=True)
+      posterior = inference.build_posterior().set_default_x(x_o)
+
+      accept_reject_fn = get_density_thresholder(posterior, quantile=1e-4)
+      proposal = RestrictedPrior(prior, accept_reject_fn, sample_with="rejection")
+  ```
+
 - sample with MCMC: `samples = posterior((num_samples,), x=x_o, sample_with_mcmc=True)`.
   This approach will make sampling slower, but samples will not "leak".
 
-- resort to single-round SNPE and (if necessary) increase your simulation budget.
+- resort to single-round NPE and (if necessary) increase your simulation budget.
 
 - if your prior is either Gaussian (torch.distributions.MultivariateNormal) or
   Uniform (sbi.utils.BoxUniform), you can avoid leakage by using a mixture density
@@ -25,5 +43,5 @@ interface](https://sbi-dev.github.io/sbi/tutorial/03_flexible_interface/), set
 `density_estimator='mdn'`. When running inference, there should be a print
 statement "Using SNPE-C with non-atomic loss".
 
-- use a different algorithm, e.g., SNRE and SNLE. Note, however, that these algorithms
-  can have different issues and potential pitfalls.
+- use a different algorithm, e.g., Sequential NRE and Sequential NLE. Note, however,
+  that these algorithms can have different issues and potential pitfalls.
````
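The warning quoted in this FAQ can be reproduced in miniature without `sbi`. The following torch-only sketch (toy numbers, hypothetical "posterior", not sbi code) shows why rejection sampling slows down when a learned posterior puts mass outside a box-shaped prior support:

```python
import torch

# Toy illustration of "leakage": a learned posterior may put mass outside
# the prior's box support; rejection sampling then discards every
# out-of-support draw, so collecting N accepted samples takes roughly
# N / acceptance_rate raw draws.
prior_low, prior_high = torch.zeros(2), torch.ones(2)
posterior = torch.distributions.MultivariateNormal(
    loc=torch.full((2,), 0.9),            # mass concentrated near the boundary
    covariance_matrix=0.04 * torch.eye(2),
)

draws = posterior.sample((10_000,))
inside = ((draws >= prior_low) & (draws <= prior_high)).all(dim=1)
acceptance_rate = inside.float().mean().item()

print(f"{100 * acceptance_rate:.1f}% of draws are within the prior support")
```

With the mean this close to the boundary, roughly half of the draws leak outside, which is exactly the regime in which MCMC or truncated proposals become attractive.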

docs/docs/faq/question_02_nans.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -16,15 +16,15 @@ Importantly, in order for neural network training to work well, the floating point
 number should still be in a reasonable range, i.e., maybe a few standard
 deviations outside of 'good' values.
 
-If you are running **multi-round** SNPE, however, things can go fully wrong if
+If you are running **multi-round** NPE (SNPE), however, things can go fully wrong if
 invalid data are encountered. In that case, you will get the following warning
 
 ```python
 When invalid simulations are excluded, multi-round SNPE-C can leak into the regions
 where parameters led to invalid simulations. This can lead to poor results.
 ```
 
-Hence, if you are running multi-round SNPE and a significant fraction of
+Hence, if you are running multi-round NPE and a significant fraction of
 simulations returns at least one invalid number, we strongly recommend manually
 replacing the value in your simulation code as described above (or resorting to
-single-round SNPE, or using a different `sbi` method entirely).
+single-round NPE, or using a different `sbi` method entirely).
````
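The recommendation to replace invalid values inside the simulation code can be sketched as follows (a hypothetical toy simulator; `torch.nan_to_num` is one way to substitute a finite penalty value, and the penalty of `-10.0` is an arbitrary illustration):

```python
import torch

def simulator(theta):
    x = torch.log(theta)  # yields NaN for theta < 0 and -inf for theta == 0
    # Replace invalid outputs with a large-but-finite penalty value so the
    # network still sees these parameters as producing 'bad' data, instead
    # of the simulations being silently excluded.
    return torch.nan_to_num(x, nan=-10.0, neginf=-10.0)

x = simulator(torch.tensor([-1.0, 0.0, 1.0]))
assert torch.isfinite(x).all()
```

As the FAQ text says, the finite substitute should stay within a few standard deviations of 'good' values so that training remains well conditioned.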

docs/docs/faq/question_04_gpu.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -10,7 +10,7 @@ Yes, we support GPU training. When creating the inference object in the flexible
 interface, you can pass the `device` as an argument, e.g.,
 
 ```python
-inference = SNPE(prior, device="cuda", density_estimator="maf")
+inference = NPE(prior, device="cuda", density_estimator="maf")
 ```
 
 The device is set to `"cpu"` by default. But it can be set to anything, as long
@@ -33,7 +33,7 @@ device="cuda:0"), covariance_matrix=torch.eye(2, device="cuda:0"))
 
 Whether or not you reduce your training time when training on a GPU depends on
 the problem at hand. We provide a couple of default density estimators for
-`SNPE`, `SNLE` and `SNRE`, e.g., a mixture density network
+`NPE`, `NLE` and `NRE`, e.g., a mixture density network
 (`density_estimator="mdn"`) or a Masked Autoregressive Flow
 (`density_estimator="maf"`). For these default density estimators, we do **not**
 expect a speed-up. This is because the underlying neural networks are relatively
````
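The device-matching point from this FAQ (the prior's tensors must live on the same device that is passed to the trainer) can be sketched in plain torch, with a CPU fallback so it also runs without a GPU; the trainer call itself is omitted:

```python
import torch
from torch.distributions import MultivariateNormal

# Build the prior on the same device you intend to pass to the inference
# object; fall back to CPU when no GPU is available.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prior = MultivariateNormal(
    loc=torch.zeros(2, device=device),
    covariance_matrix=torch.eye(2, device=device),
)

theta = prior.sample((100,))  # samples land on the same device as the prior
```

A mismatch (e.g., a CPU prior with `device="cuda"`) typically surfaces as a runtime error about tensors on different devices, which is why constructing both from one `device` variable is a convenient pattern.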

docs/docs/faq/question_06_resume_training.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -8,7 +8,7 @@ simulations are finished, `sbi` trains a neural network. If this process takes
 too long, you can stop training and resume it later. The syntax is:
 
 ```python
-inference = SNPE(prior=prior)
+inference = NPE(prior=prior)
 inference = inference.append_simulations(theta, x)
 inference.train(max_num_epochs=300)  # Pick `max_num_epochs` such that it does not exceed the runtime.
````
docs/docs/faq/question_07_custom_prior.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -81,7 +81,7 @@ Note that in `custom_prior_wrapper_kwargs` you can pass additional arguments for
 the wrapper, e.g., `validate_args` or `arg_constraints`; see the `Distribution`
 documentation for more details.
 
-If you are using `sbi` < 0.17.2 and use `SNLE` the code above will produce a
+If you are using `sbi` < 0.17.2 and use `NLE` the code above will produce a
 `NotImplementedError` (see [#581](https://github.com/mackelab/sbi/issues/581)).
-In this case, you need to update to a newer version of `sbi` or use `SNPE`
+In this case, you need to update to a newer version of `sbi` or use `NPE`
 instead.
````

docs/docs/index.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -5,7 +5,7 @@ interface:
 
 ```python
 import torch
-from sbi.inference import SNPE
+from sbi.inference import NPE
 
 # define shifted Gaussian simulator.
 def simulator(θ): return θ + torch.randn_like(θ)
@@ -15,7 +15,7 @@ def simulator(θ): return θ + torch.randn_like(θ)
 x = simulator(θ)
 
 # choose sbi method and train
-inference = SNPE()
+inference = NPE()
 inference.append_simulations(θ, x).train()
 
 # do inference given observed data
@@ -103,8 +103,8 @@ The methods then proceed by
 full space of parameters consistent with the data and the prior, i.e. the
 posterior distribution. The posterior assigns high probability to parameters
 that are consistent with both the data and the prior, and low probability to
-inconsistent parameters. While SNPE directly learns the posterior
-distribution, SNLE and SNRE need an extra MCMC sampling step to construct a
+inconsistent parameters. While NPE directly learns the posterior
+distribution, NLE and NRE need an extra MCMC sampling step to construct a
 posterior.
 4. If needed, an initial estimate of the posterior can be used to adaptively
    generate additional informative simulations.
````
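The shifted-Gaussian toy simulator in the quick-start snippet has a closed-form posterior, which makes a handy sanity check for any trained estimator (plain torch, independent of `sbi`; assumes the standard-normal prior used in such toy setups):

```python
import torch

# With prior θ ~ N(0, I) and likelihood x | θ ~ N(θ, I) (the shifted
# Gaussian), the exact posterior is θ | x ~ N(x / 2, I / 2).
x_o = torch.tensor([1.0, -2.0])
exact_posterior = torch.distributions.MultivariateNormal(
    loc=x_o / 2,
    covariance_matrix=0.5 * torch.eye(2),
)

samples = exact_posterior.sample((50_000,))
print(samples.mean(dim=0))  # ≈ tensor([0.5, -1.0])
```

Comparing samples from a trained posterior against this reference distribution is a quick way to verify an installation or a training run on the toy problem.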

docs/docs/reference/inference.md

Lines changed: 12 additions & 12 deletions

````diff
@@ -1,48 +1,48 @@
 # Inference
 
-## Algorithms
+## Training algorithms
 
-::: sbi.inference.snpe.snpe_a.SNPE_A
+::: sbi.inference.trainers.npe.npe_a.NPE_A
     selection:
       filters: [ "!^_", "^__", "!^__class__" ]
     inherited_members: true
 
-::: sbi.inference.snpe.snpe_c.SNPE_C
+::: sbi.inference.trainers.npe.npe_c.NPE_C
    selection:
       filters: [ "!^_", "^__", "!^__class__" ]
     inherited_members: true
 
-::: sbi.inference.fmpe.fmpe_base.FMPE
+::: sbi.inference.trainers.fmpe.fmpe.FMPE
     selection:
       filters: [ "!^_", "^__", "!^__class__" ]
     inherited_members: true
 
-::: sbi.inference.npse.npse.NPSE
+::: sbi.inference.trainers.npse.npse.NPSE
     selection:
       filters: [ "!^_", "^__", "!^__class__" ]
     inherited_members: true
 
-::: sbi.inference.snle.snle_a.SNLE_A
+::: sbi.inference.trainers.nle.nle_a.NLE_A
    selection:
       filters: [ "!^_", "^__", "!^__class__" ]
     inherited_members: true
 
-::: sbi.inference.snre.snre_a.SNRE_A
+::: sbi.inference.trainers.nre.nre_a.NRE_A
    selection:
       filters: [ "!^_", "^__", "!^__class__" ]
     inherited_members: true
 
-::: sbi.inference.snre.snre_b.SNRE_B
+::: sbi.inference.trainers.nre.nre_b.NRE_B
    selection:
       filters: [ "!^_", "^__", "!^__class__" ]
     inherited_members: true
 
-::: sbi.inference.snre.snre_c.SNRE_C
+::: sbi.inference.trainers.nre.nre_c.NRE_C
    selection:
       filters: [ "!^_", "^__", "!^__class__" ]
     inherited_members: true
 
-::: sbi.inference.snre.bnre.BNRE
+::: sbi.inference.trainers.nre.bnre.BNRE
    selection:
       filters: [ "!^_", "^__", "!^__class__" ]
     inherited_members: true
@@ -59,9 +59,9 @@
 
 ## Helpers
 
-::: sbi.inference.base.infer
+::: sbi.inference.trainers.base.infer
 
-::: sbi.inference.base.simulate_for_sbi
+::: sbi.inference.trainers.base.simulate_for_sbi
 
 ::: sbi.utils.user_input_checks.process_prior
````