|
8 | 8 | "\n",
|
9 | 9 | "For many simulators, the output of the simulator can be ill-defined or it can have non-sensical values. For example, in neuroscience models, if a specific parameter set does not produce a spike, features such as the spike shape can not be computed. When using `sbi`, such simulations that have `NaN` or `inf` in their output are discarded during neural network training. This can lead to inefficetive use of simulation budget: we carry out many simulations, but a potentially large fraction of them is discarded.\n",
|
10 | 10 | "\n",
|
11 |
| - "In this tutorial, we show how we can use `sbi` to learn regions in parameter space that produce `valid` simulation outputs, and thereby improve the sampling efficiency. The key idea of the method is to use a classifier to distinguish parameters that lead to `valid` simulations from regions that lead to `invalid` simulations. After we have obtained the region in parameter space that produes `valid` simulation outputs, we train the deep neural density estimator used in `SNPE`. The method was originally proposed in [Lueckmann, Goncalves et al. 2017](https://arxiv.org/abs/1711.01861) and later used in [Deistler et al. 2021](https://www.biorxiv.org/content/10.1101/2021.07.30.454484v3.abstract).\n" |
| 11 | + "In this tutorial, we show how we can use `sbi` to learn regions in parameter space that produce `valid` simulation outputs, and thereby improve the sampling efficiency. The key idea of the method is to use a classifier to distinguish parameters that lead to `valid` simulations from regions that lead to `invalid` simulations. After we have obtained the region in parameter space that produes `valid` simulation outputs, we train the deep neural density estimator used in `NPE`. The method was originally proposed in [Lueckmann, Goncalves et al. 2017](https://arxiv.org/abs/1711.01861) and later used in [Deistler et al. 2021](https://www.biorxiv.org/content/10.1101/2021.07.30.454484v3.abstract).\n" |
12 | 12 | ]
|
13 | 13 | },
|
14 | 14 | {
|
|
23 | 23 | "metadata": {},
|
24 | 24 | "source": [
|
25 | 25 | "```python\n",
|
26 |
| - "from sbi.inference import SNPE\n", |
| 26 | + "from sbi.inference import NPE\n", |
27 | 27 | "from sbi.utils import RestrictionEstimator\n",
|
28 | 28 | "\n",
|
29 | 29 | "restriction_estimator = RestrictionEstimator(prior=prior)\n",
|
|
40 | 40 | "\n",
|
41 | 41 | "all_theta, all_x, _ = restriction_estimator.get_simulations()\n",
|
42 | 42 | "\n",
|
43 |
| - "inference = SNPE(prior=prior)\n", |
| 43 | + "inference = NPE(prior=prior)\n", |
44 | 44 | "density_estimator = inference.append_simulations(all_theta, all_x).train()\n",
|
45 | 45 | "posterior = inference.build_posterior()\n",
|
46 | 46 | "```\n"
|
|
70 | 70 | "import torch\n",
|
71 | 71 | "\n",
|
72 | 72 | "from sbi.analysis import pairplot\n",
|
73 |
| - "from sbi.inference import SNPE, simulate_for_sbi\n", |
| 73 | + "from sbi.inference import NPE, simulate_for_sbi\n", |
74 | 74 | "from sbi.utils import BoxUniform, RestrictionEstimator\n",
|
75 | 75 | "\n",
|
76 | 76 | "_ = torch.manual_seed(2)"
|
|
293 | 293 | "cell_type": "markdown",
|
294 | 294 | "metadata": {},
|
295 | 295 | "source": [
|
296 |
| - "We can now use **all** simulations and run `SNPE` as always:\n" |
| 296 | + "We can now use **all** simulations and run `NPE` as always:\n" |
297 | 297 | ]
|
298 | 298 | },
|
299 | 299 | {
|
|
350 | 350 | " _,\n",
|
351 | 351 | ") = restriction_estimator.get_simulations() # Get all simulations run so far.\n",
|
352 | 352 | "\n",
|
353 |
| - "inference = SNPE(prior=prior)\n", |
| 353 | + "inference = NPE(prior=prior)\n", |
354 | 354 | "density_estimator = inference.append_simulations(all_theta, all_x).train()\n",
|
355 | 355 | "posterior = inference.build_posterior()\n",
|
356 | 356 | "\n",
|
|
386 | 386 | "name": "python",
|
387 | 387 | "nbconvert_exporter": "python",
|
388 | 388 | "pygments_lexer": "ipython3",
|
389 |
| - "version": "3.12.4" |
| 389 | + "version": "3.10.14" |
390 | 390 | },
|
391 | 391 | "toc": {
|
392 | 392 | "base_numbering": 1,
|
|
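The failure mode described in the tutorial text is easy to reproduce. Below is a minimal toy sketch (not part of the notebook; the two-dimensional prior, the `simulator`, and its `NaN` region are illustrative assumptions) showing how a large fraction of a naive simulation budget can end up discarded:

```python
import torch

from sbi.utils import BoxUniform

# Toy two-dimensional prior (illustrative assumption, not from the notebook).
prior = BoxUniform(low=-2 * torch.ones(2), high=2 * torch.ones(2))


def simulator(theta: torch.Tensor) -> torch.Tensor:
    """Toy simulator: theta plus noise, but NaN whenever the first parameter is negative."""
    x = theta + 0.1 * torch.randn_like(theta)
    invalid = theta[:, 0] < 0.0  # the 'invalid' region of parameter space
    x[invalid] = float("nan")
    return x


theta = prior.sample((1000,))
x = simulator(theta)
# Roughly half of these simulations are NaN and would be discarded by NPE.
print(f"Discarded fraction: {torch.isnan(x).any(dim=1).float().mean():.2f}")
```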
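The hunks above elide the restriction rounds in the middle of the snippet, and those lines stay elided here. As a hedged sketch only (assuming the `RestrictionEstimator` round interface from the sbi documentation, `append_simulations` / `train` / `restrict_prior`, and reusing the toy `prior` and `simulator` from the previous sketch; the notebook's elided lines may differ), the loop would look roughly like this:

```python
from sbi.inference import NPE, simulate_for_sbi
from sbi.utils import RestrictionEstimator

restriction_estimator = RestrictionEstimator(prior=prior)
proposal = prior

for _ in range(2):  # a couple of restriction rounds
    theta, x = simulate_for_sbi(simulator, proposal, num_simulations=500)
    restriction_estimator.append_simulations(theta, x)
    restriction_estimator.train()  # classifier: valid vs. invalid outputs
    proposal = restriction_estimator.restrict_prior()  # avoids the NaN region

# Pool *all* simulations run so far and train NPE as in the diff above.
all_theta, all_x, _ = restriction_estimator.get_simulations()
inference = NPE(prior=prior)
density_estimator = inference.append_simulations(all_theta, all_x).train()
posterior = inference.build_posterior()
```

Each round retrains the classifier on all simulations gathered so far, and the restricted prior then proposes parameters only from the region classified as `valid`, so later rounds waste far fewer simulations on `NaN` outputs.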