
Sensitivity analysis - access the jacobian #1553

@MolinAlexei

Description


I am trying to evaluate the relevance of a summary statistic I am using. My logic is the following:
Given the model producing the observable, $m(\theta, \alpha) \rightarrow X$, where $\theta$ are the parameters I am doing inference on, $X$ is the observable, and $\alpha$ is an internal nuisance parameter of the model (e.g. measurement error):
If, for a fixed $\alpha$, this model is a bijection from the space of possible $\theta$ (call it $\Omega$) to its image $m(\Omega)$, then the observable I am using is ideal, in the sense that running inference with sbi should give me the smallest possible posterior distribution of $\theta$, given the distribution of $\alpha$. Conversely, if $m$ is not a bijection, my observable will produce degeneracies in the input parameters, or I will not be able to put the best constraint on some of them.
I would therefore like to evaluate the bijectivity of $m$. It is automatically surjective from $\Omega$ to $m(\Omega)$, so only injectivity needs to be checked, and injectivity can be evaluated via the Jacobian, which is already computed as part of the Hessian in the tutorial "Active subspaces for sensitivity analysis". If I have access to the Jacobian and can show that its rank is $N$, with $N$ the dimension of $\theta$, then I should have shown that $m$ is (at least locally) injective.
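The rank check described above can be sketched with plain PyTorch, independently of sbi's internals. The two toy models below are hypothetical placeholders for $m(\theta, \alpha)$, just to show a full-rank case versus a degenerate one; `jacobian_rank` is not an existing sbi function.

```python
import torch


def jacobian_rank(model, theta, alpha, atol=1e-6):
    """Numerical rank of d m / d theta at the point (theta, alpha).

    If the rank equals dim(theta) at the points of interest, m is locally
    injective in theta there (a necessary condition for injectivity).
    """
    jac = torch.autograd.functional.jacobian(lambda t: model(t, alpha), theta)
    return torch.linalg.matrix_rank(jac, atol=atol).item()


# Hypothetical model whose Jacobian w.r.t. theta has full column rank.
def full_rank_model(theta, alpha):
    return torch.stack(
        [theta[0] + alpha * theta[1], theta[0] - theta[1], theta[1] ** 2]
    )


# Hypothetical degenerate model: it only sees theta[0] + theta[1],
# so the two parameters cannot be disentangled from the observable.
def degenerate_model(theta, alpha):
    s = theta[0] + theta[1]
    return torch.stack([s, alpha * s])


theta = torch.tensor([0.5, 1.5])
print(jacobian_rank(full_rank_model, theta, alpha=2.0))   # rank 2: locally injective
print(jacobian_rank(degenerate_model, theta, alpha=2.0))  # rank 1: degeneracy
```

In practice one would evaluate the rank at many $\theta$ samples (e.g. drawn from the prior), since full rank at a single point only establishes local injectivity there.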

This would indicate whether the summary statistic I chose as my observable is a good way to retrieve the input parameters, or whether it has inherent degeneracies that prevent me from obtaining the best possible posterior on the parameters.

I hope I am making sense, and if so, I hope it can be implemented in a similar fashion to the sensitivity analysis.

Edit: after doing some research, I realize that the information I am looking for is best described by the Fisher matrix, which is already evaluated in the sensitivity analysis as $\mathbb{E}_{p(\theta \mid x_0)}\left[\nabla_\theta \log p(\theta \mid x_0)\, \nabla_\theta \log p(\theta \mid x_0)^T\right]$. By evaluating this term for the posterior log-probability obtained with and without the summary statistic, I can compare how close my summary statistic is to ideal by comparing the determinants of the two Fisher matrices.
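That Monte Carlo Fisher estimate can be sketched as follows. This is a minimal standalone version, assuming you have posterior samples and a differentiable log-probability; with sbi, `log_prob` would stand in for something like the posterior's log-probability method, and the Gaussian toy posterior here is only used so the estimate can be checked against a known answer (for a Gaussian, the Fisher matrix equals the precision matrix).

```python
import torch


def empirical_fisher(log_prob, samples):
    """Monte Carlo estimate of E_p[ grad log p(theta)  grad log p(theta)^T ]."""
    grads = []
    for theta in samples:
        theta = theta.clone().requires_grad_(True)
        (g,) = torch.autograd.grad(log_prob(theta), theta)
        grads.append(g)
    G = torch.stack(grads)        # shape (n_samples, dim)
    return G.T @ G / G.shape[0]   # shape (dim, dim)


# Toy Gaussian "posterior" with known precision matrix P:
# its Fisher matrix is exactly P, so logdet(F) should approach logdet(P) = 0.
P = torch.tensor([[2.0, 0.0], [0.0, 0.5]])
log_prob = lambda th: -0.5 * th @ P @ th

samples = torch.distributions.MultivariateNormal(
    torch.zeros(2), precision_matrix=P
).sample((5000,))

F = empirical_fisher(log_prob, samples)
print(torch.logdet(F))  # larger log-determinant = tighter overall constraints
```

Comparing `torch.logdet(F)` for the two posteriors (with and without the summary statistic) then implements the determinant comparison described above; the log-determinant is usually more numerically stable than the raw determinant.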


Labels: enhancement (New feature or request)
