Description
In some models, it is handy to constrain parameters dynamically (e.g., based on data or on other parameters). This means that the support of a parameter during simulation differs from its support during inference.
A simple (silly) example would be estimating the upper bound $u$ of a uniform distribution, $x_i \sim \mathrm{Uniform}(0, u)$.
Then we know that upon observing data $x$, any admissible $u$ must satisfy $u \ge \max(x)$.
To make sure this constraint is respected during inference, the model can be reparametrized:
```python
import numpy as np
import bayesflow as bf

def prior():
    return dict(u=np.random.exponential(1))

def likelihood(u):
    return dict(x=np.random.uniform(low=0, high=u, size=10))

def reparam(u, x):
    return dict(u_shifted=u - np.max(x))

simulator = bf.make_simulator([prior, likelihood, reparam])
```
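Running the three functions by hand (plain NumPy, no BayesFlow needed; the seed and variable names are just for illustration) shows that the shifted parameter is nonnegative by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def prior():
    return dict(u=rng.exponential(1))

def likelihood(u):
    return dict(x=rng.uniform(low=0, high=u, size=10))

def reparam(u, x):
    return dict(u_shifted=u - np.max(x))

p = prior()
obs = likelihood(**p)
shifted = reparam(p["u"], obs["x"])

# Uniform(0, u) draws always lie below u, so u - max(x) > 0.
assert shifted["u_shifted"] > 0
```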
We can then learn the unconstrained version of the shifted upper bound:
```python
adapter.constrain("u_shifted", lower=0).rename("u_shifted", "inference_variables")
```
The downside is that in order to obtain the posterior of `u` during inference, one has to manually compute it as `u_shifted + max(x)`. This leads me to the question: could the adapter be extended to handle dynamic bounds, so that both the forward and the backward transforms are automatic?
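A sketch of the manual back-transform (the data and `u_shifted_samples` values are hypothetical; the latter stands in for posterior draws produced by the approximator):

```python
import numpy as np

x = np.array([0.3, 1.2, 0.7])                    # observed data
u_shifted_samples = np.array([0.05, 0.4, 0.11])  # hypothetical posterior draws of u_shifted

# Undo the reparametrization: u = u_shifted + max(x)
u_samples = u_shifted_samples + np.max(x)

# All recovered draws respect the dynamic lower bound max(x).
assert np.all(u_samples >= np.max(x))
```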
While this may sound contrived, it does come up in real applications. For example, dynamic constraints are a standard feature of basic evidence accumulation models, where the non-decision time parameter cannot exceed the observed response times. Stan, for example, also allows dynamic parameter constraints: https://mc-stan.org/docs/reference-manual/types.html#expressions-as-bounds-and-offsetmultiplier.