
Commit 2ca24e3

committed
up
1 parent 456fbb7 commit 2ca24e3

2 files changed: +26, -10 lines changed


docs/Project.toml

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ ModelingToolkit = "961ee093-0014-501f-94e3-6117800e7a78"
 NonlinearSolve = "8913a72c-1f9b-4ce2-8d82-65094dcecaec"
 Optim = "429524aa-4258-5aef-a3af-852621145aeb"
 Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
+OptimizationBBO = "3e6eede4-6085-4f62-9a71-46d9bc1eb92b"
 OptimizationOptimisers = "42dfb2eb-d2b4-4451-abcd-913932933ac1"
 OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"
 PEtab = "48d54b35-e43e-4a66-a5a1-dde6b987cf69"

docs/src/inverse_problems/behaviour_optimization.md

Lines changed: 25 additions & 10 deletions
@@ -1,8 +1,8 @@
 # Optimization for non-data fitting purposes
-In previous tutorials we have described how to use [PEtab.jl](@ref petab_parameter_fitting) and [Optimization.jl](@ref optimization_parameter_fitting) for parameter fitting. This involves solving an optimisation problem (to find the parameter set best fitting the data). There are, however, other situations that requires solving optimisation problems. Typically, these involves the creation of a custom cost function, which optimum can then be found using Optimization.jl. In this tutorial we will describe this process, demonstrating how parameter space can be searched to find values that achieves a desired system behaviour.
+In previous tutorials we have described how to use [PEtab.jl](@ref petab_parameter_fitting) and [Optimization.jl](@ref optimization_parameter_fitting) for parameter fitting. This involves solving an optimisation problem (to find the parameter set yielding the best model-to-data fit). There are, however, other situations that require solving optimisation problems. Typically, these involve the creation of a custom cost function, whose optimum can then be found using Optimization.jl. In this tutorial we will describe this process, demonstrating how parameter space can be searched to find values that achieve a desired system behaviour. A more thorough description of how to solve these problems is provided by [Optimization.jl's documentation](https://docs.sciml.ai/Optimization/stable/).
 
-## Maximising the pulse amplitude of a incoherent feed forward loops.
-Incoherent feedforward loops (network motifs where a single components both activates and deactivates a downstream component) are able to generate pules in response to step inputs [^1]. In this tutorial we will consider such a incoherent feedforward loop, attempting to generate a system with as prominent response pulse as possible. Our model consists of 3 species: $X$ (the input node), $Y$ (an intermediary), and $Z$ (the output node). In it, $X$ activates the production of both $Y$ and $Z$, with $Y$ also deactivating $Z$. When $X$ is activated, their will be a brief time window where $Y$ is still inactive, and $Z$ is activated. However, as $Y$ becomes active, it will turn $Z$ off, creating a pulse.
+## Maximising the pulse amplitude of an incoherent feedforward loop
+Incoherent feedforward loops (network motifs where a single component both activates and deactivates a downstream component) are able to generate pulses in response to step inputs [^1]. In this tutorial we will consider such an incoherent feedforward loop, attempting to generate a system with as prominent a response pulse as possible. Our model consists of 3 species: $X$ (the input node), $Y$ (an intermediary), and $Z$ (the output node). In it, $X$ activates the production of both $Y$ and $Z$, with $Y$ also deactivating $Z$. When $X$ is activated, there will be a brief time window where $Y$ is still inactive and $Z$ is activated. However, as $Y$ becomes active, it will turn $Z$ off, creating a pulse.
 ```@example behaviour_optimization
 using Catalyst
 incoherent_feed_forward = @reaction_network begin
@@ -20,7 +20,7 @@ example_p = [pX => 0.1, pY => 1.0, pZ => 1.0]
 example_u0 = [X => 0.1, Y => 0.1, Z => 1.0]
 ode_prob = ODEProblem(incoherent_feed_forward, example_u0, (0.0,50.0), example_p)
 ```
-To trigger he activation of $X$ we will use a [callback](@ref advanced_simulations_callbacks), increasing its production rate ($pX$) by a factor of $10$ at the time $t=10.0$. We supply the callback to our simulation, and plot the result
+To trigger the activation of $X$ we will use a [callback](@ref advanced_simulations_callbacks), increasing its production rate ($pX$) by a factor of $10$ at the time $t=10.0$. We supply the callback to our simulation, and plot the result:
 ```@example behaviour_optimization
 using Plots
 activation_cb = PresetTimeCallback([10.0], int -> int[pX] *= 10.0)
@@ -40,18 +40,18 @@ function pulse_amplitude(p, _)
 end
 nothing # hide
 ```
-The cost function takes two arguments (a parameter value `p`, and an additional one which we will ignore). It first calculates the new initial steady concentration (for the given parameter set), and then creates an updated `ODEProblem` using it as initial conditions (and the new parameter set as parameters). While we could create a new `ODEProblem` within the cost function, these functions are often called a large number of times during the optimization process (making performance important). Here, using [`remake` on a previously created `ODEProblem`](@id simulation_structure_interfacing_remake) is more performant than creating an new one. Next, we simulate our, remembering to use the activation callback. Just like when using Optimization.jl to fit parameters, we use the `verbose=false` options to prevent unnecessary print outs, and a reduced `maxiters` value to reduce time spent simulating (to the model) unsuitable parameter sets. We also use `SciMLBase.successful_retcode(sol)` to check whether the simulation return code indicate a successful simulation (and if it did not, returns a large cost function value). Finally, Optimization.jl finds the function's *minimum value*, so to find the *maximum* relative pulse amplitude, we make our cost function return the negation of that value.
+The cost function takes two arguments (a parameter value `p`, and an additional one which we will ignore here). It first calculates the new initial steady state concentration (for the given parameter set), and then creates an updated `ODEProblem`, using that steady state as the initial condition and the provided parameter set as parameters. While we could create a new `ODEProblem` within the cost function, cost functions are often called a large number of times during the optimisation process (making performance important). Here, using [`remake` on a previously created `ODEProblem`](@id simulation_structure_interfacing_remake) is more performant than creating a new one. Next, we simulate our model, remembering to use the activation callback. Just like [when using Optimization.jl to fit parameters](), we use the `verbose=false` option to prevent unnecessary printouts, and a reduced `maxiters` value to reduce the time spent simulating parameter sets that are unsuitable for the model. We also use `SciMLBase.successful_retcode(sol)` to check whether the simulation's return code indicates a successful simulation (returning a large cost value if it does not). Finally, Optimization.jl finds the function's *minimum value*, so to find the *maximum* relative pulse amplitude, we make our cost function return the negation of that value.
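The pattern described above (returning a large cost when a run fails, and negating the quantity we want to maximise) can be sketched in plain Julia with a toy stand-in for the simulation; all names below are hypothetical and not part of the tutorial's model:

```julia
# Toy stand-in for "simulate the model and measure the pulse amplitude".
# Non-positive parameters make the "simulation" fail (returns nothing);
# otherwise the amplitude peaks at u[1] = 2.0.
toy_amplitude(u) = any(x -> x <= 0.0, u) ? nothing : 1 / (1 + (u[1] - 2.0)^2)

# Cost function in the f(u, p) form expected by Optimization.jl: the second
# argument is unused here (hence the `_` name).
function toy_cost(u, _)
    amp = toy_amplitude(u)           # stand-in for the ODE solve
    amp === nothing && return 1e9    # failed run: return a large cost value
    return -amp                      # negate, since the optimiser minimises
end
```

For example, `toy_cost([2.0], nothing)` evaluates to `-1.0` (the negated maximal amplitude), while the failing `toy_cost([-1.0], nothing)` returns the guard value `1.0e9`.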
 
 Just like for [parameter fitting](@ref optimization_parameter_fitting), we create an `OptimizationProblem` using our cost function, and some initial guess of the parameter values. We also set upper and lower bounds for each parameter using the `lb` and `ub` optional arguments (in this case limiting each parameter's value to the interval $(0.1,10.0)$).
 ```@example behaviour_optimization
 using Optimization
 initial_guess = [1.0, 1.0, 1.0]
-opt_prob = OptimizationProblem(cost_function, initial_guess; lb = [1e-1, 1e-1, 1e-1], ub = [1e1, 1e1, 1e1])
+opt_prob = OptimizationProblem(pulse_amplitude, initial_guess; lb = [1e-1, 1e-1, 1e-1], ub = [1e1, 1e1, 1e1])
 ```
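The role of the `lb`/`ub` box bounds can be illustrated with a minimal derivative-free search sketch in plain Julia (hypothetical names; real optimisers such as those Optimization.jl wraps are far more sophisticated): sample candidate parameter sets uniformly within the bounds and keep the best.

```julia
# Toy box-bounded random search: every candidate u satisfies lb .<= u .<= ub,
# mirroring how `lb`/`ub` constrain the optimiser above. `f` follows the
# f(u, p) cost-function form; we pass `nothing` for the unused parameters.
function random_search(f, lb, ub; iters = 10_000)
    best_u = lb .+ rand(length(lb)) .* (ub .- lb)   # random point in the box
    best_c = f(best_u, nothing)
    for _ in 1:iters
        u = lb .+ rand(length(lb)) .* (ub .- lb)
        c = f(u, nothing)
        if c < best_c
            best_u, best_c = u, c                   # keep the best candidate
        end
    end
    return best_u, best_c
end

# Usage sketch: minimise a simple quadratic within [0.1, 10.0].
u_best, c_best = random_search((u, p) -> (u[1] - 0.5)^2, [0.1], [10.0])
```

The returned `u_best` always lies inside the box, which is the essential property the bounds provide.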
 !!! note
-    As described in a previous section on Optimization.jl, `OptimizationProblem`s do not support setting parameter values using maps. We must instead set `initial_guess`'s values using a vector. Here, the i'th value correspond to the value of the i'th parameter in the `parameters(incoherent_feed_forward)` vector.
+    As described in a [previous section on Optimization.jl](), `OptimizationProblem`s do not support setting parameter values using maps. We must instead set `initial_guess`'s values using a vector. Here, the i'th value corresponds to the value of the i'th parameter in the `parameters(incoherent_feed_forward)` vector.
 
-Like described here, Optimization.jl supports a wide range of optimisation algorithms. Here we use one from BlackBoxOptim.jl
+As [previously described](), Optimization.jl supports a wide range of optimisation algorithms. Here we use one from [BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl):
 ```@example behaviour_optimization
 using OptimizationBBO
 opt_sol = solve(opt_prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
@@ -63,11 +63,26 @@ oprob_result = remake(ode_prob; u0=u0_result, p=opt_sol.u)
 sol_result = solve(oprob_result, Tsit5(); callback=activation_cb)
 plot(sol_result; lw=4, idxs=Z)
 ```
-For this model, it actually turns out that $Z$'s maximum pulse amplitude is equal to twice its steady state concentration. Hence, the maximisation of its pulse amplitude is equivalent to maximising its steady state concentration.
+For this model, it turns out that $Z$'s maximum pulse amplitude is equal to twice its steady state concentration. Hence, maximising its pulse amplitude is equivalent to maximising its steady state concentration.
 
 !!! note
-    Especially if you check Optimization.jl's documentation, you will note that cost functions have teh `f(u,p)` form. This is because `OptimizationProblem`s (like e.g. `ODEProblem`s) can take both variables (which can be varied in the optimisation problem), but also parameter that are fixed. In our case, the *optimisation variables* correspond to our *model parameters*. Hence, our model parameter values is the `u` input. This is also why we find the optimisation solution (our optimised parameter set) in `opt_sol`'s `u` field. Our optimisation problem does not actually have any parameters, hence, the second argument of `pulse_amplitude` is unused (that is why we call it `_`, a name commonly indicating unused function arguments). There are several modifications to our problem where it would actually have parameters. E.g. our model might have had additional parameters (e.g. a degradation rate) which we would like to keep fixed throughout the optimisation process. If we then would like to run the optimisation process for several different values of these fixed parameters, we could have made them parameters to our `OptimizationProblem` (and their values provided as a third argument, after `initial_guess`).
+    Especially if you check Optimization.jl's documentation, you will note that cost functions have the `f(u,p)` form. This is because `OptimizationProblem`s (like e.g. `ODEProblem`s) can take both variables (which can be varied in the optimisation problem) and parameters that are fixed. In our case, the *optimisation variables* correspond to our *model parameters*. Hence, our model parameter values are the `u` input. This is also why we find the optimisation solution (our optimised parameter set) in `opt_sol`'s `u` field. Our optimisation problem does not actually have any parameters; hence, the second argument of `pulse_amplitude` is unused (which is why we call it `_`, a name commonly indicating unused function arguments).
+
+    There are several modifications to our problem where it would actually have parameters. E.g. our model might have additional parameters (such as a degradation rate) which we wish to keep fixed throughout the optimisation process. If we would then like to run the optimisation process for several different values of these fixed parameters, we could make them parameters to our `OptimizationProblem` (with their values provided as a third argument, after `initial_guess`).
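The `f(u, p)` split described in this note can be illustrated with a toy cost function (hypothetical names; `p` holds a constant that stays fixed while `u` is optimised):

```julia
# u: optimisation variables; p: fixed problem parameters (not optimised).
# For a given fixed p, the minimum (cost 0) lies at u = (p[1], 2p[1]).
toy_f(u, p) = (u[1] - p[1])^2 + (u[2] - 2p[1])^2

p_fixed = [1.5]              # e.g. a fixed degradation-rate-like constant
toy_f([1.5, 3.0], p_fixed)   # at the minimum the cost is 0.0
```

Re-running the optimisation for a different fixed constant then only requires supplying a different `p`, without rewriting the cost function.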
 
+## Utilising automatic differentiation
+Optimisation methods can be divided into differentiation-free and differentiation-based methods. E.g. consider finding the minimum of the function $f(x) = x^2$, given some initial guess. Here, we can simply compute the derivative and descend along it until we find $x=0$ (admittedly, for this simple problem the minimum can be computed directly). This principle forms the basis of optimisation methods such as gradient descent, which utilise information about a function's derivatives to minimise it. When attempting to find a global minimum, these methods are often augmented by additional routines to avoid getting stuck in local minima. While the differentiation of most algebraic functions is trivial, it turns out that even complicated functions (such as the one we used above) can be differentiated through the use of [*automatic differentiation* (AD)](https://en.wikipedia.org/wiki/Automatic_differentiation).
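The two ideas above, differentiation-based descent and automatic differentiation, can be sketched together in plain Julia (a toy illustration with hypothetical names; real AD packages such as ForwardDiff.jl are far more general):

```julia
# Minimal forward-mode AD via dual numbers: carry a value and its
# derivative through arithmetic together.
struct Dual
    val::Float64   # function value
    der::Float64   # derivative, propagated alongside the value
end
Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)

# Differentiate f at x by seeding the derivative slot with 1.
derivative(f, x) = f(Dual(x, 1.0)).der

# Gradient descent: repeatedly step against the (AD-computed) derivative.
function descend(f, x; rate = 0.1, steps = 200)
    for _ in 1:steps
        x -= rate * derivative(f, x)
    end
    return x
end

g(x) = x * x   # g'(x) = 2x; minimum at x = 0
```

Here `derivative(g, 3.0)` yields `6.0` without any symbolic manipulation, and `descend(g, 5.0)` converges towards the minimum at `0`.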
+
+Through packages such as [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl), [ReverseDiff.jl](https://github.com/JuliaDiff/ReverseDiff.jl), and [Zygote.jl](https://github.com/FluxML/Zygote.jl), Julia supports AD for most code. Specifically for code including the simulation of differential equations, differentiation is supported by [SciMLSensitivity.jl](https://github.com/SciML/SciMLSensitivity.jl). Generally, AD can be used without specific input from the user; however, it requires an additional step in the construction of our `OptimizationProblem`. Here, we create a [specialised `OptimizationFunction` from our cost function](https://docs.sciml.ai/Optimization/stable/API/optimization_function/#optfunction). To it, we also provide our choice of AD method. There are [several alternatives](https://docs.sciml.ai/Optimization/stable/API/optimization_function/#Automatic-Differentiation-Construction-Choice-Recommendations), and in our case we will use `AutoForwardDiff()` (a good choice for small optimisation problems). We then create a new `OptimizationProblem` using our updated cost function:
+```@example behaviour_optimization
+opt_func = OptimizationFunction(pulse_amplitude, AutoForwardDiff())
+opt_prob = OptimizationProblem(opt_func, initial_guess; lb = [1e-1, 1e-1, 1e-1], ub = [1e1, 1e1, 1e1])
+```
+Finally, we can find the optimum using a differentiation-based optimisation method. Here we will use [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl)'s `BFGS` method:
+```@example behaviour_optimization
+using OptimizationOptimJL
+opt_sol = solve(opt_prob, OptimizationOptimJL.BFGS())
+```
 
 ---
 ## References