This package implements a grid-based approach to Bayesian adaptive design optimization. After each observation, the optimizer chooses the experimental design that maximizes the mutual information between the model parameters and the design parameters. In so doing, the optimizer selects designs that minimize the variance of the posterior distribution of the model parameters.
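To make the selection criterion concrete, the sketch below computes the mutual information for each candidate design on a discrete parameter grid with a binary outcome. It is a self-contained illustration rather than the package's internal implementation; the prior, likelihood values, and function name are hypothetical.

```julia
# Hypothetical setup: 2 candidate designs, a grid of 3 parameter values,
# and a binary (Bernoulli) outcome.
prior = fill(1/3, 3)               # uniform prior over the parameter grid
p_true = [0.2 0.5 0.8;             # p(y = true | θ, d): rows are designs,
          0.4 0.5 0.6]             # columns are grid points (made-up values)

function mutual_information(prior, p_true)
    n_designs = size(p_true, 1)
    mi = zeros(n_designs)
    for d in 1:n_designs
        for y in (true, false)
            # likelihood p(y | θ, d) across the parameter grid
            like = y ? p_true[d, :] : 1 .- p_true[d, :]
            # marginal p(y | d), averaging over the prior
            marg = sum(prior .* like)
            # accumulate Σ_θ Σ_y p(θ) p(y|θ,d) log(p(y|θ,d) / p(y|d))
            mi[d] += sum(prior .* like .* log.(like ./ marg))
        end
    end
    return mi
end

mi = mutual_information(prior, p_true)
best = argmax(mi)   # design 1: its likelihoods vary more across the grid,
                    # so its outcome is more informative about θ
```

Design 1 is selected because its predicted outcome probabilities differ more strongly across parameter values, so observing its outcome discriminates among them better.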

# Example

In this example, we will optimize a decision-making experiment for the model called Transfer of Attention Exchange (TAX; Birnbaum, 2008). Additional examples can be found in the folder titled Examples.

```julia
using AdaptiveDesignOptimization, Random, UtilityModels, Distributions
|
# ...
data_list = (choice=[true, false],)
|
```

## Optimize Experiment
|
75 | 75 |
|
In the following code blocks, we will run an optimized experiment and a random experiment. The first step is to generate the optimizer with the constructor `Optimizer`. Next, we specify true parameters for generating data from the model and initialize a `DataFrame` to collect the results on each simulated trial. In the experiment loop, data are generated with `simulate` and passed to `update!`, which optimizes the design for the next trial. Finally, the posterior mean and standard deviation of each parameter are added to the `DataFrame`. A similar process is used to perform the random experiment.

```julia
using DataFrames
true_parms = (δ=-1.0, β=1.0, γ=.7, θ=1.5)
n_trials = 100
optimizer = Optimizer(;design_list, parm_list, data_list, model)
design = optimizer.best_design
df = DataFrame(design=Symbol[], trial=Int[], mean_δ=Float64[], mean_β=Float64[],
    mean_γ=Float64[], mean_θ=Float64[], std_δ=Float64[], std_β=Float64[],
    std_γ=Float64[], std_θ=Float64[])
new_data = [:optimal, 0, mean_post(optimizer)..., std_post(optimizer)...]
push!(df, new_data)

for trial in 1:n_trials
    data = simulate(true_parms..., design...)
    design = update!(optimizer, data)
    new_data = [:optimal, trial, mean_post(optimizer)..., std_post(optimizer)...]
    push!(df, new_data)
end
```

## Random Experiment

```julia
randomizer = Optimizer(;design_list, parm_list, data_list, model, design_type=Randomize)
design = randomizer.best_design
new_data = [:random, 0, mean_post(randomizer)..., std_post(randomizer)...]
push!(df, new_data)

for trial in 1:n_trials
    data = simulate(true_parms..., design...)
    design = update!(randomizer, data)
    new_data = [:random, trial, mean_post(randomizer)..., std_post(randomizer)...]
    push!(df, new_data)
end
```

## Results

As expected, the figure below shows that the posterior standard deviation of δ is smaller in the optimal experiment than in the random experiment.

```julia
using StatsPlots
@df df plot(:trial, :std_δ, xlabel="trial", ylabel="σ of δ", grid=false,
    group=:design, linewidth=2, ylims=(0,1.5), size=(600,400))
```

<img src="Examples/Monetary_Gambles/results.png" alt="" width="500" height="300">

# References