|
1 | 1 | { |
2 | 2 | "cells": [ |
| 3 | + { |
| 4 | + "cell_type": "markdown", |
| 5 | + "metadata": {}, |
| 6 | + "source": [ |
| 7 | + "# Evolutionary optimization of a whole-brain model\n", |
| 8 | + "\n", |
| 9 | + "This notebook provides an example of how to use the evolutionary optimization framework built into the library. Under the hood, the evolutionary algorithm is powered by `deap`, while `pypet` takes care of parallelization and storage of the simulation data for us.\n", |
| 10 | + "\n", |
| 11 | + "We want to optimize a whole-brain network so that the simulated BOLD activity (fMRI data) it produces is similar to the empirical dataset. We measure the fitness of each simulation by computing the `func.matrix_correlation` of the simulated functional connectivity `func.fc(model.BOLD.BOLD)` to the empirical data `ds.FCs`. Individuals that are closest to the empirical data receive a higher fitness and thus have a higher chance of survival and reproduction." |
| 12 | + ] |
| 13 | + }, |
3 | 14 | { |
4 | 15 | "cell_type": "code", |
5 | 16 | "execution_count": null, |
|
44 | 55 | "plt.rcParams['image.cmap'] = 'plasma'" |
45 | 56 | ] |
46 | 57 | }, |
| 58 | + { |
| 59 | + "cell_type": "markdown", |
| 60 | + "metadata": {}, |
| 61 | + "source": [ |
| 62 | + "We create a brain network model using the empirical dataset `ds`:" |
| 63 | + ] |
| 64 | + }, |
47 | 65 | { |
48 | 66 | "cell_type": "code", |
49 | 67 | "execution_count": 12, |
50 | 68 | "metadata": {}, |
51 | 69 | "outputs": [], |
52 | 70 | "source": [ |
53 | | - "aln = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat) # simulates the whole-brain model in 10s chunks by default if bold == True\n", |
| 71 | + "model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat) # simulates the whole-brain model in 10s chunks by default if bold == True\n", |
54 | 72 | "# Resting state fits\n", |
55 | | - "aln.params['mue_ext_mean'] = 1.57\n", |
56 | | - "aln.params['mui_ext_mean'] = 1.6\n", |
57 | | - "aln.params['sigma_ou'] = 0.09\n", |
58 | | - "aln.params['b'] = 5.0\n", |
59 | | - "aln.params['signalV'] = 2\n", |
60 | | - "aln.params['dt'] = 0.2\n", |
61 | | - "aln.params['duration'] = 0.2 * 60 * 1000 #ms\n", |
| 73 | + "model.params['mue_ext_mean'] = 1.57\n", |
| 74 | + "model.params['mui_ext_mean'] = 1.6\n", |
| 75 | + "model.params['sigma_ou'] = 0.09\n", |
| 76 | + "model.params['b'] = 5.0\n", |
| 77 | + "model.params['signalV'] = 2\n", |
| 78 | + "model.params['dt'] = 0.2\n", |
| 79 | + "model.params['duration'] = 0.2 * 60 * 1000 #ms\n", |
62 | 80 | "# testing: aln.params['duration'] = 0.2 * 60 * 1000 #ms\n", |
63 | 81 | "# real: aln.params['duration'] = 1.0 * 60 * 1000 #ms" |
64 | 82 | ] |
65 | 83 | }, |
| 84 | + { |
| 85 | + "cell_type": "markdown", |
| 86 | + "metadata": {}, |
| 87 | + "source": [ |
| 88 | + "Our evaluation function does the following: first, it simulates the model for a short time to check whether there is sufficient activity at all. This speeds up the evolution considerably, since large regions of the parameter space show almost no neuronal activity. Only then do we simulate the model for the full duration and compute the fitness against the empirical dataset." |
| 89 | + ] |
| 90 | + }, |
66 | 91 | { |
67 | 92 | "cell_type": "code", |
68 | | - "execution_count": 13, |
| 93 | + "execution_count": 1, |
69 | 94 | "metadata": {}, |
70 | 95 | "outputs": [], |
71 | 96 | "source": [ |
|
79 | 104 | " \n", |
80 | 105 | " # Stage 1 : simulate for a few seconds to see if there is any activity\n", |
81 | 106 | " # ---------------------------------------\n", |
82 | | - " model.params['dt'] = 0.1\n", |
83 | 107 | " model.params['duration'] = 3*1000.\n", |
84 | 108 | " model.run()\n", |
85 | 109 | " \n", |
86 | 110 | " # check if stage 1 was successful\n", |
87 | | - " if np.max(aln.rates_exc[:, aln.t > 500]) > 300 or np.max(aln.rates_exc[:, aln.t > 500]) < 10:\n", |
88 | | - " return invalid_result, {}\n", |
89 | | - " \n", |
90 | | - " # Stage 2: simulate BOLD for a few seconds to see if it moves\n", |
91 | | - " # ---------------------------------------\n", |
92 | | - " model.params['dt'] = 0.2\n", |
93 | | - " model.params['duration'] = 20*1000.\n", |
94 | | - " model.run(bold = True)\n", |
95 | | - " \n", |
96 | | - " if np.std(aln.BOLD.BOLD[:, 5:10]) < 0.001:\n", |
| 111 | + " if np.max(model.output[:, model.t > 500]) > 160 or np.max(model.output[:, model.t > 500]) < 10:\n", |
97 | 112 | " return invalid_result, {}\n", |
| 113 | + "\n", |
98 | 114 | " \n", |
99 | | - " # Stage 3: full and final simulation\n", |
| 115 | + " # Stage 2: full and final simulation\n", |
100 | 116 | " # ---------------------------------------\n", |
101 | | - " model.params['dt'] = 0.2\n", |
102 | 117 | " model.params['duration'] = defaultDuration\n", |
103 | 118 | " model.run(chunkwise=True, bold = True)\n", |
104 | 119 | " \n", |
|
118 | 133 | " return fitness_tuple, {}" |
119 | 134 | ] |
120 | 135 | }, |
| 136 | + { |
| 137 | + "cell_type": "markdown", |
| 138 | + "metadata": {}, |
| 139 | + "source": [ |
| 140 | + "We specify the parameter space that we want to search." |
| 141 | + ] |
| 142 | + }, |
121 | 143 | { |
122 | 144 | "cell_type": "code", |
123 | 145 | "execution_count": 14, |
|
128 | 150 | " [[0.0, 3.0], [0.0, 3.0], [0.0, 100.0], [0.0, 0.3], [0.0, 500.0], [0.0, 400.0]])" |
129 | 151 | ] |
130 | 152 | }, |
| 153 | + { |
| 154 | + "cell_type": "markdown", |
| 155 | + "metadata": {}, |
| 156 | + "source": [ |
| 157 | + "Note that we chose `algorithm='nsga2'` when creating the `Evolution`. This uses the multi-objective optimization algorithm NSGA-II by Deb et al. (2002). Although we have only one objective here (namely the FC fit), we could in principle add more objectives, such as a fit of the `FCD` matrix. For this, we would have to add these values to the fitness tuple in the evaluation function above and add corresponding `weights` in the definition of the `Evolution`: a positive weight means the objective is maximized, a negative one means it is minimized. Please refer to the [DEAP documentation](https://deap.readthedocs.io/) for more information." |
| 158 | + ] |
| 159 | + }, |
131 | 160 | { |
132 | 161 | "cell_type": "code", |
133 | 162 | "execution_count": null, |
134 | 163 | "metadata": {}, |
135 | 164 | "outputs": [], |
136 | 165 | "source": [ |
137 | | - "evolution = Evolution(evaluateSimulation, pars, weightList = [1.0] * len(ds.BOLDs), model = aln, POP_INIT_SIZE=4, POP_SIZE = 4, NGEN=2, filename=\"example-2.2.hdf\")\n", |
138 | | - "#testing: evolution = Evolution(evaluateSimulation, pars, weightList = [1.0] * len(ds.BOLDs), model = aln, POP_INIT_SIZE=4, POP_SIZE = 4, NGEN=2)\n", |
139 | | - "# real: evolution = Evolution(evaluateSimulation, pars, weightList = [1.0] * len(ds.BOLDs), model = aln, POP_INIT_SIZE=1600, POP_SIZE = 160, NGEN=100)" |
| 166 | + "evolution = Evolution(evaluateSimulation, pars, algorithm = 'nsga2', weightList = [1.0] * len(ds.BOLDs), model = model, POP_INIT_SIZE=4, POP_SIZE = 4, NGEN=2, filename=\"example-2.2.hdf\")\n", |
| 167 | + "#testing: evolution = Evolution(evaluateSimulation, pars, algorithm = 'nsga2', weightList = [1.0] * len(ds.BOLDs), model = model, POP_INIT_SIZE=4, POP_SIZE = 4, NGEN=2)\n", |
| 168 | + "# real: evolution = Evolution(evaluateSimulation, pars, algorithm = 'nsga2', weightList = [1.0] * len(ds.BOLDs), model = model, POP_INIT_SIZE=1600, POP_SIZE = 160, NGEN=100)" |
| 169 | + ] |
| 170 | + }, |
| 171 | + { |
| 172 | + "cell_type": "markdown", |
| 173 | + "metadata": {}, |
| 174 | + "source": [ |
| 175 | + "That's it. We can now run the evolution." |
140 | 176 | ] |
141 | 177 | }, |
142 | 178 | { |
|
148 | 184 | "evolution.run(verbose = False)" |
149 | 185 | ] |
150 | 186 | }, |
| 187 | + { |
| 188 | + "cell_type": "markdown", |
| 189 | + "metadata": {}, |
| 190 | + "source": [ |
| 191 | + "We could now save the full evolution object for later analysis using `evolution.saveEvolution()`." |
| 192 | + ] |
| 193 | + }, |
151 | 194 | { |
152 | 195 | "cell_type": "markdown", |
153 | 196 | "metadata": {}, |
154 | 197 | "source": [ |
155 | 198 | "# Analysis" |
156 | 199 | ] |
157 | 200 | }, |
| 201 | + { |
| 202 | + "cell_type": "markdown", |
| 203 | + "metadata": {}, |
| 204 | + "source": [ |
| 205 | + "The `info()` method gives us a useful overview of the evolution, including a summary of the evolution parameters, statistics of the population, and scatterplots of the individuals in our search space." |
| 206 | + ] |
| 207 | + }, |
158 | 208 | { |
159 | 209 | "cell_type": "code", |
160 | 210 | "execution_count": 17, |
|
297 | 347 | "plt.xlabel(\"Generation #\")\n", |
298 | 348 | "plt.ylabel(\"Score\")" |
299 | 349 | ] |
300 | | - }, |
301 | | - { |
302 | | - "cell_type": "code", |
303 | | - "execution_count": 26, |
304 | | - "metadata": {}, |
305 | | - "outputs": [], |
306 | | - "source": [ |
307 | | - "# if we were lazy, we could just dump the whole evolution object into a file:\n", |
308 | | - "# import dill\n", |
309 | | - "# fname = os.path.join(\"data/\", \"evolution-\" + evolution.trajectoryName + \".dill\")\n", |
310 | | - "# dill.dump(evolution, open(\"~evolution.dill\", \"wb\"))" |
311 | | - ] |
312 | 350 | } |
313 | 351 | ], |
314 | 352 | "metadata": { |
|
327 | 365 | "name": "python", |
328 | 366 | "nbconvert_exporter": "python", |
329 | 367 | "pygments_lexer": "ipython3", |
330 | | - "version": "3.7.4" |
| 368 | + "version": "3.7.3" |
331 | 369 | } |
332 | 370 | }, |
333 | 371 | "nbformat": 4, |
|