
Commit 6288ba8

Merge pull request #71 from neurolib-dev/dev/exploration_tools
Exploration and Evolution: improvements
2 parents 7b5f461 + 748b04e

29 files changed (+2044 / -686 lines)

README.md

Lines changed: 15 additions & 5 deletions
@@ -30,13 +30,15 @@
 
 # What is neurolib?
 
-`neurolib` allows you to build, simulate, and optimize your own state-of-the-art whole-brain models. To simulate the neural activity of each brain area, the main implementation provides an advanced neural mass mean-field model of spiking adaptive exponential integrate-and-fire neurons (AdEx) called `aln`. Each brain area is represented by two populations of excitatory and inhibitory neurons. An extensive analysis and validation of the `aln` model can be found in our [paper](https://arxiv.org/abs/1906.00676) and its associated [github page](https://github.com/caglarcakan/stimulus_neural_populations).
+Please read the [gentle introduction](https://caglorithm.github.io/notebooks/neurolib-intro/) to `neurolib` for an overview of the basic functionality and some background information on the science behind whole-brain simulations.
+
+`neurolib` allows you to build, simulate, and optimize your own state-of-the-art whole-brain models. To simulate the neural activity of each brain area, the main implementation provides an advanced neural mass mean-field model of spiking adaptive exponential integrate-and-fire neurons (AdEx) called `aln`. Each brain area is represented by two populations of excitatory and inhibitory neurons. An extensive analysis and validation of the `aln` model can be found in our [paper](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007822) and its associated [github page](https://github.com/caglarcakan/stimulus_neural_populations).
 
 `neurolib` provides a simulation and optimization framework which allows you to easily implement your own neural mass model, simulate fMRI BOLD activity, analyse the results and fit your model to empirical data.
 
 Please reference the following paper if you use `neurolib` for your own research:
 
-**Reference:** Cakan, C., Obermayer, K. (2020). Biophysically grounded mean-field models of neural populations under electrical stimulation. PLOS Computational Biology ([ArXiv](https://arxiv.org/abs/1906.00676)).
+**Reference:** Cakan, C., Obermayer, K. (2020). Biophysically grounded mean-field models of neural populations under electrical stimulation. PLOS Computational Biology ([Link](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007822)).
 
 The figure below shows a schematic of how a brain network can be constructed:
 

@@ -145,7 +147,7 @@ aln.params['duration'] = 5*60*1000 # in ms, simulates for 5 minutes
 
 aln.run(bold=True)
 ```
-This can take several minutes to compute, since we are simulating 82 nodes for 5 minutes realtime. Note that we specified `bold=True` which simulates the BOLD model in parallel to the neuronal model. The resulting firing rates and BOLD functional connectivity looks like this:
+This can take several minutes to compute, since we are simulating 80 brain regions for 5 minutes of real time. Note that we specified `bold=True`, which simulates the BOLD model in parallel to the neuronal model. The resulting firing rates and BOLD functional connectivity look like this:
 <p align="center">
 <img src="https://github.com/neurolib-dev/neurolib/raw/master/resources/gw_simulated.png">
 </p>
@@ -200,13 +202,21 @@ We can plot the results to get something close to a bifurcation diagram!
 
 A detailed example is available as an [IPython Notebook](examples/example-2-evolutionary-optimization-minimal.ipynb).
 
-`neurolib` also implements evolutionary parameter optimization, which works particularly well with brain networks. In an evolutionary algorithm, each simulation is represented as an individual and the parameters of the simulation, for example coupling strengths or noise level values, are represented as the genes of each individual. An individual is a part of a population. In each generation, individuals are evaluated and ranked according to a fitness criterion. For whole-brain network simulations, this could be the fit of the simulated activity to empirical data. Then, individuals with a high fitness value are `selected` as parents and `mate` to create offspring. These offspring undergo random `mutations` of their genes. After all offspring are evaluated, the best individuals of the population are selected to transition into the next generation. This process goes on for a given amount generations until a stopping criterion is reached. This could be a predefined maximum number of generations or when a large enough population with high fitness values is found.
+`neurolib` also implements evolutionary parameter optimization, which works particularly well with brain networks. In an evolutionary algorithm, each simulation is represented as an individual and the parameters of the simulation, for example coupling strengths or noise level values, are represented as the genes of each individual. An individual is part of a population. In each generation, individuals are evaluated and ranked according to a fitness criterion. For whole-brain network simulations, this could be the fit of the simulated activity to empirical data. Then, individuals with a high fitness value are `selected` as parents and `mate` to create offspring. These offspring undergo random `mutations` of their genes. After all offspring are evaluated, the best individuals of the population are selected to transition into the next generation. This process goes on for a given number of generations until a stopping criterion is reached. This could be a predefined maximum number of generations or when a large enough population with high fitness values is found.
+
+An example genealogy tree is shown below. You can see the evolution starting at the top and individuals reproducing generation by generation. The color indicates the fitness.
+
+<p align="center">
+<img src="https://github.com/neurolib-dev/neurolib/raw/master/resources/evolution_tree.png" width="600">
+</p>
+
+`neurolib` makes it very easy to set up your own evolutionary optimization and everything else is handled under the hood. You can choose between two implemented evolutionary algorithms: `adaptive` is a Gaussian mutation and rank selection algorithm with adaptive step size that ensures convergence (a schematic is shown in the image below). `nsga2` is an implementation of the popular multi-objective optimization algorithm by Deb et al. 2002.
 
 <p align="center">
 <img src="https://github.com/neurolib-dev/neurolib/raw/master/resources/evolutionary-algorithm.png" width="600">
 </p>
 
-`neurolib` makes it very easy to set up your own evolutionary optimization and everything else is handled under the hood. Of course, if you like, you can dig deeper, define your own selection, mutation and mating operators. In the following demonstration, we will simply evaluate the fitness of each individual as the distance to the unit circle. After a couple of generations of mating, mutating and selecting, only individuals who are close to the circle should survive:
+Of course, if you like, you can dig deeper and define your own selection, mutation and mating operators. In the following demonstration, we will simply evaluate the fitness of each individual as the distance to the unit circle. After a couple of generations of mating, mutating and selecting, only individuals who are close to the circle should survive:
 
 ```python
 from neurolib.utils.parameterSpace import ParameterSpace
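
For orientation, here is a minimal sketch of the unit-circle demonstration this README passage refers to, following the pattern of the library's own example notebook (population sizes and parameter bounds are illustrative; `getIndividualFromTraj` is the helper used in neurolib's examples):

```python
import numpy as np

from neurolib.utils.parameterSpace import ParameterSpace
from neurolib.optimize.evolution import Evolution

def optimize_me(traj):
    # fetch the genes (parameters) of this individual from the pypet trajectory
    ind = evolution.getIndividualFromTraj(traj)
    # fitness: distance of the point (x, y) to the unit circle
    fitness = abs(np.sqrt(ind.x ** 2 + ind.y ** 2) - 1.0)
    return (fitness,), {}

# the search space consists of two genes, x and y
pars = ParameterSpace(["x", "y"], [[-5.0, 5.0], [-5.0, 5.0]])

# a negative weight means this objective is minimized
evolution = Evolution(optimize_me, pars, weightList=[-1.0],
                      POP_INIT_SIZE=100, POP_SIZE=50, NGEN=10)
evolution.run()
```

After the run, `evolution.info()` summarizes the resulting population, as described further below.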

codecov.yml

Lines changed: 3 additions & 2 deletions
@@ -12,5 +12,6 @@ coverage:
   - "neurolib/models/thalamus/timeIntegration.py"
   - "neurolib/models/bold/timeIntegration.py"
   - "neurolib/utils/devUtils.py"
-status:
-  patch: off
+  - "neurolib/utils/atlases.py"
+status:
+  patch: off

examples/example-1.2-brain-network-exploration.ipynb

Lines changed: 2 additions & 2 deletions
@@ -546,7 +546,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"search.dfResults = eu.processExplorationResults(search.results, search.dfResults, model=model, ds=ds, bold_transient=10000)"
+"eu.processExplorationResults(search, model=model, ds=ds, bold_transient=10000)"
 ]
 },
 {
@@ -842,7 +842,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.3"
+"version": "3.7.4"
 }
 },
 "nbformat": 4,

examples/example-2-evolutionary-optimization-minimal.ipynb

Lines changed: 6 additions & 115 deletions
Large diffs are not rendered by default.

examples/example-2.1-evolutionary-optimization-aln.ipynb

Lines changed: 153 additions & 143 deletions
Large diffs are not rendered by default.

examples/example-2.2-evolution-brain-network-aln-resting-state-fit.ipynb

Lines changed: 76 additions & 38 deletions
@@ -1,5 +1,16 @@
 {
 "cells": [
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"# Evolutionary optimization of a whole-brain model\n",
+"\n",
+"This notebook provides an example for the use of the evolutionary optimization framework built into the library. Under the hood, the implementation of the evolutionary algorithm is powered by `deap`, and `pypet` takes care of the parallelization and storage of the simulation data for us.\n",
+"\n",
+"We want to optimize a whole-brain network that should produce simulated BOLD activity (fMRI data) that is similar to the empirical dataset. We measure the fitness of each simulation by computing the `func.matrix_correlation` of the functional connectivity `func.fc(model.BOLD.BOLD)` to the empirical data `ds.FCs`. The simulations that are closest to the empirical data get a higher fitness and have a higher chance of reproducing and surviving. "
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -44,28 +55,42 @@
 "plt.rcParams['image.cmap'] = 'plasma'"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"We create a brain network model using the empirical dataset `ds`:"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 12,
 "metadata": {},
 "outputs": [],
 "source": [
-"aln = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat) # simulates the whole-brain model in 10s chunks by default if bold == True\n",
+"model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat) # simulates the whole-brain model in 10s chunks by default if bold == True\n",
 "# Resting state fits\n",
-"aln.params['mue_ext_mean'] = 1.57\n",
-"aln.params['mui_ext_mean'] = 1.6\n",
-"aln.params['sigma_ou'] = 0.09\n",
-"aln.params['b'] = 5.0\n",
-"aln.params['signalV'] = 2\n",
-"aln.params['dt'] = 0.2\n",
-"aln.params['duration'] = 0.2 * 60 * 1000 #ms\n",
+"model.params['mue_ext_mean'] = 1.57\n",
+"model.params['mui_ext_mean'] = 1.6\n",
+"model.params['sigma_ou'] = 0.09\n",
+"model.params['b'] = 5.0\n",
+"model.params['signalV'] = 2\n",
+"model.params['dt'] = 0.2\n",
+"model.params['duration'] = 0.2 * 60 * 1000 #ms\n",
 "# testing: aln.params['duration'] = 0.2 * 60 * 1000 #ms\n",
 "# real: aln.params['duration'] = 1.0 * 60 * 1000 #ms"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Our evaluation function will do the following: first, it simulates the model for a short time to see whether there is sufficient activity. This speeds up the evolution considerably, since large regions of the state space show almost no neuronal activity. Only then do we simulate the model for the full duration and compute the fitness using the empirical dataset."
+]
+},
 {
 "cell_type": "code",
-"execution_count": 13,
+"execution_count": 1,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -79,26 +104,16 @@
 " \n",
 " # Stage 1 : simulate for a few seconds to see if there is any activity\n",
 " # ---------------------------------------\n",
-" model.params['dt'] = 0.1\n",
 " model.params['duration'] = 3*1000.\n",
 " model.run()\n",
 " \n",
 " # check if stage 1 was successful\n",
-" if np.max(aln.rates_exc[:, aln.t > 500]) > 300 or np.max(aln.rates_exc[:, aln.t > 500]) < 10:\n",
-" return invalid_result, {}\n",
-" \n",
-" # Stage 2: simulate BOLD for a few seconds to see if it moves\n",
-" # ---------------------------------------\n",
-" model.params['dt'] = 0.2\n",
-" model.params['duration'] = 20*1000.\n",
-" model.run(bold = True)\n",
-" \n",
-" if np.std(aln.BOLD.BOLD[:, 5:10]) < 0.001:\n",
+" if np.max(model.output[:, model.t > 500]) > 160 or np.max(model.output[:, model.t > 500]) < 10:\n",
 " return invalid_result, {}\n",
+"\n",
 " \n",
-" # Stage 3: full and final simulation\n",
+" # Stage 2: full and final simulation\n",
 " # ---------------------------------------\n",
-" model.params['dt'] = 0.2\n",
 " model.params['duration'] = defaultDuration\n",
 " model.run(chunkwise=True, bold = True)\n",
 " \n",
@@ -118,6 +133,13 @@
 " return fitness_tuple, {}"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"We specify the parameter space that we want to search."
+]
+},
 {
 "cell_type": "code",
 "execution_count": 14,
@@ -128,15 +150,29 @@
 " [[0.0, 3.0], [0.0, 3.0], [0.0, 100.0], [0.0, 0.3], [0.0, 500.0], [0.0, 400.0]])"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Note that we chose `algorithm='nsga2'` when creating the `Evolution`. This will use the multi-objective optimization algorithm by Deb et al. 2002. Although we have only one objective here (namely the FC fit), we could in principle add more objectives, like the `FCD` matrix fit or other objectives. For this, we would have to add these values to the fitness in the evaluation function above and add more `weights` in the definition of the `Evolution`. We use positive weights for objectives to be maximized and negative ones for minimization. Please refer to the [DEAP documentation](https://deap.readthedocs.io/) for more information."
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
-"evolution = Evolution(evaluateSimulation, pars, weightList = [1.0] * len(ds.BOLDs), model = aln, POP_INIT_SIZE=4, POP_SIZE = 4, NGEN=2, filename=\"example-2.2.hdf\")\n",
-"#testing: evolution = Evolution(evaluateSimulation, pars, weightList = [1.0] * len(ds.BOLDs), model = aln, POP_INIT_SIZE=4, POP_SIZE = 4, NGEN=2)\n",
-"# real: evolution = Evolution(evaluateSimulation, pars, weightList = [1.0] * len(ds.BOLDs), model = aln, POP_INIT_SIZE=1600, POP_SIZE = 160, NGEN=100)"
+"evolution = Evolution(evaluateSimulation, pars, algorithm = 'nsga2', weightList = [1.0] * len(ds.BOLDs), model = model, POP_INIT_SIZE=4, POP_SIZE = 4, NGEN=2, filename=\"example-2.2.hdf\")\n",
+"#testing: evolution = Evolution(evaluateSimulation, pars, algorithm = 'nsga2', weightList = [1.0] * len(ds.BOLDs), model = model, POP_INIT_SIZE=4, POP_SIZE = 4, NGEN=2)\n",
+"# real: evolution = Evolution(evaluateSimulation, pars, algorithm = 'nsga2', weightList = [1.0] * len(ds.BOLDs), model = model, POP_INIT_SIZE=1600, POP_SIZE = 160, NGEN=100)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"That's it, we can run the evolution now."
 ]
 },
 {
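
As a hedged illustration of the multi-objective setup described in the new markdown cell, a second objective could be added like this; the `fcd_distance` value and its computation are hypothetical placeholders, not part of this PR:

```python
# one weight per empirical FC matrix (maximize), plus one weight for a
# hypothetical FCD distance (minimize)
weights = [1.0] * len(ds.BOLDs) + [-1.0]
evolution = Evolution(evaluateSimulation, pars, algorithm='nsga2',
                      weightList=weights, model=model,
                      POP_INIT_SIZE=4, POP_SIZE=4, NGEN=2)
```

The evaluation function would then return a correspondingly longer fitness tuple, e.g. `fitness_tuple + (fcd_distance,)`.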
@@ -148,13 +184,27 @@
 "evolution.run(verbose = False)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"We could now save the full evolution object for later analysis using `evolution.saveEvolution()`."
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "# Analysis"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"The `info()` method gives us a useful overview of the evolution: a summary of the evolution parameters, the statistics of the population, and scatterplots of the individuals in our search space."
+]
+},
 {
 "cell_type": "code",
 "execution_count": 17,
@@ -297,18 +347,6 @@
 "plt.xlabel(\"Generation #\")\n",
 "plt.ylabel(\"Score\")"
 ]
-},
-{
-"cell_type": "code",
-"execution_count": 26,
-"metadata": {},
-"outputs": [],
-"source": [
-"# if we were lazy, we could just dump the whole evolution object into a file:\n",
-"# import dill\n",
-"# fname = os.path.join(\"data/\", \"evolution-\" + evolution.trajectoryName + \".dill\")\n",
-"# dill.dump(evolution, open(\"~evolution.dill\", \"wb\"))"
-]
 }
 ],
 "metadata": {
@@ -327,7 +365,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.4"
+"version": "3.7.3"
 }
 },
 "nbformat": 4,

neurolib/models/bold/timeIntegration.py

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ def simulateBOLD(Z, dt, voxelCounts, X=None, F=None, Q=None, V=None):
     if "voxelCounts" not in globals():
         voxelCounts = np.ones((N,))
 
-    # Balloon-Windkessel model parameters (Deco 2013, Friston 2003):
+    # Balloon-Windkessel model parameters (Deco 2013, Friston 2000):
     # Note: the distribution of each Balloon-Windkessel model parameter is given per voxel.
     # Since we usually average the empirical fMRI of each voxel for a given area, the standard
     # deviation of the Gaussian distribution should be divided by the number of voxels in each area

neurolib/models/model.py

Lines changed: 1 addition & 1 deletion
@@ -92,7 +92,7 @@ def simulateBold(self, t, variables, append=False):
             self.setOutput("BOLD.BOLD", BOLD)
         else:
             logging.warn(
-                f"Will not simulate BOLD if output {bold_input.shape[1]} not at least of duration {self.boldModel.samplingRate_NDt*self.params['dt']}"
+                f"Will not simulate BOLD if output {bold_input.shape[1]*self.params['dt']} not at least of duration {self.boldModel.samplingRate_NDt*self.params['dt']}"
             )
     else:
         logging.warn("BOLD model not initialized, not simulating BOLD. Use `run(bold=True)`")

neurolib/optimize/evolution/deapUtils.py

Lines changed: 10 additions & 3 deletions
@@ -13,6 +13,14 @@
 # return ParametersInterval(*(individual[: len(paramInterval)]))._asdict().copy()
 
 
+def randomParameters(paramInterval):
+    """
+    Generate a sequence of random parameters from a ParamsInterval using a uniform distribution.
+    Format: [par1, par2, ...]
+    """
+    params = [np.random.uniform(*pI) for pI in paramInterval]
+    return params
+
 def randomParametersAdaptive(paramInterval):
     """
     Generate a sequence of random parameters from a ParamsInterval using a uniform distribution.
@@ -28,7 +36,7 @@ def randomParametersAdaptive(paramInterval):
     return params
 
 
-def mutateUntilValid(pop, paramInterval, toolbox, maxTries=0):
+def mutateUntilValid(pop, paramInterval, toolbox, MUTATE_P={}, maxTries=100):
     """Checks the validity of new individuals' parameters. If they are invalid
     (for example if they are out of the predefined parameter space bounds),
     mutate the individual until it is valid.
@@ -42,7 +50,7 @@ def mutateUntilValid(pop, paramInterval, toolbox, maxTries=0):
     for i, ind in enumerate(pop):
 
         ind_bak = copy.copy(ind)
-        toolbox.mutate(pop[i])
+        toolbox.mutate(pop[i], **MUTATE_P)
 
         nMutations = 0
         while not checkParamValidity(pop[i], paramInterval) and nMutations < maxTries:
@@ -329,4 +337,3 @@ def gaussianAdaptiveMutation_nStepSizes(individual, gamma_gl=None, gamma=None):
     individual[:] = newParams + newSigmas
 
     return (individual,)
-
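
The new `MUTATE_P` argument forwards extra keyword arguments to the registered mutation operator. A sketch of how a caller might use it, assuming `pop` and `paramInterval` come from the surrounding evolution setup and the `gamma_gl` value is purely illustrative:

```python
from deap import base

toolbox = base.Toolbox()
# register the adaptive Gaussian mutation from this module as the mutation operator
toolbox.register("mutate", gaussianAdaptiveMutation_nStepSizes)

# mutate the population until all individuals lie within the parameter bounds,
# threading an illustrative step-size scaling through MUTATE_P
mutateUntilValid(pop, paramInterval, toolbox, MUTATE_P={"gamma_gl": 0.1}, maxTries=100)
```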
