
Commit 9e71263

Author: Alexander Ororbia
Commit message: cleaned up dyn-syn/lesson
Parent: 0a5eb08

File tree: 7 files changed (+37 −39 lines)

docs/images/tutorials/neurocog/expsyn.jpg

File mode changed (100644 → 100755).

docs/museum/rl_snn.md (+2 −3)
@@ -13,9 +13,8 @@ exhibit can be found
 
 Operant conditioning refers to the idea that there are environmental stimuli that can either increase or decrease the occurrence of (voluntary) behaviors; in other words, positive stimuli can lead to future repeats of a certain behavior whereas negative stimuli can lead to (i.e., punish) a decrease in future occurrences. Ultimately, operant conditioning, through consequences, shapes voluntary behavior: actions followed by rewards are repeated while actions followed by punishing/negative outcomes diminish.
 
-In this lesson, we will model a very simple case of operant conditioning for a neuronal motor circuit used to engage in the navigation of a simple maze. The maze's design will be the rat T-maze and the "rat" will be allowed to move, at a particular point in the maze, in one of four directions (North, South, West, and East). A positive reward will be supplied to our rat neuronal circuit if it makes progress towards the direction of food (placed in the upper right corner of the T-maze) and a negative reward will be provided if it fails to make progress/gets stuck, i.e., a dense reward function will be employed.
-
-
+In this lesson, we will model a very simple case of operant conditioning for a neuronal motor circuit used to engage in the navigation of a simple maze.
+The maze's design will be the rat T-maze and the "rat" will be allowed to move, at a particular point in the maze, in one of four directions (North, South, West, and East). A positive reward will be supplied to our rat neuronal circuit if it makes progress towards the direction of food (placed in the upper right corner of the T-maze) and a negative reward will be provided if it fails to make progress/gets stuck, i.e., a dense reward function will be employed. For the exhibit code that goes with this lesson, an implementation of this T-maze environment is provided, modeled in the same style/with the same agent API as the OpenAI gymnasium.
 
 ### Reward-Modulated Spike-Timing-Dependent Plasticity (R-STDP)
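To make the gymnasium-style agent API referenced above concrete, here is a minimal toy stand-in for such a T-maze environment. Everything in this sketch (class name, state encoding, reward magnitudes, action numbering) is an illustrative assumption; the exhibit's actual implementation may differ.

```python
import numpy as np

# Toy stand-in for the exhibit's T-maze: a vertical stem joined to a horizontal
# top arm, with food in the upper-right corner. Illustrative assumption only.
class ToyTMaze:
    def __init__(self, stem_len=4, arm_len=3):
        self.stem_len, self.arm_len = stem_len, arm_len
        self.goal = (arm_len, stem_len)  # food at the upper-right end of the arm
        self.reset()

    def reset(self):
        self.pos = (0, 0)  # episode starts at the bottom of the stem
        return np.asarray(self.pos)

    def _valid(self, x, y):
        on_stem = (x == 0 and 0 <= y <= self.stem_len)
        on_arm = (y == self.stem_len and -self.arm_len <= x <= self.arm_len)
        return on_stem or on_arm

    def step(self, action):
        # assumed action encoding: 0=North, 1=South, 2=West, 3=East
        dx, dy = [(0, 1), (0, -1), (-1, 0), (1, 0)][action]
        dist = lambda p: abs(self.goal[0] - p[0]) + abs(self.goal[1] - p[1])
        old_d = dist(self.pos)
        if self._valid(self.pos[0] + dx, self.pos[1] + dy):
            self.pos = (self.pos[0] + dx, self.pos[1] + dy)
        # dense reward: progress toward the food vs. getting stuck/regressing
        reward = 1.0 if dist(self.pos) < old_d else -1.0
        done = (self.pos == self.goal)
        return np.asarray(self.pos), reward, done, {}

env = ToyTMaze()
obs, done = env.reset(), False
while not done:  # random policy purely to exercise the reset/step API
    obs, reward, done, _ = env.step(np.random.randint(4))
```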

docs/tutorials/neurocog/dynamic_synapses.md (+18 −23)
@@ -20,14 +20,10 @@ response (as opposed to a chemical one, e.g., an influx of calcium), such models
 to emulate the time-course of what is known as post-synaptic receptor conductance. Note
 that these dynamic synapse models will end up being a bit more sophisticated than the strength
 value matrices we might initially employ (as in synapse components such as the
-[DenseSynapse](ngclearn.components.synapses.DenseSynapse)).
-
-Building a dynamic synapse can be done by importing
-[ExponentialSynapse](ngclearn.components.synapses.ExponentialSynapse) or
-[AlphaSynapse](ngclearn.components.synapses.AlphaSynapse)
-from ngc-learn's in-built components and setting them up within a model
-context for easy analysis. For the first part of this lesson, we will import
-both of these and compare their behavior.
+[DenseSynapse](ngclearn.components.synapses.denseSynapse)).
+
+Building a dynamic synapse can be done by importing [ExponentialSynapse](ngclearn.components.synapses.exponentialSynapse) or
+[AlphaSynapse](ngclearn.components.synapses.alphaSynapse) from ngc-learn's in-built components and setting them up within a model context for easy analysis. For the first part of this lesson, we will import both of these and compare their behavior.
 This can be done as follows (using the meta-parameters we provide in the
 code block below to ensure reasonable dynamics):

@@ -49,8 +45,6 @@ dkey, *subkeys = random.split(dkey, 6)
 dt = 0.1 # ms ## integration time constant
 T = 8. # ms ## total duration time
 
-Tsteps = int(T/dt) + 1
-
 ## ---- build a dual-synapse system ----
 with Context("dual_syn_system") as ctx:
     Wexp = ExponentialSynapse( ## exponential dynamic synapse
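For readers without the full lesson file at hand, a sketch of how this dual-synapse setup might be completed follows. The constructor keyword names (shape, tau_syn, g_syn_bar, syn_rest) mirror variables visible in this commit's component code but are assumptions, as are the import paths; check them against the ngc-learn API before use.

```python
from jax import random
from ngclearn import Context  # import path assumed
from ngclearn.components import ExponentialSynapse, AlphaSynapse  # export path assumed

dkey = random.PRNGKey(1234)
dkey, *subkeys = random.split(dkey, 6)
dt = 0.1  # ms ## integration time constant
T = 8.    # ms ## total duration time

## ---- build a dual-synapse system (sketch; constructor kwargs assumed) ----
with Context("dual_syn_system") as ctx:
    Wexp = ExponentialSynapse(  ## exponential dynamic synapse
        name="Wexp", shape=(1, 1), tau_syn=3., g_syn_bar=1., syn_rest=0.,
        key=subkeys[0]
    )
    Walpha = AlphaSynapse(  ## alpha dynamic synapse
        name="Walpha", shape=(1, 1), tau_syn=1., g_syn_bar=1., syn_rest=0.,
        key=subkeys[1]
    )
```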
@@ -107,10 +101,10 @@ Finally, we seek to model the electrical current that results from some amount of n
 Thus, for both the exponential and the alpha synapse, the changes in conductance are finally converted (via Ohm's law) to electrical current to produce the final derived variable $j_{\text{syn}}(t)$:
 
 $$
-j_{\text{syn}}(t) = g_{\text{syn}}(t) (v(t) - E_{\text{rest}})
+j_{\text{syn}}(t) = g_{\text{syn}}(t) (v(t) - E_{\text{rev}})
 $$
 
-where $v_{\text{rest}}$ (or $E_{\text{rest}}$) is the post-synaptic reversal potential of the ion channels that mediate the synaptic current; this is typically set to $E_{\text{rest}} = 0$ (millivolts; mV) for the case of excitatory changes and $E_{\text{rest}} = -75$ (mV) for the case of inhibitory changes. $v(t)$ is the voltage/membrane potential of the post-synaptic cell the synaptic cable wires to, meaning that the conductance models above are voltage-dependent (in ngc-learn, if you want voltage-independent conductance, then set `syn_rest = None`).
+where $E_{\text{rev}}$ is the post-synaptic reversal potential of the ion channels that mediate the synaptic current; this is typically set to $E_{\text{rev}} = 0$ (millivolts; mV) for the case of excitatory changes and $E_{\text{rev}} = -75$ (mV) for the case of inhibitory changes. $v(t)$ is the voltage/membrane potential of the post-synaptic cell the synaptic cable wires to, meaning that the conductance models above are voltage-dependent (in ngc-learn, if you want voltage-independent conductance, then set `syn_rest = None`).
 
 
 ### Examining the Conductances of Dynamic Synapses
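As a quick numeric check of the Ohm's-law conversion above (values are made up for this sketch; note that the component code in this commit additionally negates the product, i.e., `i_syn = -g_syn * (v - syn_rest)`):

```python
g_syn = 0.5        # post-synaptic conductance at time t (arbitrary units)
v = -65.0          # post-synaptic membrane potential (mV)
E_rev_exc = 0.0    # excitatory reversal potential (mV)
E_rev_inh = -75.0  # inhibitory reversal potential (mV)

j_exc = g_syn * (v - E_rev_exc)  # 0.5 * (-65 - 0)  = -32.5 -> pulls v toward 0 mV
j_inh = g_syn * (v - E_rev_inh)  # 0.5 * (-65 + 75) = +5.0  -> pulls v toward -75 mV
```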
@@ -121,6 +115,7 @@ To create the simulation of a single input pulse stream, you can write the follo
 ```python
 time_ticks = []
 time_labs = []
+Tsteps = int(T/dt) + 1
 for t in range(Tsteps):
     if t % 10 == 0:
         time_ticks.append(t)
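A self-contained completion of the tick-building snippet above, under the lesson's stated meta-parameters; the label text/format chosen for `time_labs` is an assumption of this sketch:

```python
dt, T = 0.1, 8.0  # ms (the lesson's meta-parameters)
time_ticks, time_labs = [], []
Tsteps = int(T / dt) + 1  # = 81 simulation steps
for t in range(Tsteps):
    if t % 10 == 0:  # one tick every 10 steps, i.e., every 1 ms
        time_ticks.append(t)
        time_labs.append(f"{t * dt:.0f}")  # millisecond label; format assumed
```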
@@ -194,11 +189,11 @@ exponential and alpha synapse conductance trajectories:
 .. table::
    :align: center
 
-+---------------------------------------------------------+-----------------------------------------------------------+
-| .. image:: ../docs/images/tutorials/neurocog/expsyn.jpg | .. image:: ../docs/images/tutorials/neurocog/alphasyn.jpg |
-|    :width: 100px                                        |    :width: 100px                                          |
-|    :align: center                                       |    :align: center                                         |
-+---------------------------------------------------------+-----------------------------------------------------------+
++--------------------------------------------------------+----------------------------------------------------------+
+| .. image:: ../../images/tutorials/neurocog/expsyn.jpg  | .. image:: ../../images/tutorials/neurocog/alphasyn.jpg |
+|    :width: 400px                                       |    :width: 400px                                         |
+|    :align: center                                      |    :align: center                                        |
++--------------------------------------------------------+----------------------------------------------------------+
 ```
 
 Note that the alpha synapse (right figure) would produce a more realistic fit to recorded synaptic currents (as it attempts to model
@@ -213,7 +208,7 @@ and often-used conductance model that is paired with spiking cells such as the l
 we seek to simulate the following post-synaptic conductance dynamics for a single LIF unit:
 
 $$
-\tau_{m} \frac{\partial v(t)}{\partial t} = -(v(t) - E_{L}) - \frac{g_{E}(t)}{g_{L}} (v(t) - E_{E}) - \frac{g_{I}(t)}{g_{L}} (v(t) - E_{I})
+\tau_{m} \frac{\partial v(t)}{\partial t} = -\big( v(t) - E_{L} \big) - \frac{g_{E}(t)}{g_{L}} \big( v(t) - E_{E} \big) - \frac{g_{I}(t)}{g_{L}} \big( v(t) - E_{I} \big)
 $$
 
 where $g_{L}$ is the leak conductance value for the post-synaptic LIF, $g_{E}(t)$ is the post-synaptic conductance produced by excitatory pre-synaptic spike trains (with excitatory synaptic reversal potential $E_{E}$), and $g_{I}(t)$ is the post-synaptic conductance produced by inhibitory pre-synaptic spike trains (with inhibitory synaptic reversal potential $E_{I}$). Note that the first term of the above ODE is the normal leak portion of the LIF's standard dynamics (scaled by conductance factor $g_{L}$) and the last two terms can each be modeled separately with a dynamic synapse. To differentiate between excitatory and inhibitory conductance changes, we simply configure a different reversal potential for each to induce either excitatory (i.e., $E_{\text{syn}} = E_{E} = 0$ mV) or inhibitory (i.e., $E_{\text{syn}} = E_{I} = -80$ mV) pressure/drive.
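A minimal, standalone Euler-integration sketch of this conductance-based LIF voltage ODE follows. The square-pulse conductance traces and constants are chosen purely for illustration; no spike/reset mechanism is included, and this is plain Python rather than ngc-learn's components:

```python
tau_m, g_L = 10.0, 1.0             # membrane time constant (ms), leak conductance
E_L, E_E, E_I = -65.0, 0.0, -80.0  # leak / excitatory / inhibitory reversal potentials (mV)
dt, T = 0.1, 100.0                 # integration step (ms), total duration (ms)

v = E_L  # start the membrane at its leak/resting potential
for step in range(int(T / dt)):
    t = step * dt
    # toy square-pulse conductances standing in for g_E(t), g_I(t), which the
    # lesson instead produces with two dynamic synapses driven by spike trains
    g_E = 0.3 if 20.0 <= t < 60.0 else 0.0
    g_I = 0.2 if 40.0 <= t < 80.0 else 0.0
    dv_dt = (-(v - E_L) - (g_E / g_L) * (v - E_E) - (g_I / g_L) * (v - E_I)) / tau_m
    v = v + dv_dt * dt  # Euler step of the ODE above
```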
@@ -364,10 +359,10 @@ voltage threshold.
 .. table::
    :align: center
 
-+----------------------------------------------------------------------+
-| .. image:: ../docs/images/tutorials/neurocog/ei_circuit_dynamics.jpg |
-|    :width: 100px                                                     |
-|    :align: center                                                    |
-+----------------------------------------------------------------------+
++--------------------------------------------------------------------+
+| .. image:: ../../images/tutorials/neurocog/ei_circuit_dynamics.jpg |
+|    :width: 400px                                                   |
+|    :align: center                                                  |
++--------------------------------------------------------------------+
 ```

docs/tutorials/neurocog/index.rst (+1 −1)

@@ -60,7 +60,7 @@ work towards more advanced concepts.
    :maxdepth: 1
    :caption: Synapses and Forms of Plasticity
 
-   synaptic_conductance
+   dynamic_synapses
    hebbian
    stdp
    mod_stdp

ngclearn/components/synapses/alphaSynapse.py (+1)

@@ -97,6 +97,7 @@ def advance_state(
 
     dgsyn_dt = -g_syn/tau_syn + h_syn # or -g_syn/tau_syn + h_syn/tau_syn
     g_syn = g_syn + dgsyn_dt * dt ## run Euler step to move conductance g
+    g_syn = g_syn * Rscale
 
     i_syn = -g_syn
     if syn_rest is not None:

ngclearn/components/synapses/exponentialSynapse.py (+1)

@@ -92,6 +92,7 @@ def advance_state(
     _out = jnp.matmul(s, weights) ## sum all pre-syn spikes at t going into post-neurons
     dgsyn_dt = -g_syn/tau_syn + _out * g_syn_bar
     g_syn = g_syn + dgsyn_dt * dt ## run Euler step to move conductance
+    g_syn = g_syn * Rscale
     i_syn = -g_syn
     if syn_rest is not None:
         i_syn = -g_syn * (v - syn_rest)
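Both files above gain the same post-step rescaling, `g_syn = g_syn * Rscale`. For side-by-side reading, here is a standalone sketch of the two Euler updates; shapes and values are illustrative, and the alpha synapse's h-variable update is an assumed form (the diff does not show it):

```python
import jax.numpy as jnp

dt, tau_syn, g_syn_bar, Rscale = 0.1, 3.0, 1.0, 1.0
s = jnp.array([[1.0, 0.0]])    # pre-synaptic spikes at time t (1 x 2 pre-neurons)
weights = jnp.ones((2, 3))     # synaptic strengths (2 pre x 3 post)
g_syn = jnp.zeros((1, 3))      # post-synaptic conductances
h_syn = jnp.zeros((1, 3))      # alpha-synapse intermediate trace

_out = jnp.matmul(s, weights)  # sum all pre-synaptic spikes into each post-neuron

# exponential synapse step (mirrors exponentialSynapse.py above)
dgsyn_dt = -g_syn / tau_syn + _out * g_syn_bar
g_exp = (g_syn + dgsyn_dt * dt) * Rscale  # Euler step, then the newly added rescale

# alpha synapse step (mirrors alphaSynapse.py above)
dh_dt = -h_syn / tau_syn + _out * g_syn_bar  # assumed drive for the h-variable
h_syn = h_syn + dh_dt * dt
dgsyn_dt = -g_syn / tau_syn + h_syn
g_alpha = (g_syn + dgsyn_dt * dt) * Rscale
```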

ngclearn/utils/jaxProcess.py (+14 −12)

@@ -7,9 +7,9 @@
 
 class JaxProcess(Process):
     """
-    The JaxProcess is a subclass of the ngcsimlib Process class. The
-    functionality added by this subclass is the use of the jax scanner to run a
-    process quickly through the use of jax's JIT compiler.
+    The JaxProcess is a subclass of the ngcsimlib Process class. The
+    functionality added by this subclass is the use of the jax scanner to run a
+    process quickly through the use of jax's JIT compiler.
     """
 
     def __init__(self, name):

@@ -30,8 +30,9 @@ def _pure(current_state, x):
     def watch(self, compartment):
         """
         Adds a compartment to the process to watch during a scan
+
         Args:
-            compartment: The compartment to watch
+            compartment: the compartment to watch
         """
         if not isinstance(compartment, Compartment):
             warn(

@@ -51,12 +52,12 @@ def transition(self, transition_call):
         """
         Appends to the base transition call to create pure method for use by its
         scanner
-        Args:
-            transition_call: the transition being passed into the default
-                process
 
-        Returns: this JaxProcess instance for chaining
+        Args:
+            transition_call: the transition being passed into the default process
 
+        Returns:
+            this JaxProcess instance for chaining
         """
         super().transition(transition_call)
         self._process_scan_method = self._make_scanner()

@@ -93,10 +94,11 @@ def scan(self, save_state=True, scan_length=None, **kwargs):
 
 
         Args:
-            save_state: A boolean flag to indicate if the model state should be
-                saved
-            scan_length: a value to be used to denote the number of iterations
-                of the scanner if all keyword arguments are passed as ints or floats
+            save_state: A boolean flag to indicate if the model state should be saved
+
+            scan_length: a value to be used to denote the number of iterations of the scanner if all keyword
+                arguments are passed as ints or floats
+
             **kwargs: the required keyword arguments for the process to run
 
         Returns: the final state of the model, the stacked output of the scan method
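A hypothetical usage sketch of this class, inferred only from the docstrings in this diff; `advance_transition` and `cell.v` are illustrative placeholders, not verified objects from the library:

```python
# Hypothetical sketch only -- placeholders, not verified ngc-learn API usage.
proc = JaxProcess(name="advance_process")
proc.transition(advance_transition)  # returns this JaxProcess instance for chaining
proc.watch(cell.v)                   # record this compartment at each scan step

# with all kwargs passed as ints/floats, scan_length sets the iteration count
final_state, stacked = proc.scan(save_state=True, scan_length=200, t=0., dt=0.1)
```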
