Clean notebooks (#111)
* Update 4-Quick_Start.ipynb

* Update 2-Questions.md

* Update Workshop_1_Write_Up.md

* Update Solving_problem_with_delay_learning.ipynb

* Update Workshop_1_Write_Up.md
pitmonticone authored May 3, 2024
1 parent bae3dc1 commit 69d2f42
Showing 4 changed files with 13 additions and 13 deletions.
2 changes: 1 addition & 1 deletion research/2-Questions.md
@@ -47,7 +47,7 @@ Starting point:
- _Mingxuan_: I'd like to experiment with this idea.
* Which level of biological realisms does the learning need to comply to? Which observables are available at the synapse as input to a learning rule?
- _Danish_: Very interesting question that I would like to explore.
-* Level of biological realism and impact on performace - Dale law.
+* Level of biological realism and impact on performance - Dale law.
- _Jose_: Looking into this
* Distribution of inhibitory and excitatory neurons using Dale's law
- _Sara_: First exploratory and interesting results, warrant further investigation
2 changes: 1 addition & 1 deletion research/4-Quick_Start.ipynb
@@ -22,7 +22,7 @@
"metadata": {},
"source": [
"TODO:\n",
-"* Add a few lines of doccumentation per function (Inputs and outputs)"
+"* Add a few lines of documentation per function (Inputs and outputs)"
]
},
{
18 changes: 9 additions & 9 deletions research/Solving_problem_with_delay_learning.ipynb
@@ -10,7 +10,7 @@
"\n",
"\n",
"Here, I showcase the solution to the sound localization problem using only differentiable delays. For this project, this is the fruit of the work done on differentiable delays. \n",
-"I truly grateful for everyone that I interacted with in this project. For me, it was a nice experience and I hope we can do similar projectes to tackle different projects in the future."
+"I truly grateful for everyone that I interacted with in this project. For me, it was a nice experience and I hope we can do similar projects to tackle different projects in the future."
]
},
{
@@ -34,7 +34,7 @@
"\n",
"Classes:\n",
"    Delaylayer: defines the delaylayer object\n",
-"    Delayupdate: defines the object responisble for the application of surrogate delay updates\n",
+"    Delayupdate: defines the object responsible for the application of surrogate delay updates\n",
"\"\"\""
],
"metadata": {
@@ -52,7 +52,7 @@
"output_type": "execute_result",
"data": {
"text/plain": [
-"'Solving the sound localization problem with only differentiable delays (non-spiking)\\n\\nFunctions:\\n    input_signal: outputs poisson generated spike trains for a given input IPD\\n    get_batch: generate a fixed size patch of input-targets from the input-signal function\\n    snn_sl: defines the synaptic integration function\\n    analyse: a visualization function for the results of the training\\n\\nClasses:\\n    Delaylayer: defines the delaylayer object\\n    Delayupdate: defines the object responisble for the application of surrogate delay updates\\n'"
+"'Solving the sound localization problem with only differentiable delays (non-spiking)\\n\\nFunctions:\\n    input_signal: outputs poisson generated spike trains for a given input IPD\\n    get_batch: generate a fixed size patch of input-targets from the input-signal function\\n    snn_sl: defines the synaptic integration function\\n    analyse: a visualization function for the results of the training\\n\\nClasses:\\n    Delaylayer: defines the delaylayer object\\n    Delayupdate: defines the object responsible for the application of surrogate delay updates\\n'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
@@ -149,15 +149,15 @@
"NB_EPOCHS = 20000 \n",
"BATCH_SIZE = 200\n",
"device = device = torch.device(\"cpu\")\n",
-"\"\"\"Delay paramters and functions\"\"\"\n",
+"\"\"\"Delay parameters and functions\"\"\"\n",
"MAX_DELAY = 20 # Assumed to be in ms\n",
"NUMBER_INPUTS = 2 # Number of input spikes trains corresponding to the two ears\n",
"EFFECTIVE_DURATION = MAX_DELAY * 3 + int(np.round(DURATION / DT))\n",
"TAU, TAU_DECAY, TAU_MINI, TAU_DECAY_FLAG = 40, 0.005, 5, True # Time constant decay settings\n",
"ROUND_DECIMALS = 4 # For the stability of the delay layer\n",
"FIX_FIRST_INPUT = True # Fix the first input in delay learning\n",
"SHOW_IMAGE = True # Visualization of the whole raster plot or target spikes\n",
-"SYNAPSE_TYPE = 1 # 0 for multiplicaitve, 1 for subtractive\n",
+"SYNAPSE_TYPE = 1 # 0 for multiplicative, 1 for subtractive\n",
"LR_DELAY = 2 # Learning rate for the differentiable delays"
]
},
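As a concrete reading of the `EFFECTIVE_DURATION` line in the hunk above — `DURATION` and `DT` are defined outside this hunk, so the values below are hypothetical:

```python
import numpy as np

MAX_DELAY = 20          # ms, from the hunk above
DURATION, DT = 100, 1   # hypothetical: a 100 ms stimulus sampled at 1 ms steps

# The stimulus window is padded with three MAX_DELAY-wide margins.
EFFECTIVE_DURATION = MAX_DELAY * 3 + int(np.round(DURATION / DT))
print(EFFECTIVE_DURATION)  # 160 time steps
```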
@@ -251,7 +251,7 @@
"\n",
"    Returns:\n",
"        inputs(torch.Tensor, (BATCH_SIZE, NUMBER_CLASSES, NUMBER_INPUTS, EFFECTIVE_DURATION)): A batch of input spike trains\n",
-"        targets(torch.Tensor, (BATCH_SIZE, NUMBER_CLASSES)): A batch of one hot incoded targets\n",
+"        targets(torch.Tensor, (BATCH_SIZE, NUMBER_CLASSES)): A batch of one hot encoded targets\n",
" \"\"\"\n",
" inputs, targets = [], []\n",
" for _ in range(BATCH_SIZE):\n",
@@ -329,7 +329,7 @@
"    self.constant_delays(boolen): the initialized delays all have a constant value\n",
"    self.constant_value(int): the initialized delays constant value\n",
"    self.lr_delay(float): learning rate for the differentiable delays\n",
-"    self.effective_duration(int): length of the agumented input duration in ms\n",
+"    self.effective_duration(int): length of the augmented input duration in ms\n",
" self.delays_out(int, (NUMBER_INPUTS, NUMBER_CLASSES)): the initialized delay array\n",
" self.optimizer_delay(torch.optim): the backprop optimizer for the differentiable delays\n",
" \"\"\"\n",
@@ -1985,7 +1985,7 @@
"# torch.manual_seed(0)\n",
"# torch.autograd.set_detect_anomaly(True)\n",
"delay_layer = DelayLayer(lr_delay=LR_DELAY, constant_delays=False, constant_value=0, max_delay_in=MAX_DELAY) # A delay layer object\n",
-"delay_fn = DelayUpdate.apply # An object that medisate the application of surrogate updates to the differentiable delays\n",
+"delay_fn = DelayUpdate.apply # An object that mediate the application of surrogate updates to the differentiable delays\n",
"optimizer_delay_apply = delay_layer.optimizer_delay \n",
"log_softmax_fn = nn.LogSoftmax(dim=1)\n",
"loss_fn = nn.NLLLoss()\n",
@@ -1999,7 +1999,7 @@
"    X_TRAIN.append(x_local)\n",
"    output = snn_sl(x_local) # Apply the synaptic integration with delays\n",
"    target = []\n",
-"    for i in range(BATCH_SIZE): # Convert the one hot incoded targets to whole numbers for the cross-entropy function\n",
+"    for i in range(BATCH_SIZE): # Convert the one hot encoded targets to whole numbers for the cross-entropy function\n",
" target.append(np.where(y_local[i] > 0.5))\n",
" target = torch.FloatTensor(np.array(target)).squeeze().to(torch.int64)\n",
" Y_TRAIN.append(target)\n",
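For context on the `delay_fn = DelayUpdate.apply` line above: the `DelayUpdate` class itself is not shown in this diff. A minimal sketch of one common way to make integer delays trainable — a straight-through `torch.autograd.Function` that rounds delays on the forward pass and passes gradients through unchanged — offered purely as an assumption about its shape:

```python
import torch

class DelayUpdate(torch.autograd.Function):
    # Sketch only; the notebook's actual implementation is not shown in this diff.
    @staticmethod
    def forward(ctx, delays):
        # Quantize the learnable float delays to whole time steps.
        return torch.round(delays)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: treat the rounding as the identity.
        return grad_output

delay_fn = DelayUpdate.apply
delays = torch.tensor([1.4, 7.8], requires_grad=True)
delay_fn(delays).sum().backward()  # gradients flow despite the rounding
```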
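Two small notes on the training-loop fragment above, with batch and class sizes assumed for illustration: the per-sample `np.where` loop is equivalent to a vectorized `argmax` on one-hot targets, and the `LogSoftmax` + `NLLLoss` pairing computes exactly what `CrossEntropyLoss` does in one step:

```python
import torch
import torch.nn as nn

BATCH_SIZE, NUMBER_CLASSES = 200, 12  # 12 classes is an assumption
y_local = torch.eye(NUMBER_CLASSES)[torch.randint(0, NUMBER_CLASSES, (BATCH_SIZE,))]

# Vectorized replacement for the per-sample np.where loop: argmax over the
# class dimension recovers the integer labels that NLLLoss expects.
target = y_local.argmax(dim=1).to(torch.int64)

# Sanity check: LogSoftmax followed by NLLLoss equals CrossEntropyLoss.
logits = torch.randn(BATCH_SIZE, NUMBER_CLASSES)  # dummy network outputs
loss = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), target)
assert torch.allclose(loss, nn.CrossEntropyLoss()(logits, target))
```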
4 changes: 2 additions & 2 deletions research/Workshop_1_Write_Up.md
@@ -18,7 +18,7 @@ Zachary Friedenberger (Canada, PhD)
1. How do membrane time constants affect network performance?
2. If we train the membrane time constants - what distribution emerges?
3. Do heterogenous time constants improve performance?
-4. What happens if we allow seperate time constants per layer?
+4. What happens if we allow separate time constants per layer?

#### Results
1. Model performance decreases as the membrane time constant increases
@@ -81,5 +81,5 @@ Regularize network spiking (e.g. using a lower and upper bound)
#### Discussion
Before lunch and at the end of the day we regrouped to share our progress. For the latter discussion we were joined by Alessandro Galloni (USA, Postdoc) and Boris Marin (Brazil, Assistant Professor).

-Based on results from the time constants team, we agreed that we should all use a shorter time constant when training the networks and there was a general consensus that we should base our analysis on networks trained until convergence. We agreed that the breakout room format worked well (though 5 may be a resonable limit on team size), and were pleased to hear that those not coding themselves learnt a lot from following along. Looking ahead we decided that we should meet on a monthly basis (starting in September) and agreed that a local meetup format would be great. Ideas for future work included: conductance-based synpases, heterogeneity (e.g. activation functions) and work on a reinforcement learning version of the task.
+Based on results from the time constants team, we agreed that we should all use a shorter time constant when training the networks and there was a general consensus that we should base our analysis on networks trained until convergence. We agreed that the breakout room format worked well (though 5 may be a reasonable limit on team size), and were pleased to hear that those not coding themselves learnt a lot from following along. Looking ahead we decided that we should meet on a monthly basis (starting in September) and agreed that a local meetup format would be great. Ideas for future work included: conductance-based synapses, heterogeneity (e.g. activation functions) and work on a reinforcement learning version of the task.
