Fix typos in paper (#113)
pitmonticone authored May 3, 2024
1 parent fd89ad9 commit 926f92b
Showing 4 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion paper/sections/delays/Delays.md
@@ -19,7 +19,7 @@ In this work we propose a delay learning algorithm that is simple and efficient.

The DDL is based mainly on a 1D version of the spatial transformer network (STN) \citealp{JSZK2015}. The STN is a differentiable module that can be added to convolutional neural network (CNN) architectures to give them the ability to spatially transform feature maps in a differentiable way. This addition leads to CNN models that are invariant to various spatial transformations such as translation, scaling and rotation. Image manipulations are inherently not differentiable, because pixels are discrete. However, this problem is overcome by applying an interpolation (for example, bilinear) after the spatial transformation.

- The DDL is a 1D version of the spatial transformer where the only transformation done is translation. Translation of a spike along the time dimension can be thought of as a translation of a pixel along the spatial coordinates. The general affine transformation matrix for the 2D case takes the form in the following euqation:
+ The DDL is a 1D version of the spatial transformer where the only transformation done is translation. Translation of a spike along the time dimension can be thought of as a translation of a pixel along the spatial coordinates. The general affine transformation matrix for the 2D case takes the form in the following equation:

$$ \begin{bmatrix}
sr_1 & sr_2 & t_x\\
sr_3 & sr_4 & t_y
\end{bmatrix} $$
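The translation-plus-interpolation idea behind the DDL can be illustrated in one dimension: shift a discretized signal by a real-valued delay, with linear interpolation making the shift well defined for non-integer delays. The sketch below is a minimal NumPy forward pass under assumed conventions (the `delay_shift` helper and its zero-padding behaviour are hypothetical, not the paper's implementation; in practice the same arithmetic would be written in an autodiff framework so gradients reach the delay parameter):

```python
import numpy as np

def delay_shift(x, delay):
    """Shift 1D signal x by a real-valued delay (in samples).

    Uses linear interpolation between the two nearest input samples;
    positions that fall outside the input range are set to zero.
    """
    T = len(x)
    t = np.arange(T)
    src = t - delay                                   # where each output samples the input
    lo = np.clip(np.floor(src).astype(int), 0, T - 1) # left neighbour index
    hi = np.clip(lo + 1, 0, T - 1)                    # right neighbour index
    frac = np.clip(src - lo, 0.0, 1.0)                # interpolation weight
    mask = ((src >= 0) & (src <= T - 1)).astype(float)
    return mask * ((1.0 - frac) * x[lo] + frac * x[hi])
```

An integer delay moves a spike exactly (a spike at step 3 delayed by 2.0 lands at step 5), while a fractional delay such as 1.5 splits the spike's mass between the two neighbouring time steps, which is what makes the delay parameter trainable by gradient descent.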
2 changes: 1 addition & 1 deletion paper/sections/discussion.md
@@ -1,2 +1,2 @@
## Participants and participation
- * Insentives, allocating credit
+ * Incentives, allocating credit
2 changes: 1 addition & 1 deletion paper/sections/new_inh_model.md
@@ -4,7 +4,7 @@
Why a more biologically inspired model?
- Existence of both ITDs and ILDs for azimuthal sound localization
- Problem of ITD sensitivity: their duration is in the range of microseconds whereas neuronal dynamics work in the millisecond range.
- - Jeffress model existance in mammals is not sure: 3 main critical issues
+ - Jeffress model existence in mammals is not sure: 3 main critical issues
- axonal delay lines absence in mammalian MSO
- contralateral inhibition role not considered in Jeffress
- peaks of MSO responses outside physiological range: slopes as the encoding part of the curves
4 changes: 2 additions & 2 deletions paper/sections/science.md
@@ -47,7 +47,7 @@
* Third, use filter-and-fire neurons (e.g. M = 3 connections per axon).

## Learning delays
- Many studies which incoroprate axonal and/or dendritic delays include them as non-learnable parameters (refs) like our base model. Here we explore how these phase delays can be learned through two approaches.
+ Many studies which incorporate axonal and/or dendritic delays include them as non-learnable parameters (refs) like our base model. Here we explore how these phase delays can be learned through two approaches.

### With dilated convolutions with learnable spacings (DCLS)
First, with DCLS (Hammouamri et al., 2023; Khalfaoui-Hassani et al., 2023).
@@ -57,7 +57,7 @@ Key points:
* Learns both weights and delays.
* Visualisation of results:
* x - learned delay, y - learned weight.
- * Hidden units seperate data spatio-temporally.
+ * Hidden units separate data spatio-temporally.

### With a differentiable delay layer (DDL)
Second, by introducing a differentiable delay layer.
