Commit
small edits
thesamovar committed Jan 23, 2025
1 parent 6737144 commit f4ed5b1
Showing 2 changed files with 6 additions and 5 deletions.
2 changes: 1 addition & 1 deletion paper/paper.md
@@ -113,7 +113,7 @@ downloads:


+++ {"part": "abstract"}
Neuroscientists are increasingly initiating large-scale collaborations which bring together tens to hundreds of researchers. At this scale, such projects can tackle large-scale challenges and engage a wide range of participants. Inspired by projects in pure mathematics, we set out to test the feasibility of widening access to such projects even further, by running a massively collaborative project in computational neuroscience. The key difference from prior neuroscientific efforts was that our entire project (code, results, writing) was public from the outset, and that anyone could participate. To achieve this, we launched a public Git repository with code for training spiking neural networks to solve a sound localisation task via surrogate gradient descent. We then invited anyone, anywhere to use this code as a springboard for exploring questions of interest to them, and encouraged participants to share their work both asynchronously through Git and synchronously at monthly online workshops. Our hope was that the resulting range of participants would allow us to make discoveries that a single team would have been unlikely to find. At a scientific level, our work investigated how a range of biologically-relevant parameters, from time delays to membrane time constants and levels of inhibition, could impact sound localisation in networks of spiking units. At a more macro level, our project brought together 31 researchers from multiple countries, provided hands-on research experience to early career participants, and opportunities for supervision and teaching to later career participants. While our scientific results were not groundbreaking, our project demonstrates the potential for massively collaborative projects to transform neuroscience.
+++

# Introduction
9 changes: 5 additions & 4 deletions paper/sections/TCA/analysis.md
@@ -12,26 +12,27 @@ To explore the spiking activity of the hidden units in our simple neural network
* A neuron factor - describing how strongly associated each neuron is with each component.
* A time factor - indicating how the activity of each component changes within a trial.
* A trial factor - denoting how active each component is on each trial.

Notably, the number of components, termed the rank, is a hyperparameter that requires some consideration.
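For readers unfamiliar with the method, the factorisation described above can be sketched in a few lines of NumPy. This is a minimal illustrative nonnegative CP decomposition fitted via multiplicative updates, not the Tensortools implementation used in the project; the tensor layout (trials × neurons × time) and all names here are assumptions.

```python
import numpy as np

def nonneg_tca(X, rank, n_iter=300, eps=1e-9, seed=0):
    """Fit a nonnegative rank-`rank` CP decomposition of a
    (trials x neurons x time) tensor X with multiplicative updates.
    Returns the trial, neuron and time factor matrices."""
    K, N, T = X.shape
    rng = np.random.default_rng(seed)
    A = rng.random((K, rank))  # trial factors: activity per trial
    B = rng.random((N, rank))  # neuron factors: loading per unit
    C = rng.random((T, rank))  # time factors: within-trial profile
    for _ in range(n_iter):
        # Lee-Seung style updates keep every factor nonnegative.
        A *= np.einsum('knt,nr,tr->kr', X, B, C) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
        B *= np.einsum('knt,kr,tr->nr', X, A, C) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
        C *= np.einsum('knt,kr,nr->tr', X, A, B) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
    return A, B, C

def reconstruct(A, B, C):
    """Rebuild the tensor as a sum of rank-1 outer products."""
    return np.einsum('kr,nr,tr->knt', A, B, C)
```

The rank is passed in explicitly, mirroring the hyperparameter choice discussed above; in practice one compares reconstruction error across several candidate ranks.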

### Methods
To acquire the necessary data, we trained the basic model and recorded the spiking activity of its hidden layer. We then smoothed each unit's activity over time using a Gaussian kernel, and applied nonnegative tensor component analysis using the [Tensortools library](https://github.com/neurostatslab/tensortools).
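The smoothing step might look something like the following sketch, assuming a binary spike raster of shape (units × time bins); the raster size and kernel sigma are arbitrary placeholders, not the values used in the project.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Toy spike raster: 30 hidden units x 100 time bins (assumed shape).
rng = np.random.default_rng(0)
spikes = (rng.random((30, 100)) < 0.05).astype(float)

# Convolve each unit's spike train with a Gaussian kernel along the
# time axis; sigma is measured in time bins and is arbitrary here.
rates = gaussian_filter1d(spikes, sigma=3.0, axis=1)
```

Smoothing turns discrete spike counts into continuous, nonnegative rate estimates, which is a natural fit for the nonnegativity constraint of the decomposition.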

### Results
As a first pass, we recorded the spikes from a model during training and ran TCA with a single rank ([](#rank1)). The resulting component contained a subset of the hidden units (middle panel), which were more active during trials with positive IPDs (left panel) and tended to show sinusoid-like patterns of activity (right panel).

```{figure} sections/TCA/rank-1.png
:label: rank1
:width: 100%
Applying TCA, with a single rank, to the spikes collected from a single neural network's hidden layer during training. Left: this component's activation (y-axis) across a subset of training trials (x-axis); each trial is coloured by its IPD from $-\pi/2$ (blue) to $\pi/2$ (yellow). Middle: of the network's 30 hidden units (x-axis), only a subset are strongly associated (y-axis) with this component. Right: the activity of this component (y-axis) over time (x-axis) within trials resembles a sinusoid.
```

Next, we took a trained network, recorded its spikes in response to a range of IPDs and then used TCA to identify 6 components ([](#rank6)). While some units were associated with multiple components (middle column) and all of the components' temporal factors were generally sinusoid-like (right column), each component's activity was strongly modulated by the trial's IPD (left column). For example, component 4 was strongly active on trials with a negative IPD and virtually inactive on trials with a positive IPD, while component 5 showed the opposite behaviour. Taken together, this suggests that some of the network's hidden units are selectively responsive to particular IPD input signals.
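One simple way to quantify this kind of IPD selectivity is to correlate each component's trial factor with the trial IPDs. The sketch below uses synthetic stand-in data; in practice `trial_factors` would come from the TCA fit, and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_components = 200, 6
ipds = rng.uniform(-np.pi / 2, np.pi / 2, n_trials)  # one IPD per test trial

# Stand-in trial-factor matrix (trials x components). Components 0 and 1
# are built to be positively / negatively IPD-modulated respectively,
# the remaining components are unmodulated noise.
trial_factors = rng.random((n_trials, n_components))
trial_factors[:, 0] += ipds - ipds.min()
trial_factors[:, 1] += ipds.max() - ipds

# Pearson correlation between each component's activation and the IPD.
corrs = np.array([np.corrcoef(ipds, trial_factors[:, c])[0, 1]
                  for c in range(n_components)])
```

Components with a large absolute correlation, like components 4 and 5 above, are candidates for IPD-selective subpopulations; a rank correlation or a regression against circular IPD would be a natural refinement.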

Finally, we experimented with training multiple networks, analysing their spiking activity with TCA and comparing the results. Our preliminary analysis of these data can be found [here](../../../research/TCA-analysis.ipynb).

```{figure} sections/TCA/rank-6.png
:label: rank6
:width: 100%
TCA analysis, with 6 ranks, of a trained network's spiking in response to a range of IPDs. Each row shows one of six identified components. Left column: each component's trial factor, i.e. its activation (y-axis) across a set of test trials (x-axis). Each test trial is coloured by its IPD from $-\pi/2$ (blue) to $\pi/2$ (yellow). Middle column: of the network's 30 hidden units (x-axis), slightly different subsets are associated with each component. Right: the activity of each component (y-axis) over time (x-axis) within trials.
```
