Decoding Finger Velocity from Cortical Spike Trains with Recurrent Spiking Neural Networks
All models were implemented using the stork library for training spiking neural networks and trained using surrogate gradients. Hyperparameters were optimized based on the average validation set performance across all sessions.
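Surrogate gradient training replaces the derivative of the spiking threshold (a Heaviside step, whose true derivative is zero almost everywhere) with a smooth surrogate in the backward pass. As a minimal NumPy illustration of the idea, a common surrogate is the derivative of a fast sigmoid; the steepness parameter `beta` below is an illustrative choice, not a value taken from this work:

```python
import numpy as np

def heaviside(v, threshold=1.0):
    """Forward pass: a spike is emitted when the membrane potential crosses threshold."""
    return (v >= threshold).astype(np.float32)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: derivative of a fast sigmoid, used in place of the
    Heaviside step's derivative so errors can flow through spikes."""
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2

v = np.array([-0.5, 0.9, 1.0, 1.5])
spikes = heaviside(v)       # spikes only where v >= threshold
grads = surrogate_grad(v)   # smooth, peaks at the threshold
```

In an actual training loop (e.g. in stork/PyTorch), the surrogate is wired into autograd so the forward pass stays binary while gradients use the smooth function.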
The model was designed to achieve a good trade-off between R² score and computational complexity. It consists of a single recurrent spiking neural network (SNN) layer with 64 leaky integrate-and-fire (LIF) neurons. The input layer size matches the number of electrode channels for each monkey, and the readout layer consists of two leaky integrator units, one each for the X and Y coordinates.
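The architecture above can be sketched as a discrete-time recurrence. This is a generic NumPy illustration, not the stork implementation: the input size, random weights, and time constants are all assumed for the example (channel counts differ per monkey), and the readout units integrate hidden-layer spikes without spiking themselves:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 96, 64, 2   # 96 input channels is illustrative
dt, tau_mem, tau_out = 1e-3, 20e-3, 50e-3
alpha = np.exp(-dt / tau_mem)    # membrane decay of the LIF units
kappa = np.exp(-dt / tau_out)    # decay of the leaky integrator readout

W_in  = rng.normal(0, 0.1, (n_hid, n_in))   # input -> hidden
W_rec = rng.normal(0, 0.1, (n_hid, n_hid))  # recurrent hidden -> hidden
W_out = rng.normal(0, 0.1, (n_out, n_hid))  # hidden -> readout

def run(spike_input):
    """spike_input: (T, n_in) array of binned electrode spikes.
    Returns (T, n_out) decoded X/Y trace from the leaky readout."""
    v = np.zeros(n_hid)   # LIF membrane potentials
    s = np.zeros(n_hid)   # hidden spikes from the previous step
    y = np.zeros(n_out)   # readout state
    out = []
    for x in spike_input:
        v = alpha * v + W_in @ x + W_rec @ s
        s = (v >= 1.0).astype(float)   # threshold crossing emits a spike
        v = v * (1.0 - s)              # reset membrane after a spike
        y = kappa * y + W_out @ s      # non-spiking leaky integrator readout
        out.append(y.copy())
    return np.array(out)

vel = run(rng.binomial(1, 0.05, (100, n_in)))
```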
To further reduce the computational complexity of the model, we applied an additional activity regularization loss to the hidden-layer spike trains during training, which penalizes firing rates above 10 Hz. To enforce connection sparsity, we used an iterative pruning strategy on the synaptic weights: at each pruning iteration, the N smallest-magnitude synaptic weights in each weight matrix were set to zero and the network was re-trained. Finally, after training, the model was converted to half-precision floating point to reduce its memory footprint and speed up inference.
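The three compression steps can be sketched in NumPy. This is an illustrative sketch, not the stork/PyTorch training code: the exact penalty form and the pruning count `n` are assumptions, and in the real procedure pruning alternates with re-training:

```python
import numpy as np

def rate_penalty(spikes, dt=1e-3, limit_hz=10.0):
    """Upper-bound activity regularizer: penalize per-neuron mean firing
    rates that exceed limit_hz (squared hinge; the exact form is assumed)."""
    rates = spikes.mean(axis=0) / dt                      # rates in Hz
    return np.mean(np.clip(rates - limit_hz, 0.0, None) ** 2)

def prune_smallest(W, n):
    """One pruning iteration: zero out the n smallest-magnitude weights.
    Training would then resume on the sparsified matrix."""
    Wp = W.copy()
    flat_idx = np.argsort(np.abs(Wp), axis=None)[:n]
    Wp.flat[flat_idx] = 0.0
    return Wp

rng = np.random.default_rng(1)
spikes = rng.binomial(1, 0.02, (1000, 64))  # 1 s of ~20 Hz activity
penalty = rate_penalty(spikes)              # positive: 20 Hz > 10 Hz limit

W = rng.normal(0, 0.1, (64, 64))
W_pruned = prune_smallest(W, 100)           # 100 synapses removed
W_half = W_pruned.astype(np.float16)        # post-training half precision
```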
Launching this artifact will open it within Chameleon’s shared Jupyter experiment environment, which is accessible to all Chameleon users with an active allocation.
Download with git
Clone the git repository for this artifact and check out the version's commit:
git clone https://github.com/Tansil011019/neural-decoding-RSNN-reproduce.git
cd neural-decoding-RSNN-reproduce
git checkout 92cce682d2bec777d61418c7273dfd1c8d8f748c