NeuroFlame: Fast Learning And Modeling Engine
Copyright © 2024 by Melanie Tschiersch. All rights reserved.
Introduction
This package provides a recurrent neural network trainer and simulator built on PyTorch. Networks can have multiple neural populations with different connectivity types (all-to-all, sparse) and structures (feature selective, spatially tuned, low rank).
Network models can be trained in an unsupervised or supervised manner, just like vanilla RNNs in PyTorch.
Installation
pip install -r requirements.txt
or alternatively using conda (I recommend mamba via miniforge, a C++ implementation of conda)
mamba install --file conda_requirements.txt
Project Structure
.
├── conf                 # contains configuration files in yaml format.
│   └── *.yml
├── notebooks            # contains ipython notebooks.
│   ├── setup.py
│   └── *.ipynb
├── org                  # contains org notebooks.
│   ├── doc/*.org
│   └── *.org
└── src                  # contains source code.
    ├── activation.py    # contains custom activation functions.
    ├── connectivity.py  # contains custom connectivity profiles.
    ├── decode.py
    ├── lif_network.py   # implementation of a LIF network.
    ├── lif_neuron.py
    ├── lr_utils.py      # utils for low rank networks.
    ├── network.py       # core of the project.
    ├── plasticity.py    # contains STP.
    ├── plot_utils.py
    ├── sparse.py        # utils for large sparse matrices.
    ├── stimuli.py       # contains custom stimuli for behavioral tasks.
    ├── train.py         # utils to train networks.
    └── utils.py
Network Dynamics
Dynamics
- Currents
Neuron \(i\) in population \(A\) receives a recurrent input \(h^A_i\). If synapses are dynamical, the currents follow
\[ \tau_{syn} \frac{dh^A_i}{dt}(t) = - h^A_i(t) + \sum_{jB} J^{AB}_{ij} r^B_j(t) \]
otherwise the currents are instantaneous,
\[ h^A_i(t) = \sum_{jB} J^{AB}_{ij} r^B_j(t) \]
where \(r^B_j\) is the rate of presynaptic unit \(j\) in population \(B\), defined below.
- Rates
The model can have rate dynamics (set RATE_DYN to 1 in the configuration file):
\[ \tau_A \frac{d r^A_i}{dt}(t) = - r^A_i(t) + \Phi\left( h^A_i(t) + h^A_{ext}(t) \right) \]
Here, \(r^A_i\) is the rate of unit \(i\) in population \(A\) and \(h^A_{ext}\) is its external input.
Otherwise, rates are instantaneous:
\[ r^A_i(t) = \Phi\left( h^A_i(t) + h^A_{ext}(t) \right) \]
Here, \(\Phi\) is the transfer function defined in src/activation.py; it can be set to a threshold-linear, a sigmoid, or a nonlinear function (Brunel et al., 2003).
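For illustration, here is a minimal sketch of how the rate dynamics above can be integrated with a forward Euler scheme. This is not NeuroFlame's integrator (see src/network.py for that); the single population, the threshold-linear \(\Phi\), and all parameter values are assumptions made for the example.

import torch

# toy parameters (assumptions, not NeuroFlame defaults)
N, dt, tau = 100, 0.1, 1.0
J = torch.randn(N, N) / N**0.5   # recurrent connectivity
h_ext = torch.ones(N)            # external input
r = torch.zeros(N)               # rates

def phi(x):
    # threshold-linear transfer function, one of the options in src/activation.py
    return torch.relu(x)

for _ in range(1000):
    h = J @ r                                  # recurrent input h_i = sum_j J_ij r_j
    r = r + dt / tau * (-r + phi(h + h_ext))   # Euler step of the rate dynamics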
Connectivity
The connectivities available in NeuroFlame are described here.
Probability of connection from population B to A:
- Sparse Nets
By default, unit \(j\) in population \(B\) connects to unit \(i\) in population \(A\) with probability
\[ P_{ij}^{AB} = \frac{K_B}{N_B} \]
The connection probability can also be cosine-modulated,
\[ P_{ij}^{AB} = \frac{K_B}{N_B} \left( 1 + \kappa_B \cos(\theta_i^A - \theta_j^B) \right) \]
and a low rank structure is also available (see src/lr_utils.py). The synaptic strengths are then
\[ J_{ij}^{AB} = \begin{cases} \dfrac{J_{AB}}{\sqrt{K_B}} & \text{with probability } P_{ij}^{AB} \\ 0 & \text{otherwise} \end{cases} \]
- All-to-all
\[ J_{ij}^{AB} = \frac{J_{AB}}{N_B} P_{ij}^{AB} \]
where \(P_{ij}^{AB}\) can be any of the profiles above. A sketch of both constructions follows.
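As an illustration, here is one way to sample these profiles with PyTorch. This is only a sketch under assumed parameter values; the actual constructions live in src/connectivity.py and may differ in detail.

import torch

N_A, N_B = 500, 500                  # post- and pre-synaptic population sizes
K_B, J_AB, kappa_B = 50.0, 1.0, 0.5  # assumed values for the example

theta_A = torch.linspace(0, 2 * torch.pi, N_A)   # preferred features in A
theta_B = torch.linspace(0, 2 * torch.pi, N_B)   # preferred features in B

# cosine modulation profile
mod = 1 + kappa_B * torch.cos(theta_A[:, None] - theta_B[None, :])

# sparse net: J_AB / sqrt(K_B) with probability P_ij, 0 otherwise
P = K_B / N_B * mod
mask = torch.rand(N_A, N_B) < P
J_sparse = mask * J_AB / K_B**0.5

# all-to-all net: J_AB / N_B, here scaled by the cosine profile
J_dense = J_AB / N_B * mod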
Network Simulations
Basic Usage
Here is how to run a simulation:
# import the network class
from src.network import Network

# define the repository root
repo_root = '/'

# choose a config file
conf_file = './conf/conf_EI.yml'

# other parameters can be overwritten with kwargs;
# kwargs can be any of the args in the config file
kwargs = {}

# initialize the model
model = Network(conf_file, repo_root, **kwargs)

# run a forward pass
rates = model()
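Any argument from the configuration file can be overridden at initialization. For example, RATE_DYN (mentioned above) can be switched on directly:

# override a config argument with a kwarg
model = Network(conf_file, repo_root, RATE_DYN=1)
rates = model()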
Parallel Network Simulations
I describe in detail how to run a network simulation and how to use NeuroFlame to efficiently run simulations for different parameters in parallel here.
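As a naive baseline, a sequential parameter sweep looks like the sketch below. SOME_PARAM is a hypothetical placeholder for whichever config argument you want to vary, not a real config key; NeuroFlame's batched simulations are the efficient way to do this.

from src.network import Network

conf_file, repo_root = './conf/conf_EI.yml', '/'

rates = {}
for value in (0.5, 1.0, 2.0):
    # SOME_PARAM is a placeholder, not a real config key
    model = Network(conf_file, repo_root, SOME_PARAM=value)
    rates[value] = model()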
Network Training
Here, I show how to train networks.
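Since the model is a torch module, a vanilla PyTorch training loop applies. The sketch below uses an arbitrary constant target and an MSE loss purely for illustration (the returned shapes are assumed); the actual training utilities are in src/train.py.

import torch
from src.network import Network

conf_file, repo_root = './conf/conf_EI.yml', '/'
model = Network(conf_file, repo_root)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

target = torch.ones_like(model())   # illustrative target, not a real task

for epoch in range(100):
    optimizer.zero_grad()
    rates = model()                                      # forward pass
    loss = torch.nn.functional.mse_loss(rates, target)   # illustrative loss
    loss.backward()                                      # backprop through the simulation
    optimizer.step()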
Tutorials
Balanced Networks
Here is a tutorial on balanced networks.
Short Term Plasticity
Here is a tutorial on STP with NeuroFlame.
Behavioral Tasks
Here is a tutorial on how to use different stimuli to get the model to perform different behavioral tasks.
Serial Bias
Here is a tutorial on how to get serial bias in a balanced network model.
Contributing
Feel free to contribute.
MIT License

Copyright (c) 2023 A. Mahrach