year: 2021
paper: https://proceedings.neurips.cc/paper/2021/file/d79c8788088c2193f0244d8f1f36d2db-Paper.pdf (Semantic Scholar version)
website:
code:
connections: chaos theory, astrocyte, liquid state machine, reservoir computing, critical state, biologically inspired
This paper introduces a new approach to improving the performance of liquid state machines (LSMs), a type of brain-inspired machine learning model, by incorporating astrocyte-modulated plasticity. A summary of the key points:
Astrocytes in the brain:
Astrocytes are non-neuronal brain cells that were traditionally overlooked but are now recognized as key modulators of brain networks. Each astrocyte integrates the activity of thousands of synapses and feeds back to neurons by modulating their synaptic plasticity. They are also implicated in switching between cognitive states and in tuning brain networks to operate near critical phase transitions.
Astrocyte modeling and role in this paper:
The authors develop a leaky-integrate-and-modulate (LIM) astrocyte model. This model:
Integrates activity from input and liquid neurons
Provides global feedback to spike-timing-dependent plasticity (STDP)
Helps organize the liquid state machine dynamics around a critical branching factor associated with the edge of chaos
Modulates the STDP depression learning rate based on network activity
The astrocyte model helps improve LSM performance by self-organizing the network dynamics near criticality, which is computationally optimal.
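To make the LIM dynamics concrete, here is a minimal Python sketch of a discrete-time leaky-integrate-and-modulate step. The variable names (`astro_state`, `tau_astro`, `target_state`) and the linear modulation rule are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

def lim_astrocyte_step(astro_state, input_spikes, liquid_spikes,
                       tau_astro=100.0, dt=1.0):
    """One leaky-integrate-and-modulate (LIM) step.

    The astrocyte leakily integrates total spiking activity from the
    input and liquid populations (illustrative form; see the paper
    for the exact equations).
    """
    total_activity = input_spikes.sum() + liquid_spikes.sum()
    # Leaky integration of network activity.
    astro_state += dt / tau_astro * (-astro_state + total_activity)
    return astro_state

def modulated_depression_rate(astro_state, base_rate=0.01,
                              target_state=1.0):
    """Scale the STDP depression learning rate by astrocyte state.

    When integrated activity exceeds the target (network drifting
    toward the super-critical/chaotic phase), depression strengthens
    to damp activity; below target it weakens. The linear scaling is
    an assumption for illustration.
    """
    return base_rate * (astro_state / target_state)
```

The key design idea this captures is that the astrocyte provides slow, global feedback: it does not gate individual spikes, but shifts the balance of STDP depression for the whole liquid based on aggregate activity.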
Excitation vs. inhibition:
The paper models both excitatory and inhibitory neurons in the liquid, with a ratio of 80% excitatory to 20% inhibitory neurons. This balance of excitation and inhibition is crucial for generating the desired network dynamics. The inhibitory connections have negative weights, while excitatory connections have positive weights. The presence of both types of neurons helps create the necessary conditions for complex, edge-of-chaos dynamics.
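A minimal sketch of how such a signed, sparse liquid weight matrix could be initialized. The 80/20 split is from the paper; the connection probability, weight scale, and liquid size used here are illustrative assumptions:

```python
import numpy as np

def init_liquid_weights(n_neurons=135, frac_excitatory=0.8,
                        p_connect=0.1, w_scale=1.0, seed=0):
    """Sparse recurrent weights with signed, Dale's-law-style columns:
    excitatory neurons project positive weights, inhibitory negative."""
    rng = np.random.default_rng(seed)
    n_exc = int(frac_excitatory * n_neurons)
    # Random sparse connectivity mask, no self-connections.
    mask = rng.random((n_neurons, n_neurons)) < p_connect
    np.fill_diagonal(mask, False)
    w = rng.random((n_neurons, n_neurons)) * w_scale * mask
    # Sign by presynaptic neuron type (columns = presynaptic sources):
    # first n_exc neurons excitatory (+), the rest inhibitory (-).
    sign = np.ones(n_neurons)
    sign[n_exc:] = -1.0
    return w * sign[np.newaxis, :]
```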
“Edge of chaos dynamics”:
The “edge of chaos” refers to a critical state between order and chaos in dynamical systems. In the context of this paper:
It’s associated with a critical branching factor slightly above 1.0
At this point, the network achieves optimal computational capabilities
It’s characterized by a balance between stability and flexibility in information processing
The astrocyte model helps tune the network to operate near this critical point
Operating at the edge of chaos allows the network to have both learning and memory capabilities, maximizing information processing capacity and optimizing dynamical range.
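One common way to check how close a liquid operates to this critical point is to estimate an empirical branching factor as the average ratio of spikes at consecutive time steps. The estimator below is a standard sketch under that assumption, not necessarily the paper's exact measurement:

```python
import numpy as np

def branching_factor(spike_raster, eps=1e-9):
    """Estimate the branching factor sigma from a binary spike raster
    of shape (timesteps, neurons): the mean ratio of descendant spikes
    at t+1 to ancestor spikes at t.

    sigma < 1: sub-critical (activity decays)
    sigma > 1: super-critical (activity amplifies)
    sigma ~ 1: critical / edge of chaos
    """
    counts = spike_raster.sum(axis=1).astype(float)
    ancestors, descendants = counts[:-1], counts[1:]
    valid = ancestors > 0  # only steps with at least one ancestor spike
    return float(np.mean(descendants[valid] / (ancestors[valid] + eps)))
```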
LSMs, their limitations, and the critical state:
LSMs avoid training via backpropagation by using a sparse, recurrent, spiking neural network (the liquid) with fixed synaptic connection weights to project inputs into a high-dimensional space from which a single neural layer can learn the correct outputs. Yet these advantages over deep networks come at the expense of 1) sub-par accuracy and 2) extensive data-specific hand-tuning of liquid weights. Interestingly, these two limitations have been targeted by several studies that tackle one or the other, but not both.
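To illustrate the overall pipeline (fixed random liquid, trained linear readout only), here is a minimal sketch. The liquid is written as a rate-style recurrent map rather than a full spiking simulation, purely for illustration; the ridge readout is a conventional choice, not necessarily the paper's:

```python
import numpy as np

def run_liquid(inputs, w_in, w_rec, leak=0.3):
    """Project an input sequence of shape (timesteps, in_dim) through a
    fixed random recurrent 'liquid'; nothing here is trained."""
    state = np.zeros(w_rec.shape[0])
    states = []
    for x in inputs:
        state = (1 - leak) * state + leak * np.tanh(w_in @ x + w_rec @ state)
        states.append(state.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-3):
    """Ridge-regression readout: the only trained component of the LSM."""
    n = states.shape[1]
    return np.linalg.solve(states.T @ states + ridge * np.eye(n),
                           states.T @ targets)

# Usage: predictions = run_liquid(test_inputs, w_in, w_rec) @ w_out
```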
As a general heuristic, LSM accuracy is maximized when LSM dynamics are positioned at the edge-of-chaos and specifically in the vicinity of a critical phase transition that separates: 1) the sub-critical phase, where network activity decays, and 2) the super-critical (chaotic) phase, where network activity gets exponentially amplified. Strikingly, brain networks have also been found to operate near a critical phase transition that is modeled as a branching process. Current LSM tuning methods organize network dynamics at the critical branching factor by adding forward and backward communication channels on top of the liquid. This, however, results in significant increases in training complexity and violates the LSM’s brain-inspired self-organization principles. For example, these methods lack local plasticity rules that are widely observed in the brain and considered a key component for both biological and neuromorphic learning.
A particular local learning rule, spike-timing-dependent plasticity (STDP), is known to improve LSM accuracy. Yet current methods of incorporating STDP into LSMs further exacerbate the limitation of data-specific hand-tuning, as they require additional mechanisms to compensate for the STDP-imposed saturation of synaptic weights. This highlights the scarcity of LSM tuning methods that are both computationally efficient and data-independent.
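To see why unconstrained STDP pushes weights toward saturation, consider a standard pair-based additive STDP update: with fixed potentiation/depression amplitudes, weights tend to pile up at their bounds unless something regulates them (here, the astrocyte-modulated depression rate sketched above). The trace-based form and parameters below are conventional textbook choices, not the paper's exact rule:

```python
import numpy as np

def stdp_update(w, pre_trace, post_trace, pre_spike, post_spike,
                a_plus=0.01, a_minus=0.012, w_max=1.0):
    """Additive pair-based STDP for one synapse.

    pre_trace/post_trace are exponentially decaying eligibility traces
    of recent pre-/postsynaptic spikes. Potentiation triggers on a post
    spike, depression on a pre spike. With fixed a_plus/a_minus the
    weight drifts toward 0 or w_max; scaling a_minus with astrocyte
    feedback is one way to counter this saturation.
    """
    if post_spike:   # pre-before-post pairing: potentiate
        w += a_plus * pre_trace
    if pre_spike:    # post-before-pre pairing: depress
        w -= a_minus * post_trace
    return float(np.clip(w, 0.0, w_max))
```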