### 15 May 2023: Foundation Models for Robustness to Distribution Shifts<br>
author and presenter: Ananya Kumar (Stanford)
...
...
Ananya is a final year PhD student at Stanford University advised by Percy Liang.
## Past Meetings
### 17 April 2023: Dendritic modulation enables multitask representation learning in hierarchical sensory processing pathways<br>
While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further along the hierarchy can extract useful features for each possible contextual state. Here, we first demonstrate that thin dendritic branches are well suited to implementing contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to thin dendrites can solve linearly non-separable learning problems with a Hebbian, error-modulated learning rule. Finally, we demonstrate that local prediction of whether representations originate from different inputs or from different contextual modulations of the same input results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.
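As a rough intuition for the second result, here is a minimal, hypothetical NumPy sketch (not the authors' model or code): a single unit with fixed, context-independent feedforward weights onto several "dendritic branches", each scaled by a learned context-specific gain, trained with an error-modulated Hebbian update on an XOR-style task. The names, dimensions, and sigmoid readout are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_branches = 4, 8
W = rng.normal(size=(n_branches, n_in))   # fixed, context-independent feedforward weights
gains = np.ones((2, n_branches))          # one multiplicative gain vector per context

# Two input patterns whose target flips with context: target = pattern XOR context.
# With context as a purely additive input this is linearly non-separable, but
# per-context branch gains reduce each context to a separable subproblem.
patterns = rng.normal(size=(2, n_in))

def forward(x, c):
    drive = W @ x                          # branch-local feedforward drive (weights never change)
    out = 1.0 / (1.0 + np.exp(-gains[c] @ drive))
    return out, drive

lr = 0.1
for _ in range(2000):
    p, c = rng.integers(2), rng.integers(2)
    y, drive = forward(patterns[p], c)
    err = float(p ^ c) - y                 # global, scalar error signal
    gains[c] += lr * err * drive           # Hebbian (presynaptic drive) x error:
                                           # only the modulatory gains are plastic

for p in (0, 1):
    for c in (0, 1):
        y, _ = forward(patterns[p], c)
        print(f"pattern={p} context={c} target={p ^ c} output={y:.2f}")
```

After training, only the per-context gains differ while the feedforward weights stay fixed, mirroring the abstract's claim that stable weights encode prior knowledge that transfers across contexts.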
### 20 March 2023: Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels<br>
Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey, NeurIPS 2020<br>