Rodrigo Carrasco Davis - The Value of Learning with Cognitive Control
28-07-2022

Whether a novel task is worth learning, how effort may impact learning, and, consequently, how much effort to allocate towards learning are important questions agents face. Here, we propose a model where a control signal can boost learning speed and improve accuracy while incurring a cost whenever control is exerted. The optimal control signal trajectory is computed to maximize the Expected Value of Control (EVC), that is, the expected value function with the cost of exerting control taken into account. Varying the task parameters and the agent's learning capacity shapes the optimal control signal, offering insights into animal and human behavior when learning a new task. In addition, this control model can be viewed as neural modulation, where the control signal acts as a gain modulator that changes the transfer function of specific neurons in a neural network. This allows the network to surpass the limitations imposed by its connectivity and learning rules, and potentially offers a normative account of emergent behaviors such as attention, multi-tasking, shared representations, and flow states.
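
As an illustrative toy sketch of the EVC objective described above (the functional forms, parameter names, and the restriction to a constant control level are assumptions for illustration, not the authors' model), the following computes the control intensity that maximizes accumulated reward minus the accumulated cost of control:

    import numpy as np

    def expected_value_of_control(g, T=50, base_lr=0.05, boost=0.1,
                                  reward_max=1.0, cost_coeff=0.02):
        """Net value of applying a constant control level g over T learning trials."""
        accuracy = 0.0          # proxy for task performance, grows with learning
        total_reward = 0.0
        total_cost = 0.0
        for _ in range(T):
            lr = base_lr + boost * g            # control boosts learning speed
            accuracy += lr * (1.0 - accuracy)   # saturating learning curve
            total_reward += reward_max * accuracy
            total_cost += cost_coeff * g**2     # effort cost of exerting control
        return total_reward - total_cost

    # Grid-search the control intensity that maximizes the EVC-style objective.
    controls = np.linspace(0.0, 5.0, 101)
    values = [expected_value_of_control(g) for g in controls]
    g_star = controls[int(np.argmax(values))]
    print(f"optimal constant control: {g_star:.2f}")

In the full model the control signal is a trajectory over time rather than a constant, but the same trade-off (faster, more accurate learning versus the cost of exerting control) determines the optimum.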

Lorenzo Giambagli - Spectral Tools for training and analysing Neural Networks
14-07-2022

Deep Feedforward Neural Networks (FFNNs) play a central role in the Machine Learning field. They are usually trained in the space of nodes, by adjusting the weights of existing links via suitable optimization protocols. Recently, a radically new approach has been proposed [1]: by anchoring the learning process to reciprocal space, the new targets of the optimization become the eigenvectors and eigenvalues of the transfer operators between layers.

Shifting the focus onto these fundamental mathematical structures has allowed us to understand their pivotal role in training and analyzing NNs. Indeed, when seeking a small subset of trainable parameters capable of carrying out the training procedure, the eigenvalues are what to look for [2]. Choosing them as trainable parameters lets the optimizer adjust several weights in parallel, namely those underlined by the corresponding eigenvector, which in turn makes their after-training interpretation possible.

Firstly, the magnitude of the eigenvalues after training has been shown, empirically and heuristically, to be a proxy for their relevance in the optimization process. Indeed, a precise correspondence between nodes and eigenvalues can be established, leading to a novel pruning procedure: nodes associated with low-magnitude eigenvalues can be removed, yielding a fast and easily implemented network compression algorithm [3].

Secondly, by accounting for the eigenvalues in the optimization process, it is possible to dynamically train sparse networks [2]. Sparsity constraints in the direct space imply that certain weights are filtered out by a mask, so their gradient is zero throughout training. Working in the reciprocal space, however, allows masked weights to still be modified, thanks to the non-local effect of the eigenvalues. This approach yields sparse networks whose topology is not fixed to the initial one, resulting in much more efficient training.
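
A minimal sketch of the spectral idea, under simplifying assumptions (a square layer and fixed random eigenvectors, rather than the exact construction of refs. [1-3]): the layer's weight matrix is parametrized as W = Phi diag(lam) Phi^{-1}, only the eigenvalues lam are treated as trainable, and after training the nodes associated with low-magnitude eigenvalues are pruned.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8                                   # layer width (input dim == output dim here)

    Phi = rng.standard_normal((n, n))       # fixed eigenvector matrix (not trained)
    Phi_inv = np.linalg.inv(Phi)
    lam = rng.standard_normal(n)            # trainable eigenvalues

    def layer_forward(x, lam):
        """Apply the spectrally parametrized layer: y = Phi diag(lam) Phi^{-1} x."""
        W = Phi @ np.diag(lam) @ Phi_inv    # changing one eigenvalue moves many weights at once
        return W @ x

    # After training, rank nodes by eigenvalue magnitude and drop the weakest ones.
    keep = np.argsort(np.abs(lam))[::-1][: n // 2]   # keep the top half
    lam_pruned = np.zeros_like(lam)
    lam_pruned[keep] = lam[keep]            # zeroing an eigenvalue removes its mode

    x = rng.standard_normal(n)
    print(np.linalg.norm(layer_forward(x, lam) - layer_forward(x, lam_pruned)))

Because each eigenvalue controls an entire direction in weight space, updating or masking eigenvalues acts non-locally on many direct-space weights, which is what permits the dynamic sparse training described above.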