Invited Speakers

Mehdi Khamassi

Some applications of the model-based / model-free reinforcement learning framework to Neuroscience and Robotics


SHORT BIO:
Mehdi Khamassi is a permanent research scientist employed by the CNRS and working at the Institute of Intelligent Systems and Robotics, Sorbonne Université, Paris, France. He has been trained with a double background in machine learning / robotics and experimental / computational neuroscience. His main research interests include decision-making, reinforcement learning, performance monitoring, meta-learning, and reward signals in social and non-social contexts.

ABSTRACT:
The model-free reinforcement learning (RL) framework, and in particular Temporal-Difference (TD) learning algorithms, has been successfully applied to Neuroscience for about 20 years. It can account for dopamine reward prediction error signals in simple Pavlovian and single-step decision-making tasks. However, more complex multi-step tasks employed both in Neuroscience and Robotics illustrate its computational limitations.
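To make the link between TD learning and reward prediction errors concrete, here is a minimal TD(0) value-learning sketch. This is an illustrative toy, not code from the talk; all names and parameter values are my own choices. The TD error `delta` plays the role of the dopamine reward prediction error signal mentioned above: it is large when reward is unexpected and shrinks as the prediction improves.

```python
def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD(0) update: V(s) <- V(s) + alpha * delta, where
    delta = r + gamma * V(s') - V(s) is the reward prediction error."""
    delta = reward + gamma * V.get(next_state, 0.0) - V.get(state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * delta
    return delta

# Toy Pavlovian-style episode: a cue is followed by a rewarded outcome.
V = {}
for _ in range(100):
    td0_update(V, "cue", 0.0, "outcome")      # cue predicts future reward
    td0_update(V, "outcome", 1.0, "terminal")  # reward of 1 is delivered
```

With repeated episodes, V["outcome"] converges toward the reward magnitude 1 and V["cue"] toward the discounted value gamma * 1 = 0.9, while the prediction error at the cue vanishes — mirroring the classic shift of dopamine responses from reward delivery to the predictive cue.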
In parallel, the last 10 years have seen a growing interest in computational models for the coordination of different types of learning algorithms, e.g. model-free and model-based RL. Such a coordination makes it possible to explain more diverse behaviors and learning strategies in humans, monkeys and rodents. It also seems a promising way to endow robots (and more generally autonomous agents) with the ability to autonomously decide which learning strategy is appropriate in different encountered situations, while at the same time minimizing computation cost. In particular, I will show some results in robot navigation and human-robot interaction tasks where the robot learns (1) which strategy (either model-free or model-based) is the most efficient in each situation, and (2) which strategy has the lowest computation cost/time when both strategies offer the same performance in terms of reward obtained from the environment.
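The two-criterion arbitration described above can be sketched as a simple decision rule. This is a deliberate simplification under my own assumptions (the actual models use richer, learned estimates); the function name, the tie margin, and the performance/cost inputs are all hypothetical.

```python
def choose_strategy(perf_mb, perf_mf, cost_mb, cost_mf, tie_margin=0.05):
    """Pick 'MB' (model-based) or 'MF' (model-free).
    perf_*: recent estimates of reward obtained with each strategy.
    cost_*: average computation time per decision for each strategy.
    Criterion (1): prefer the more efficient strategy in this situation.
    Criterion (2): when performance is equivalent, minimize computation cost."""
    if abs(perf_mb - perf_mf) <= tie_margin:
        return "MB" if cost_mb < cost_mf else "MF"
    return "MB" if perf_mb > perf_mf else "MF"

# Early in learning, model-based planning typically outperforms the habit:
early = choose_strategy(perf_mb=0.8, perf_mf=0.4, cost_mb=0.02, cost_mf=0.001)
# After the model-free values converge, performance ties and the
# computationally cheap strategy takes over:
late = choose_strategy(perf_mb=0.8, perf_mf=0.78, cost_mb=0.02, cost_mf=0.001)
```

The design intuition is that this reproduces the behavioral shift from flexible, costly planning to cheap habitual control once both strategies yield the same reward.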

These robotic applications of a neuro-inspired framework for the coordination of model-based and model-free reinforcement learning provide new insights into the dynamics of learning in more realistic, noisy, embodied situations, which can bring some feedback and novel hypotheses for Neuroscience and Psychology.


Bruno Poucet

Goal-directed spatial navigation and the hippocampus

SHORT BIO:

Bruno Poucet is currently Director of Research at the CNRS and leads the Laboratory of Cognitive Neuroscience in Marseille. This Center, which hosts about 70 people, aims to understand the neural bases of cognitive processing. Research is conducted on both humans and animals (rodents) using various techniques (electrophysiology, optogenetics, TMS, EEG, fMRI).

His personal research interests bear on the neural bases of spatial cognition in animals, using a multidisciplinary approach to the problem of how animals process spatial information to navigate in space. Emphasis is put on the role of the hippocampus and several neocortical areas (parietal, prefrontal, retrosplenial, striate cortices) thought to subserve distinct functions in spatial processing. His studies of unit activity in freely moving rats have shown several important properties of "place cells" in the hippocampus. His group is now investigating the nature of the coupling between the spatial firing of place cells and navigation performance, to establish whether place cells play an important role in the computation of paths.

ABSTRACT:

In this talk, I will summarize results on the properties of hippocampal place cells in the rat, which emphasize their participation in a neural network dedicated to the coding of spatial information and goal-directed navigation. This network likely includes other types of spatial cells, such as head direction cells and grid cells, but also prefrontal cortical neurons, one function of which is to code for spatial goals. Our recent data highlight the importance of the communication between these two structures.
