My main research interests lie at the interface of deep learning, physics, and neuroscience. I am particularly interested in the physics of computation and learning.
Together with Yoshua Bengio, I introduced a novel mathematical framework for gradient-descent-based machine learning called "equilibrium propagation" (Eqprop). Compared to the more conventional framework based on automatic differentiation (i.e. "backpropagation"), Eqprop has the benefit that inference and gradient computation are performed using the same physical laws. By suggesting a path to perform the desired computations (inference and learning) more efficiently, this framework may have implications for the design of novel hardware ("accelerators") for machine learning.
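To make the idea concrete, here is a minimal sketch of the standard two-phase Eqprop procedure (a free relaxation to equilibrium, then a weakly nudged relaxation, and a weight update that contrasts the two equilibria). The toy quadratic energy, the variable names, and all hyperparameters below are illustrative assumptions, not the specific models studied in my work.

```python
# Minimal Eqprop sketch in JAX, assuming the standard two-phase formulation:
#   1) free phase: relax the state s to a minimum of the energy E(theta, s, x)
#   2) nudged phase: relax s under E + beta * C(s, y) for a small beta
#   3) update: (dE/dtheta at nudged equilibrium - dE/dtheta at free equilibrium) / beta
# The energy function and shapes here are toy choices for illustration only.
import jax
import jax.numpy as jnp

def energy(theta, s, x):
    """Toy quadratic energy with a bilinear coupling between the clamped input x
    and the free state s."""
    W, b = theta
    return 0.5 * jnp.sum(s ** 2) - s @ (W @ x) - s @ b

def cost(s, y):
    """Squared-error cost attached to the state (playing the role of output units)."""
    return 0.5 * jnp.sum((s - y) ** 2)

def relax(theta, x, y, beta, s0, steps=200, lr=0.05):
    """Gradient descent on the total energy E + beta * C until (near) equilibrium."""
    total = lambda s: energy(theta, s, x) + beta * cost(s, y)
    grad_s = jax.grad(total)
    s = s0
    for _ in range(steps):
        s = s - lr * grad_s(s)
    return s

def eqprop_grad(theta, x, y, beta=0.01):
    """Eqprop gradient estimate: contrast dE/dtheta at the nudged and free equilibria."""
    s_free = relax(theta, x, y, beta=0.0, s0=jnp.zeros_like(y))
    s_nudged = relax(theta, x, y, beta=beta, s0=s_free)
    dE_dtheta = jax.grad(energy, argnums=0)
    g_free = dE_dtheta(theta, s_free, x)
    g_nudged = dE_dtheta(theta, s_nudged, x)
    return jax.tree_util.tree_map(lambda a, b: (a - b) / beta, g_nudged, g_free)

# Usage: one Eqprop update on random data (illustrative shapes only).
key = jax.random.PRNGKey(0)
W = 0.1 * jax.random.normal(key, (3, 4))
b = jnp.zeros(3)
x = jnp.ones(4)
y = jnp.array([0.0, 1.0, 0.0])
dW, db = eqprop_grad((W, b), x, y)
W, b = W - 0.1 * dW, b - 0.1 * db
```

The key point this sketch illustrates is that the same relaxation dynamics (here, descent on the energy) is used both for inference and for producing the learning signal; no separate backward pass through a computational graph is needed.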
Since its inception, many collaborators and other research groups have contributed to the development of the Eqprop framework.
Chapter 2 of my PhD thesis provides a (relatively recent) overview. See also these notes on Eqprop.