
Congratulations to Professor Colin Akerman and members of his team, who have been awarded the inaugural Sejnowski-Hinton Prize for their groundbreaking 2016 paper “Random synaptic feedback weights support error backpropagation for deep learning”.

Potential synaptic circuitry underlying feedback alignment, shown for a single hidden unit. There are many possible configurations that could support learning with feedback alignment, or algorithms like it, and it is this structural flexibility that is important.

Professor Akerman and Dr Timothy Lillicrap - a postdoctoral scientist in the Department - collaborated with colleagues from the University of St Andrews and the University of Toronto on this work. The award of the Sejnowski-Hinton Prize recognises the paper’s discovery of “feedback alignment”, which has had a major impact within the scientific community and helped to establish a new sub-field of biologically plausible learning rules. For decades, scientists have tried to understand how the brain could learn using only the information available at individual synapses, without relying on the transfer of precise, global error signals. The award-winning paper showed that layers of interconnected neurons can learn effectively even when the feedback they receive is fixed and random, rather than a perfect mirror of the forward connections - as required by the backpropagation algorithm used to train artificial neural networks. Remarkably, the authors revealed that a network’s forward synaptic connection weights naturally align with these random feedback signals over the course of learning. This insight provided the first concrete, biologically grounded solution to the long-standing “weight transport problem” - how real neurons could follow error gradients without needing detailed, non-local information.
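The core idea can be demonstrated in a few lines of code. The sketch below (a minimal NumPy illustration - the toy task, network size, and hyperparameters are our own choices, not taken from the paper) trains a one-hidden-layer network on a small regression problem, but propagates the output error backwards through a fixed random matrix B instead of the transpose of the forward weights, as backpropagation would require:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) from random inputs.
X = rng.uniform(-3, 3, size=(256, 1))
Y = np.sin(X)

# Small network: 1 -> 32 -> 1, tanh hidden layer.
W1 = rng.normal(0, 0.5, size=(1, 32))
W2 = rng.normal(0, 0.5, size=(32, 1))

# Fixed random feedback matrix B stands in for W2.T in the backward pass.
B = rng.normal(0, 0.5, size=(1, 32))

lr = 0.05
losses = []
for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    e = y_hat - Y                      # output error
    losses.append(float(np.mean(e ** 2)))

    # Backward pass: feedback alignment sends the error through the
    # fixed random B; exact backpropagation would use e @ W2.T here.
    delta_h = (e @ B) * (1 - h ** 2)

    # Gradient-style weight updates.
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")

# Over training, the forward weights W2 come to point in a similar
# direction to the fixed feedback B: their cosine similarity grows.
cos = (B @ W2).item() / (np.linalg.norm(B) * np.linalg.norm(W2))
print(f"cosine(B, W2) = {cos:.2f}")
```

Despite the backward pathway carrying no information about the forward weights, the loss falls, because the forward weights adjust themselves so that the random feedback becomes a useful teaching signal - the alignment phenomenon the paper describes.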

About the Prize: The Sejnowski-Hinton Prize is rooted in collaboration and generosity. In 1983, Geoff Hinton and Terry Sejnowski made a pact: if only one of them received a Nobel Prize for their work on Boltzmann machines, they would share the prize money. In 2024, when John Hopfield and Geoff Hinton were awarded the Nobel Prize in Physics - with Boltzmann machines cited among the contributions - Terry declined his share. Instead, Geoff donated the funds to establish the Sejnowski-Hinton Prize, honouring their longstanding partnership and commitment to advancing theories of the brain.

Read the full paper: Random synaptic feedback weights support error backpropagation for deep learning