Research Projects
Reverse engineering the brain
The brain creates a coherent interpretation of the external world based on input from its sensory systems. Yet the data from the senses are noisy and ambiguous. How does the brain synthesise its percepts? Recent psychophysical experiments indicate that humans perform near-optimal Bayesian Inference in a wide variety of cognitive tasks, such as motion perception, decision making and motor control. In Bayesian Inference – a powerful mathematical framework – the probability of a particular state of the world is computed not only from the sensory input signals but also from prior knowledge about the external world that the system has already learned. The Bayesian framework has also been shown to be ideal for fusing information from different sensory modalities and is robust to errors in individual sensors. If we could understand, in an engineering sense, how the brain accomplishes this, then we could apply this knowledge to the electronic sensors we build.
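As a concrete illustration of this fusion idea, the short sketch below combines two noisy estimates of the same quantity (for example, an object position reported by vision and by audition) under the textbook assumption of independent Gaussian noise; the function name, noise levels and numbers are purely illustrative.

```python
import numpy as np

def fuse_gaussian_cues(mu_a, var_a, mu_b, var_b):
    """Optimal (maximum a posteriori) fusion of two independent Gaussian cues.

    Each cue is weighted by its reliability (inverse variance), so the less
    noisy sense dominates the combined percept.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    mu_fused = w_a * mu_a + w_b * mu_b
    var_fused = 1.0 / (1.0 / var_a + 1.0 / var_b)   # always <= min(var_a, var_b)
    return mu_fused, var_fused

# Example: vision reports the object at 10.0 deg (sd 1 deg),
# audition reports 14.0 deg (sd 4 deg).
mu, var = fuse_gaussian_cues(10.0, 1.0**2, 14.0, 4.0**2)
print(mu, np.sqrt(var))   # fused estimate sits close to the more reliable visual cue
```

Note that the fused variance is always smaller than that of either cue alone, which is why combining sensors in this way is robust to errors in any individual sensor.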
Neurones in the brain use action potentials (spikes) to communicate with each other. From calculations based on the energy consumption of the brain, it has been estimated that, on average, each neurone fires only one spike per second, although individual sensory neurones can fire at close to 1000 spikes per second. How Bayesian Inference can be implemented using spiking neurones with such slow communication rates is an intriguing question. In the past five years, a dozen papers have been published offering glimpses of how this could be achieved. Taking a similar approach to electronic sensor networks would minimise the bandwidth needed for communication and therefore minimise power consumption.
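One widely studied way of linking spikes to Bayesian Inference is population decoding. The sketch below is a textbook-style toy rather than any particular published model: a stimulus is decoded from the Poisson spike counts of a small population of neurones with Gaussian tuning curves, with all tuning parameters chosen by us for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small population of neurones with Gaussian tuning curves over a stimulus axis.
preferred = np.linspace(-10, 10, 21)        # preferred stimulus of each neurone
tuning_width = 3.0
peak_rate = 20.0                            # spikes per second at the preferred stimulus
window = 0.2                                # observation window in seconds

def rates(stimulus):
    """Mean firing rate of every neurone for a given stimulus value."""
    return peak_rate * np.exp(-0.5 * ((stimulus - preferred) / tuning_width) ** 2)

# Simulate spike counts for a true stimulus, then decode it.
true_stimulus = 2.5
counts = rng.poisson(rates(true_stimulus) * window)

# Posterior over a grid of candidate stimuli (flat prior, Poisson likelihood).
grid = np.linspace(-10, 10, 401)
log_post = np.array([np.sum(counts * np.log(rates(s) * window + 1e-12)
                            - rates(s) * window) for s in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("decoded stimulus:", grid[np.argmax(post)])
```

Even with only a handful of spikes per neurone in the observation window, the posterior concentrates around the true stimulus, which is the sense in which sparse spiking can still support near-optimal inference.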
The delay from the firing of a spike by one neurone to its reception by another neurone is typically in the range of one to forty milliseconds. These propagation delays have been largely ignored by both the neurophysiology and the neural computation communities. It has only very recently been recognised that incorporating delays enables a whole new class of computational systems, termed reservoir computing. Reservoir computing is quite possibly the computational architecture by which the brain performs Bayesian Inference.
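To give a flavour of the reservoir idea, the sketch below trains a minimal echo state network, one common form of reservoir computing: a fixed random recurrent network provides a rich, fading memory of the input, and only a linear readout is trained. The network size, scaling factors and the delayed-recall task are our own illustrative choices, not a model of the brain.

```python
import numpy as np

rng = np.random.default_rng(1)
n_reservoir, n_steps, delay = 100, 2000, 5

# Fixed random reservoir and input weights (only the readout is learned).
W = rng.normal(size=(n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # scale spectral radius below 1
W_in = rng.normal(scale=0.5, size=n_reservoir)

u = rng.uniform(-1, 1, n_steps)                      # random input signal
target = np.roll(u, delay)                           # task: recall the input 5 steps ago

# Run the reservoir and collect its states.
x = np.zeros(n_reservoir)
states = np.zeros((n_steps, n_reservoir))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train the linear readout with ridge regression (skip the initial transient).
washout = 100
S, y = states[washout:], target[washout:]
w_out = np.linalg.solve(S.T @ S + 1e-4 * np.eye(n_reservoir), S.T @ y)

prediction = states @ w_out
print("readout error:", np.sqrt(np.mean((prediction[washout:] - y) ** 2)))
```

The essential point is that the recurrent dynamics (and, in a spiking network, the distribution of propagation delays) hold recent history in the state of the network, so a simple readout can recover information about past inputs.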
To read out such reservoirs of neurones, the propagation delays from neurones in the reservoir to the read-out neurone have to be precisely tuned. Some very limited neurophysiological evidence for adaptive delays has begun to emerge, but in general delay adaptation in the brain has hardly been studied. The demonstration of a learning rule for adaptive delays in the brain would constitute a breakthrough discovery in neuroscience.
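To make the role of tuned delays concrete, the toy sketch below adapts read-out delays so that spikes from several source neurones arrive at a read-out neurone simultaneously. The update rule is purely hypothetical and illustrative; it is not a proposed or established biological learning mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sources = 5

# Each source neurone fires at a fixed time within a repeating pattern (ms).
spike_times = rng.uniform(0.0, 20.0, n_sources)

# Read-out delays start at random values; we adapt them so that all spikes
# arrive at the read-out neurone at the same time (coincidence detection).
delays = rng.uniform(1.0, 40.0, n_sources)
eta = 0.2                                   # learning rate (hypothetical)

for _ in range(200):
    arrivals = spike_times + delays
    target = arrivals.mean()                # align arrivals on their mean
    delays += eta * (target - arrivals)     # shift each delay toward coincidence
    delays = np.clip(delays, 1.0, 40.0)     # keep delays in a plausible range

print("spread of arrival times (ms):", np.ptp(spike_times + delays))
```

In this toy setting the delays converge so that all spikes of the pattern arrive together, which is the property a real read-out neurone would need in order to detect a specific spatiotemporal pattern in the reservoir.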
The BENS will study the implications of the Bayesian Inference framework for neurophysiology and computational neuroscience, and will aim to discover how it is implemented in the brain.
Neuromorphic Engineering
“Read this aloud and your inner ear, by itself, will be carrying out at least the equivalent of a billion floating-point operations per second, about the workload of a typical game console. The inner ear together with the brain can distinguish sounds that have intensities ranging over 120 decibels, from the roar of a jet engine to the rustle of a leaf, and it can pick out one conversation from among dozens in a crowded room. It is a feat no artificial system comes close to matching. But what’s truly amazing is the neural system’s efficiency. Consuming about 50 watts, that game console throws off enough heat to bake a cookie, whereas the inner ear uses just 14 microwatts and could run for 15 years on one AA battery. If engineers could borrow nature’s tricks, maybe they could build faster, better and smaller devices that don’t literally burn holes in our pockets.” – R. Sarpeshkar, IEEE Spectrum, May 2006
Efficient, parallel, low-power computation is a hallmark of brain computation and the goal of neuromorphic engineering. The focus of this project is to design, implement and test the most accurate electronic very large scale integrated (VLSI) circuit model of the cochlea and its associated auditory signal processing. In creating this electronic model, we will develop new schemes for parallel, low-power auditory signal processing that would be impossible to study in any other way. The cochlea model will accurately simulate the fluid-dynamic properties of the biological cochlea and will include active gain control and active quality-factor control of the cochlear partition. It will also implement the processing performed by the sensory transducers and the spiking neurones of the auditory nerve. We will design, implement and test a spiking-neurone chip capable of simulating the response properties of many of the neurone types found in biology, with programmable parameters.
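As a software analogue of the kind of processing the silicon cochlea performs (not the VLSI design itself), the sketch below pushes a sound waveform through a cascade of second-order resonant filters with exponentially spaced characteristic frequencies, and half-wave rectifies the output taps as a crude stand-in for the sensory transducers. All filter parameters are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

fs = 16000                                  # sample rate (Hz)
t = np.arange(0, 0.05, 1.0 / fs)
sound = np.sin(2 * np.pi * 1000 * t)        # test tone at 1 kHz

# Characteristic frequencies of the cascade stages, high to low, spaced
# exponentially like positions along the cochlear partition.
cfs = np.geomspace(4000, 200, 32)
Q = 4.0                                     # resonance (quality factor) of each stage

def resonant_lowpass(cf):
    """Second-order resonant low-pass section (standard biquad coefficients)."""
    w0 = 2 * np.pi * cf / fs
    alpha = np.sin(w0) / (2 * Q)
    b = np.array([(1 - np.cos(w0)) / 2, 1 - np.cos(w0), (1 - np.cos(w0)) / 2])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b / a[0], a / a[0]

# Push the sound through the cascade; tap and half-wave rectify each stage,
# a crude stand-in for the hair-cell transducers.
signal = sound
taps = []
for cf in cfs:
    b, a = resonant_lowpass(cf)
    signal = lfilter(b, a, signal)
    taps.append(np.maximum(signal, 0.0))

energies = [tap.mean() for tap in taps]
print("most responsive stage CF (Hz):", cfs[int(np.argmax(energies))])
```

The cascade structure is what makes this architecture attractive for low-power analogue VLSI: each stage only processes the output of its neighbour, mirroring the travelling wave along the biological cochlear partition.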
Neuromorphic engineering is now capable of interfacing any neuromorphic circuits that adhere to the now-standard Address Event Representation (AER) protocol with each other or with a computer. This includes not only our circuits but also those designed by other neuromorphic engineers. The interface is, furthermore, programmable, enabling it to perform computation on the spike trains as they pass from one chip to another. This tool enables advanced spike-based signal processing systems. We will develop models and circuits that demonstrate the advantages of spike-based processing over conventional analogue and digital signal processing for certain applications. We will use the auditory pathway as our inspiration for these systems.
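In AER, as commonly implemented, each spike is transmitted as an address (identifying the neurone that fired) together with a timestamp. The sketch below is our own illustration of how a programmable interface might transform a stream of such events in transit, for example remapping addresses and dropping events outside a region of interest; it does not describe any specific hardware interface.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator

@dataclass(frozen=True)
class AEREvent:
    """A single address event: which neurone fired, and when (microseconds)."""
    address: int
    timestamp_us: int

def remap_and_filter(events: Iterable[AEREvent],
                     address_map: Callable[[int], int],
                     keep: Callable[[AEREvent], bool]) -> Iterator[AEREvent]:
    """Transform a spike stream in transit: drop unwanted events and remap the
    addresses of the rest, the kind of computation a programmable AER
    interface could perform between two chips."""
    for ev in events:
        if keep(ev):
            yield AEREvent(address_map(ev.address), ev.timestamp_us)

# Example: route events from a 64-channel sender into a 32-channel receiver,
# keeping only even source channels and halving their addresses.
sender_events = [AEREvent(addr, 100 * i) for i, addr in enumerate(range(64))]
receiver_events = list(remap_and_filter(sender_events,
                                        address_map=lambda a: a // 2,
                                        keep=lambda ev: ev.address % 2 == 0))
print(len(receiver_events), receiver_events[:3])
```

Because only addresses and times are exchanged, bandwidth scales with activity rather than with the number of neurones, which is the same economy of communication that sparse spiking offers the brain.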
Current Postgraduate Research Projects
Please see our current PhD projects page for a list of projects.