Research Overview

Spatial Hearing

Whether crossing the road or focusing on someone at a cocktail party, we need to localize where sounds are coming from in the world. In hearing, this is challenging because we have only two ears from which to compute sound source location. However, by comparing the signals arriving at each ear and monitoring how sounds change as our heads move, we can localize sound sources accurately.
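One well-known way to compare the signals at the two ears is the interaural time difference: a sound off to one side arrives at the nearer ear slightly earlier. The sketch below is a toy illustration of estimating that delay by cross-correlating simulated left- and right-ear signals; the sample rate, delay and noise source are made-up values, not a model used in my work.

```python
# Toy sketch: estimate the interaural time difference (ITD) of a sound
# by cross-correlating the left- and right-ear signals. All parameters
# (sample rate, delay, signal) are illustrative, not experimental values.
import numpy as np

fs = 44_100                       # sample rate (Hz)
true_itd = 300e-6                 # simulated ITD of 300 microseconds
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal

source = np.random.default_rng(0).standard_normal(t.size)  # broadband source
delay_samples = int(round(true_itd * fs))

left = source
right = np.roll(source, delay_samples)  # right ear receives a delayed copy

# Cross-correlate and take the lag with the strongest match
lags = np.arange(-4 * delay_samples, 4 * delay_samples + 1)
xcorr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
estimated_itd = lags[int(np.argmax(xcorr))] / fs

print(f"true ITD = {true_itd * 1e6:.0f} us, estimated ITD = {estimated_itd * 1e6:.0f} us")
```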

My work focuses on the coordinate frames of sound localization: specifically, how the brain integrates auditory signals with non-auditory information to represent sounds relative to the head and, more broadly, within the world. Such processes are critical for building a stable understanding of our auditory environment, even when our own movements induce dramatic changes in the acoustic input arriving at the ears.
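As a minimal illustration of the kind of coordinate transformation involved, the sketch below converts a head-centered sound azimuth into a world-centered azimuth given the head's orientation. The simple additive rule and the example angles are illustrative assumptions about the geometry, not a description of the neural computation.

```python
# Toy sketch of a head-to-world coordinate transform for sound azimuth.
# Assumes a 2D scene in which a single yaw angle describes head orientation;
# the numbers are illustrative only.
def head_to_world_azimuth(head_centered_az: float, head_yaw: float) -> float:
    """Convert a sound's azimuth relative to the head into a world-centered
    azimuth, given the head's yaw in world coordinates (degrees)."""
    return (head_centered_az + head_yaw + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)

# A sound 30 degrees to the right of the head, with the head turned 45 degrees
# to the left, lies 15 degrees left of straight ahead in world coordinates.
print(head_to_world_azimuth(30.0, -45.0))   # -15.0
```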

Perceptual Invariance

We can recognize a word spoken by different talkers, or someone's voice across many different words. Likewise, we can recognize sounds both in quiet, clean conditions and against a variety of background noises. This ability, often called 'perceptual invariance' or 'perceptual constancy', is central to our everyday experience, yet it remains fiendishly difficult to understand.

Computationally, perceptual invariance requires that the brain construct robust representations of objects across variable and noisy sensory inputs (e.g. sounds). My research aims to establish how such representations arise in the responses of neurons in the auditory system as listeners identify sounds across variations in 'orthogonal' properties (i.e. properties, such as sound location, that vary without affecting identity) and in background noise. I am also interested in the features of sound that are critical for computing sound identity, and whose loss disrupts perceptual constancy.
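To make the idea of an invariant representation concrete, the toy sketch below simulates population responses that depend on both sound identity and an orthogonal property (location), trains a linear decoder to read out identity at one location, and tests it at another; generalization across locations is one simple operational test of invariance. The response model and every parameter are invented for illustration and are not taken from my experiments.

```python
# Toy sketch: can sound identity be decoded from simulated "neural" responses
# in a way that generalizes across sound location? The response model
# (identity signal + location signal + noise) is purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_neurons, n_trials = 50, 200

identity_axis = rng.standard_normal(n_neurons)   # population direction coding identity
location_axis = rng.standard_normal(n_neurons)   # population direction coding location

def responses(identity, location):
    """Population response: identity and location each shift activity along
    their own axis, plus trial-to-trial noise."""
    return (identity * identity_axis
            + location * location_axis
            + rng.standard_normal((n_trials, n_neurons)))

# Train the decoder on sounds A (+1) and B (-1) presented at location 0
X_train = np.vstack([responses(+1, 0.0), responses(-1, 0.0)])
y_train = np.r_[np.ones(n_trials), -np.ones(n_trials)]

# Test on the same two sounds presented at a different location
X_test = np.vstack([responses(+1, 2.0), responses(-1, 2.0)])
y_test = y_train.copy()

decoder = LogisticRegression().fit(X_train, y_train)
print("identity decoding accuracy across locations:", decoder.score(X_test, y_test))
```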

Audiovisual Integration

A critical function of the central nervous system is to bring together information sampled through different sensory modalities. Multisensory integration (also called 'sensor fusion' in engineering) allows us to improve the accuracy and reliability of measurements of the outside world, while also building representations of objects that would not be possible from any single modality alone (e.g. world-centered sound location).
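One standard formalization of this reliability benefit, not specific to my own work, is maximum-likelihood cue combination: independent auditory and visual estimates are averaged in proportion to their reliabilities, and the fused estimate has lower variance than either cue alone. The sketch below works through the textbook formula with made-up numbers.

```python
# Textbook sketch of reliability-weighted (maximum-likelihood) cue combination.
# The auditory and visual estimates and their variances are invented examples.
def fuse(est_a, var_a, est_v, var_v):
    """Combine two independent Gaussian estimates by inverse-variance weighting."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # weight on the auditory estimate
    w_v = 1 - w_a                                  # weight on the visual estimate
    fused_est = w_a * est_a + w_v * est_v
    fused_var = (var_a * var_v) / (var_a + var_v)  # always smaller than either input
    return fused_est, fused_var

# Auditory estimate: 10 deg azimuth with variance 16; visual estimate: 4 deg with variance 4.
# The fused estimate (5.2 deg, variance 3.2) sits closer to the more reliable visual cue
# and is more precise than either cue alone.
print(fuse(10.0, 16.0, 4.0, 4.0))
```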

In studying multisensory processing, I have focused on the integration of auditory and visual information. This includes how memories specific to particular audiovisual combinations are formed, the neural circuits that bring together signals processed by visual and auditory cortex, and how conflicts between modalities are regulated across cortical networks. As our understanding grows, it is becoming clear that audiovisual integration occurs in parallel at multiple levels of the brain and represents a general organizing principle of neural systems.