“But the various senses incidentally perceive each other’s objects, not as so many separate senses, but as forming a single sense, when there is concurrent perception relating to the same object.”

-Aristotle, De Anima

The questions of how our brain processes the myriad of sensory signals from our environment and how it constructs a unified perceptual experience from those signals have long intrigued and baffled scientists: from the ancient Greek philosopher Aristotle (1, 2) and the Gestalt psychologist Max Wertheimer (3, transl.) to the modern-day psychologist Anne Treisman (4) and neuroscientist Wolf Singer (5). I share their interest in the so-called “binding problem”: how we perceptually group sensory features that correspond to the same object. However, my scientific interest lies in how we group sensory features across the senses rather than within a single sense.

One sensory feature in particular has been shown to facilitate sensory and multisensory binding: temporal correlation, or how dynamic sensory (e.g., visual and auditory) signals change together over time. For example, when we have a conversation with someone, their voice and their mouth movements are bound because their articulatory mouth movements change with (indeed, cause) different features of their speech. This binding improves the representation of the speaker’s voice and allows us to better focus on and understand their speech in difficult listening environments. When these signals are uncorrelated, our brains tend to segregate them. Have you ever seen a movie with dialogue in one language dubbed into another? It’s a distracting and sometimes disconcerting experience because the actors’ articulatory movements are not temporally correlated with the dubbed speech.

I am Aaron, a neuroscientist and postdoctoral researcher at the University of Rochester under the supervision of Dr. Edmund Lalor. My current research focuses on how the brain tracks the correlation between sensory signals over time, how that tracking changes sensory processing, how that change shapes our perceptual experience, and how it all manifests in behavior. More broadly, I’m interested in how selective attention influences binding (and vice versa) and how it may guide the development of our statistical representation of our multisensory environment. To investigate these questions, my research combines behavior, computational modeling, and electroencephalography (EEG), using both well-controlled complex stimuli and more naturalistic stimuli such as human speech.

I came to Rochester from Nashville, Tennessee; I received my PhD in Hearing and Speech Sciences from Vanderbilt University in the lab of Dr. Mark Wallace. I also have a BS in Neuroscience from King College (now King University) in Bristol, Tennessee.