The lab uses multiple animal models to understand how the architecture and function of sensory systems meet the behavioral demands of the animal, which are largely shaped by the body and its movements. One of our main goals is to understand how neural circuits transform the visual signals that arrive at the retina into information used by downstream circuits to 'see' and to 'act'. Our other main goal is to understand how the information encoded in visual circuits is shaped by an animal's particular movements. This understanding is pivotal to discovering how visuomotor circuits become optimized to the unique demands of individual animals, including humans.

The lab's research program is grounded in theories of embodied cognition, which suggest that perceptions are created through an interaction of the body with the brain. The implication of this very reasonable theory is somewhat scandalous: studying the brain, even with the best resolution and technology, will never be enough. This is because brain activity alone doesn't generate our perceptions; brain activity combined with bodily actions does. So to understand perception, and cognition in general, we have to study the brain and body together, as a single unified, inseparable system. A second implication is that studying multiple different types of animals, with very different bodies, might be especially helpful in understanding the brain/body system. The lab uses tree shrews and rats for cross-species studies. These animals are similarly sized, but move and see differently.

To understand the structure and function of the visual system, and how they are shaped by bodily constraints, we use in vivo two-photon imaging, electrophysiology, high-throughput behavioral monitoring, and mathematical modeling. In a subset of projects, we use sensors worn by the animal to record the actual environmental inputs that the various biological sensors, like the eye, receive during natural behaviors. We take advantage of recent progress in behavioral quantification techniques to understand how sensory inputs differ across movement behaviors, and how the brain exploits these differences. Our work will contribute to understanding sensory processing in a manner more faithful to our experience as moving animals.

The lab is funded by startup funds from Cornell University and by a fellowship from the Howard Hughes Medical Institute.

Projects

- Neural mechanisms of self- and externally generated visual information, using two-photon imaging
- Neural mechanisms of visual processing in freely moving animals, using wireless electrophysiology
- Behavioral perturbations of visual feedback during natural behavior to understand the brain-body connection
- Modeling the impact of natural behavior on visual inputs using computational models

For all of these research programs we use machine-learning-guided behavioral analysis, programming in Python/MATLAB, and closed-loop experimental paradigms. The lab's research program is inspired by ideas in embodied cognition, and our goal is to formulate testable hypotheses about the role of the body, and its movements, in sensory perception.