
Research and Publications

The goal of our lab is to define how neurons from different cortical areas interact to realize our perception of shape and motion.

We study the brain of the rhesus macaque, recording action potentials from neurons across the visual cortical hierarchy: V1, V2, V4, MT, and inferotemporal cortex (IT).

We believe that the best explanation for visual processing is mathematical, so we work to ensure that all of our results can be implemented in computational models such as deep neural networks.

[Image: a monkey and a robot pointing at a plot. Art by DALL-E]

To achieve this goal, we need animals to perform behavioral tasks, so we use modern techniques (including computer-based automated systems) to train the animals humanely and efficiently. We record from their brains using chronically implanted microelectrode arrays, which yield large amounts of data quickly, and sometimes single electrodes for novel exploratory projects (i.e., our moonshot division!). While recording, we can also use activity-manipulation techniques (such as cortical cooling, optogenetics, and chemogenetics) to perturb the cortical inputs to the neurons under study, establishing results that are causal, not just correlational.
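As a toy illustration of what these recordings yield, here is a minimal, hypothetical Python sketch (not our analysis code) that bins spike timestamps around stimulus onsets into a peri-stimulus time histogram (PSTH), a standard first summary of action-potential data. The spike times, trial timing, and window parameters below are all made up.

```python
# Minimal, hypothetical sketch: compute a peri-stimulus time histogram (PSTH)
# from spike timestamps. All numbers below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
spike_times = np.sort(rng.uniform(0, 100, size=5000))  # seconds, one "neuron"
stim_onsets = np.arange(1.0, 99.0, 2.0)                # one stimulus every 2 s

window = (-0.2, 0.8)   # seconds around each stimulus onset
bin_width = 0.010      # 10 ms bins
edges = np.arange(window[0], window[1] + bin_width, bin_width)

counts = np.zeros(len(edges) - 1)
for t0 in stim_onsets:
    # Align spikes to this trial's onset and accumulate the binned counts
    in_window = (spike_times >= t0 + window[0]) & (spike_times < t0 + window[1])
    rel = spike_times[in_window] - t0
    counts += np.histogram(rel, bins=edges)[0]

# Convert summed counts to a trial-averaged firing rate in spikes/s
rate = counts / (len(stim_onsets) * bin_width)
print(rate[:5])
```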

Our experimental work is informed by machine learning. We use a variety of deep neural network types (including convolutional, recurrent, and generative adversarial networks) to test preliminary hypotheses, interpret results, and generate interesting stimuli for biology-based experiments. Our programming languages of choice are MATLAB and Python.
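To give a flavor of how these models meet the physiology, below is a simplified, hypothetical Python sketch (not our lab pipeline): it treats one unit of a pretrained CNN as a stand-in "neuron" and runs a basic evolutionary loop to find images that drive it, in the spirit of our image-evolution papers listed below. The network, unit, population size, and mutation scale are arbitrary choices for illustration.

```python
# Simplified, hypothetical sketch of evolutionary "neuron control":
# evolve images to maximize the response of one unit in a pretrained CNN
# (a stand-in for a recorded neuron). Not the lab's actual pipeline.
import torch
import torch.nn.functional as F
import torchvision.models as models

torch.manual_seed(0)
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

UNIT = 123                      # arbitrary output unit, chosen for illustration
POP, KEEP, SIGMA = 32, 8, 0.05  # population size, elites kept, mutation scale

def responses(pop):
    # Upsample low-resolution image codes to AlexNet's input size and
    # read out one logit as the "neuronal response" (normalization omitted)
    imgs = F.interpolate(pop, size=(224, 224), mode="bilinear",
                         align_corners=False)
    with torch.no_grad():
        return model(imgs)[:, UNIT]

pop = torch.rand(POP, 3, 64, 64)  # start from random images in [0, 1]
for gen in range(20):
    scores = responses(pop)
    elites = pop[scores.topk(KEEP).indices]
    # Refill the population with noisy copies of the elites (mutation step)
    pop = (elites.repeat(POP // KEEP, 1, 1, 1)
           + SIGMA * torch.randn(POP, 3, 64, 64)).clamp(0, 1)
    print(f"generation {gen}: best response = {scores.max().item():.3f}")
```

In the actual experiments, the optimizer queries responses from a recorded neuron rather than a CNN unit, and candidate images are synthesized by a deep generative network rather than mutated directly in pixel space.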

Solving the problem of visual recognition at the intersection of visual neuroscience and machine learning will yield applications that improve automated visual recognition in fields such as medical imaging, security, and self-driving vehicles. Just as importantly, it will illuminate how our inner experience of the visual world comes to be.

Our papers include the following:

  1. Mueller KN, Carter MC, Kansupada JA, Ponce CR. Macaques recognize features in synthetic images derived from ventral stream neurons. PNAS, 2023.
  2. Wang B, Ponce CR. On the level sets and invariance of neural tuning landscapes. NeurIPS 2022 Workshop, 2022.
  3. Wang B, Ponce CR. Tuning landscapes of the ventral stream. Cell Reports, 2022.
  4. Wang B, Ponce CR. High-performance evolutionary algorithms for online neuron control. Genetic and Evolutionary Computation Conference, 2022. DOI: 10.32470/CCN.2022.1034-0.
  5. Bardon A, Xiao W, Ponce CR, Livingstone MS, Kreiman G. Face neurons encode nonsemantic features. PNAS, 119(16):e2118705119, 2022.
  6. Rose O, Johnson J, Wang B, Ponce CR. Visual prototypes of the ventral stream are attuned to complexity and gaze behavior. Nature Communications, 12:6723, 2021.
  7. Wang B, Ponce CR. A geometric analysis of deep generative image models and its applications. International Conference on Learning Representations (ICLR), 2021.
  8. Arcaro MJ, Ponce CR, Livingstone MS. The neurons that mistook a hat for a face. eLife, 9:e53798, 2020.
  9. Ponce CR, Xiao W, Schade PF, Hartmann TS, Kreiman G, Livingstone MS. Evolving images for visual neurons using a deep generative network reveals coding principles and neuronal preferences. Cell, 177(4):999-1009, 2019.
  10. Ponce CR, Lomber SG, Livingstone MS. Posterior inferotemporal cortex cells use multiple visual pathways for shape encoding. Journal of Neuroscience, 37(19):5019-5034, 2017.
  11. Ponce CR, Lomber SG, Born RT. Integrating motion and depth via parallel pathways. Nature Neuroscience, 11(2):216-223, 2008.

See a full list of publications »


An explainer video from the Society for Neuroscience meeting on Mueller, Carter, Kansupada and Ponce (2023):


An explanation of a common technique used in the lab:


Some explainer videos from the Society for Neuroscience meeting:


An explainer video on our ICLR 2021 paper, by Binxu Wang:

https://iclr.cc/virtual/2021/poster/3366

Binxu Wang explains the geometric analysis of deep generative models at the International Conference on Learning Representations.