Welcome to my home page! Here you will find information about my research. In a nutshell, I work on mapping neural computations of the brain to silicon microprocessors. By taking inspiration from the structural organization, the dynamical principles and the computational algorithms of brain circuits and networks, I am exploring novel ways to tackle problems in machine learning and robotics.
My work is part of DARPA's SyNAPSE project, in collaboration with researchers at IBM. To implement scalable brain-like networks in silicon, we have developed a highly efficient neuromorphic chip architecture in nanoscale CMOS technology. The architecture uses digital neurons, asynchronous communication circuits, and distributed on-chip memory to implement large-scale models of biological neurons and their networks.
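To give a flavor of what a "digital neuron" means here, the sketch below shows a minimal discrete-time leaky integrate-and-fire update, the kind of integer state machine such architectures typically implement in hardware. All names, parameters, and values are illustrative assumptions, not the actual chip's neuron model.

```python
def lif_step(v, spikes_in, weights, leak=1, threshold=100, v_reset=0):
    """One integer update step of a leaky integrate-and-fire neuron:
    integrate weighted input spikes, apply a constant leak, and emit a
    spike (resetting the membrane potential) when the threshold is crossed."""
    v += sum(w for w, s in zip(weights, spikes_in) if s)  # integrate inputs
    v -= leak                                             # constant leak
    if v >= threshold:                                    # fire and reset
        return v_reset, 1
    return v, 0

# Usage: drive one neuron with two always-active input lines for 5 steps.
v, out = 0, []
for t in range(5):
    v, spike = lif_step(v, spikes_in=[1, 1], weights=[30, 40], leak=1)
    out.append(spike)
# out → [0, 1, 0, 1, 0]: the neuron fires on every second timestep.
```

Because the state and arithmetic are all integers, an update like this maps directly onto compact digital logic, which is what makes large-scale networks of such neurons feasible on a chip.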
This work has received significant press coverage (Science, Nature, Scientific American, MIT Tech Review, Bloomberg, NY Times, Wall Street Journal, EE Times, CNET, Economist, and many others) and has resulted in several publications.
Currently, I am implementing canonical neural computations in our system, which I will then use in algorithms for robotic vision and robotic chemosensation.
Besides brain-inspired approaches to machine learning, I am also interested in neuroinformatics (particularly in the structure of brain networks) and in the application of dynamical systems theory and statistical mechanics to the study of neural coding across different spatial and temporal scales.
- Rajit Manohar (Asynchronous Computation, Parallel Distributed Processing, Digital VLSI)
- Thomas Cleland (Olfactory Systems Physiology, Computational Neuroscience)
- Barbara Finlay (Evolutionary Neuroscience)
- David Field (Neural Coding in Vision, Computational Neuroscience)
- Alyosha Molnar (Signal Processing in the Retina, Analog VLSI)
- Kevin Tang (Distributed Synchronization and Optimization of Networks)
- Ashutosh Saxena (Robotics, Computer Vision)
cornell dot edu