The biological brain implements massively parallel computation using an architecture fundamentally different from that of today's computers. Even a rudimentary comparison reveals some obvious differences. The vast majority of silicon microprocessors are organized around a global clock, whereas brain circuits operate in an asynchronous, event-driven fashion. Instructions in present-day computer architectures execute sequentially, whereas neural computation is massively parallel and distributed. Microprocessors maintain a clear separation between processor and memory, whereas in biology memory and computation are tightly integrated at the device (cellular) level. A silicon logic gate communicates with only a few of its neighbors, while each neuron may talk to thousands of others.
Current microprocessors can crunch numbers seamlessly, but they are terrible at tasks that brains do with ease. What is it about the architecture of neural systems that makes them excel at pattern recognition, sensory integration, and cognitive tasks? How can we use insights from these systems to design better computing machines? Designing neuromorphic chips will help us investigate these questions.
Another motivation behind designing these chips is to build a platform for running large-scale, real-time simulations to aid neuroscience research. Studying a system as complex as the brain invariably leads to analytically intractable circuits or time-consuming, surgically invasive laboratory experiments. VLSI implementations of biologically accurate neural network models can yield powerful tools for experimentation, where theoretical insights can be put to the test through large-scale simulations. Hardware acceleration will allow researchers to run real-time simulations at scales beyond what supercomputers can reach, and at a fraction of the cost (once commercialized) and power consumption.
Some areas I'm investigating are:
- Digital vs. Analog Silicon Neurons: Analog circuits closely mimic the continuous-time operation of neurons but are noisy and imprecise, making it difficult to match them to mathematical neuron models. They also scale poorly to the latest fabrication technologies. Digital circuits reliably approximate neural operations, which is sufficient for the discrete-time simulations used in computational neuroscience research. Which representation is best for a given application?
- Communication circuits for neuromorphic systems: Address-Event Representations, Spike-based routing, Chip-to-chip communication.
- Reconfigurable neuromorphic architectures: Chips where networks are not hardwired but can be configured by the user.
- Learning circuits through spike-timing-dependent plasticity.
- What level of detail is useful for computational research? Are the rich dynamics in Hodgkin-Huxley models necessary? Or do reduced models help computational tractability? How should a synapse be modeled? Binary switches? Exponential rises and decays? Neurotransmitter diffusion processes? Is molecular-level analysis required for certain phenomena (e.g. long-term memory)?
- Models of the retina in a configurable chip
- Models of the olfactory bulb in a configurable chip
- Dendritic Trees: A substantial amount of processing may be carried out in the dendritic branches of neurons. How should these computations be modeled, and what are efficient circuits for them?
- Simulation tradeoffs: Custom circuits vs FPGAs vs GPUs vs Parallel CPU clusters
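As a concrete example of the discrete-time, reduced-model simulations discussed in the list above, here is a minimal leaky integrate-and-fire neuron integrated with Euler's method. All parameter values are illustrative assumptions, not taken from any particular chip or paper:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau_m=20e-3, v_rest=-70e-3,
                 v_thresh=-50e-3, v_reset=-70e-3, r_m=1e8):
    """Euler-integrate a leaky integrate-and-fire neuron.

    input_current: array of input current samples (amperes), one per time step.
    Returns the membrane-voltage trace and the indices of spike times.
    """
    v = np.full(len(input_current), v_rest)
    spikes = []
    for t in range(1, len(input_current)):
        # Leak toward rest plus input drive, scaled by the membrane time constant.
        dv = (-(v[t - 1] - v_rest) + r_m * input_current[t - 1]) / tau_m
        v[t] = v[t - 1] + dt * dv
        if v[t] >= v_thresh:  # threshold crossing: emit a spike and reset
            spikes.append(t)
            v[t] = v_reset
    return v, spikes
```

Even this crude sketch makes the digital-vs.-analog question concrete: a digital implementation evaluates exactly this update rule each clock tick, while an analog circuit would realize the leak and integration in continuous time with device physics.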
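The Address-Event Representation mentioned in the list above can be sketched in a few lines of software: each spike is serialized as a (timestamp, neuron address) event on a shared bus, so only the identity and time of active neurons are communicated. A minimal model, where the binary raster format and time step are illustrative assumptions:

```python
def encode_aer(spike_raster, dt=1e-4):
    """Encode a binary (time step x neuron) spike raster as a list of
    address events (timestamp_seconds, neuron_address), ordered by time."""
    events = []
    for t, row in enumerate(spike_raster):
        for addr, fired in enumerate(row):
            if fired:
                events.append((t * dt, addr))
    return events
```

The point of the representation is that bus bandwidth is spent only on events, which suits the sparse, asynchronous activity of spiking networks far better than streaming every neuron's state every cycle.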
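The spike-timing-dependent plasticity mentioned above is commonly modeled with a pair-based exponential learning window: a presynaptic spike shortly before a postsynaptic spike strengthens the synapse, the reverse ordering weakens it. A sketch of that window, with illustrative amplitudes and time constants:

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012,
            tau_plus=20e-3, tau_minus=20e-3):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (seconds).

    Pre-before-post (delta_t > 0) potentiates; post-before-pre depresses,
    with each effect decaying exponentially in the spike-time difference.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))
```

Circuits that realize this rule need only local spike-timing information at each synapse, which is part of what makes STDP attractive for on-chip learning.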