Joe Bates (Singular Computing)

Title: Practical Approximate Computing

Abstract: Approximate computers can be built as traditional digital machines in which only the arithmetic hardware is modified. For many interesting tasks, software can efficiently drive error levels far below what the hardware provides, if needed. We show examples of this in vision, image processing, radar, speech, and deep learning, done in collaboration with MIT, CMU, Sandia, BAE, and others. We discuss approximate computing as an enabler for embedded systems and practical billion-core machines.
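
As a hedged illustration of how software can drive error below the hardware's level: if each approximate operation carries independent relative noise (a simplifying assumption for illustration, not a description of Singular's hardware), averaging K independent evaluations shrinks the RMS error by roughly a factor of the square root of K. A minimal Python sketch:

```python
import random

random.seed(0)

def approx_mul(a, b, noise=0.01):
    # hypothetical model of an approximate multiplier:
    # each result carries i.i.d. relative Gaussian error
    return a * b * (1.0 + random.gauss(0.0, noise))

def averaged_mul(a, b, k):
    # software-level error reduction: average k independent evaluations
    return sum(approx_mul(a, b) for _ in range(k)) / k

exact = 3.0 * 7.0
trials = 2000
rms_single = (sum((approx_mul(3.0, 7.0) - exact) ** 2
                  for _ in range(trials)) / trials) ** 0.5
rms_avg64 = (sum((averaged_mul(3.0, 7.0, 64) - exact) ** 2
                 for _ in range(trials)) / trials) ** 0.5
# averaging 64 independent draws cuts the RMS error by about a factor of 8
```

The trade is the usual one: K-fold averaging buys accuracy at K-fold cost, which only pays off when the approximate operations are much cheaper than exact ones.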


Steve Furber (University of Manchester)

Title: SpiNNaker update

Abstract: The SpiNNaker project, now offered as one of two neuromorphic platforms supported by the European Union ICT Flagship Human Brain Project, is a digital many-core computer incorporating a million mobile-phone processors, optimised for real-time brain-modelling research applications. In this talk I will give an overview of the platform and describe some of the latest research results that have been generated using it.


Pentti Kanerva (UC Berkeley)

Title: Computing with Hypervectors

Abstract: Hypervectors are high-dimensional (e.g., D = 10,000), (pseudo)random vectors with independent, identically distributed (i.i.d.) components. Computing with hypervectors is an alternative to conventional (von Neumann) computing with Booleans and numbers, and to neural nets and deep learning trained with gradient descent (error back-propagation). At the core is an algebra of operations on vectors, resembling the algebra of numbers that makes computing with numbers so useful. New representations are computed from existing ones very fast compared to arriving at them through gradient descent, and the algebra allows composed vectors to be factored into their constituents. Computing with hypervectors resembles traditional neural nets in its reliance on distributed representation, making it tolerant of noise and component failure. It fills the gap between traditional and neural-net computing, and the architecture is ideal for realization in nanoelectronics.
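
The core operations can be sketched in a few lines. The sketch below uses binary hypervectors with XOR as binding and componentwise majority as bundling — one common instantiation of the algebra, chosen here for brevity; the operation names are standard but the code is illustrative, not a reference implementation:

```python
import random

D = 10000
rng = random.Random(1)

def rand_hv():
    # pseudorandom binary hypervector with i.i.d. components
    return [rng.randrange(2) for _ in range(D)]

def bind(x, y):
    # binding via componentwise XOR; self-inverse, so bound
    # pairs can be factored by re-binding with one constituent
    return [a ^ b for a, b in zip(x, y)]

def bundle(vectors):
    # bundling (superposition) via componentwise majority vote
    n = len(vectors)
    return [1 if sum(col) * 2 > n else 0 for col in zip(*vectors)]

def hamming(x, y):
    # normalized Hamming distance: ~0.5 for unrelated hypervectors
    return sum(a != b for a, b in zip(x, y)) / D

a, b, c = rand_hv(), rand_hv(), rand_hv()

# factoring: recovering a from the composed vector bind(a, b)
assert bind(bind(a, b), b) == a

# the bundle stays measurably close to each constituent (~0.25 here)
# while remaining far (~0.5) from unrelated vectors
s = bundle([a, b, c])
```

With D = 10,000 the distances concentrate sharply, which is what makes the representation tolerant of noise and component failure: flipping a few hundred components barely moves a vector relative to these distance gaps.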


Amir Khosrowshahi (Nervana)

Title: Rethinking computation: a processor architecture for machine intelligence

Abstract: Computers are increasingly used to process and understand data, and new approaches are required as current technology reaches fundamental limits. Deep learning is a branch of machine learning that has recently achieved state-of-the-art results in a wide range of domains, including images, speech, and text. Nervana is a startup developing a processor architecture for deep learning. By integrating the necessary distributed computational primitives into a new processor design at a low level, we are able to outperform current technology such as CPUs and GPUs by a large margin in speed, scaling, and efficiency.


Aurel Lazar (Columbia)

Title: NeuroInformation Processing Machines

Abstract: In recent years, substantial progress has been made on formal, rigorous models of neural computing engines and of spike and phase processing machines. Key advances in neural encoding with Time Encoding Machines (TEMs) and in functional identification of neural circuits with Channel Identification Machines (CIMs) will be reviewed, and the duality between TEMs and CIMs will be discussed. We show that, via simple connectivity changes, Spike Processing Machines (SPMs) enable rotations, translations, and scaling of the input visual field. SPMs for mixing/demixing of auditory and visual fields will also be demonstrated. Finally, a motion detection algorithm for natural scenes using local phase information will briefly be discussed.
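
For readers unfamiliar with time encoding, a minimal sketch of an ideal integrate-and-fire encoder is given below. The parameters are illustrative only (see the references for the rigorous treatment): the integral of input plus bias between consecutive spikes equals the firing threshold, so the spike times carry the signal.

```python
import math

def tem_encode(u, bias, threshold, t_end, dt=1e-5):
    # ideal integrate-and-fire time encoder: integrate (u(t) + bias)
    # and emit a spike each time the running integral hits the threshold
    spikes, integral, t = [], 0.0, 0.0
    while t < t_end:
        integral += (u(t) + bias) * dt
        if integral >= threshold:
            spikes.append(t)
            integral -= threshold
        t += dt
    return spikes

# bias must exceed max|u| so the integral is strictly increasing
u = lambda t: 0.5 * math.sin(2.0 * math.pi * 10.0 * t)
spikes = tem_encode(u, bias=1.0, threshold=0.05, t_end=0.2)
```

With these illustrative numbers the bias alone produces a spike roughly every 50 ms, and the signal modulates the inter-spike intervals around that rate; perfect reconstruction from the spike times is the subject of the TEM theory reviewed in the talk.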

References:

A. A. Lazar, N. H. Ukani, and Y. Zhou, A Motion Detection Algorithm Using Local Phase Information, Computational Intelligence and Neuroscience, Volume 2016, January 2016.

A. A. Lazar, Y. B. Slutskiy, and Y. Zhou, Massively Parallel Neural Circuits for Stereoscopic Color Vision: Encoding, Decoding and Identification, Neural Networks, Volume 63, pp. 254-271, March 2015.

A. A. Lazar and Y. B. Slutskiy, Spiking Neural Circuits with Dendritic Stimulus Processors: Encoding, Decoding, and Identification in Reproducing Kernel Hilbert Spaces, Journal of Computational Neuroscience, Volume 38, Number 1, pp. 1-24, February 2015.

A. A. Lazar and Y. Zhou, Reconstructing Natural Visual Scenes from Spike Times, Proceedings of the IEEE, Volume 102, Number 10, pp. 1500-1519, October 2014.

A. A. Lazar and Y. Zhou, Volterra Dendritic Stimulus Processors and Biophysical Spike Generators with Intrinsic Noise Sources, Frontiers in Computational Neuroscience, Volume 8, Number 95, pp. 1-24, Sept. 2014.

A. A. Lazar and Y. B. Slutskiy, Channel Identification Machines for Multidimensional Receptive Fields, Frontiers in Computational Neuroscience, Volume 8, Number 117, September 2014.

A. A. Lazar and Y. B. Slutskiy, Functional Identification of Spike-Processing Neural Circuits, Neural Computation, Volume 26, Number 2, MIT Press, pp. 264-305, February 2014.


Wolfgang Maass (Technische Universitat Graz)

Title: Principles of network optimization through STDP and rewiring

Abstract: I will first discuss briefly how Expectation Maximization (EM) can help us understand the evolution of networks under STDP for some interesting network architectures (Pecevski et al., 2016). I will then discuss how one can understand, from a principled perspective, the combined stochastic dynamics of rewiring (modelling spine dynamics in brain networks) and STDP in networks of spiking neurons (see Kappel et al., 2015 for a first publication on this approach).
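
As background, a generic pair-based STDP rule is sketched below (with illustrative parameters and exponential learning windows; this is the textbook form, not the specific rule analyzed in the papers cited): a synapse is potentiated when the presynaptic spike precedes the postsynaptic one, and depressed otherwise.

```python
import math

def stdp_dw(t_post, t_pre, a_plus=0.01, a_minus=0.012, tau=0.02):
    # pair-based STDP: exponential learning window over
    # the spike-time difference (times in seconds)
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # pre before post: potentiation
    return -a_minus * math.exp(dt / tau)      # post before pre: depression

dw_ltp = stdp_dw(t_post=0.010, t_pre=0.000)  # causal pairing -> positive
dw_ltd = stdp_dw(t_post=0.000, t_pre=0.010)  # anti-causal pairing -> negative
```

The stochastic-dynamics perspective in the talk treats updates of this kind, combined with random rewiring, as sampling from a posterior over network configurations rather than as descent on a single objective.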

References:

D. Kappel, S. Habenschuss, R. Legenstein, and W. Maass. Network plasticity as Bayesian inference. PLOS Computational Biology, 2015

D. Pecevski and W. Maass. Learning probabilistic inference through STDP, 2016 (under review)


Dharmendra Modha (IBM)

Title: TrueNorth: Recent Advances in Technology and Ecosystem

Abstract: IBM has developed an end-to-end technology and ecosystem to create and program energy-efficient, brain-inspired machines that mimic the brain’s abilities for perception, action, and cognition. The ecosystem consists of an index-card-sized board with 1 million neurons; a simulator; a programming language; an integrated programming environment; a library of algorithms and applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement. The ecosystem will be demonstrated on a number of datasets with near state-of-the-art accuracy and unprecedented energy efficiency. Today, 100+ researchers at 30+ universities, government agencies, and companies are exploring the ecosystem. New scale-up and scale-out systems will also be presented.


Bruno Olshausen (UC Berkeley, Redwood Center for Theoretical Neuroscience)

Title: Beyond inspiration: Three lessons from biology on building intelligent machines

Abstract: The only known systems that exhibit truly intelligent, autonomous behavior are biological. If we wish to build machines that emulate such behavior, then it makes sense to learn as much as we can about how these systems work. Inspiration is a good starting point, but real progress will require gaining a more solid understanding of the principles of information processing at work in nervous systems. Here I will focus on three areas of investigation that I believe will be especially fruitful: 1) the study of perception and cognition in tiny nervous systems such as wasps and jumping spiders, 2) developing good computational models of nonlinear signal integration in dendritic trees, and 3) elucidating the computational role of feedback in neural systems.


Alice C. Parker (Electrical Engineering Department, University of Southern California)

Title: Tradeoffs in Neuromorphic Circuit Design: Reliability, Efficiency, Density of Computations, Power, and Biomimicity

Abstract: From the perceptron to today’s neuromorphic circuits, researchers have made a broad range of tradeoffs in implementation, resulting in different styles of neuromorphic computation. Biological neurons are complex systems, performing nonlinear computations over time and space, with redundancy and quasi-redundancy contributing to reliability. Chemical signaling enables and disables mechanisms that move charged particles in and out of the neural cell body, with DNA controlling synapse location and quantity, as well as connections to other neurons. The major choices in neuromorphic circuits include analog vs. digital implementation, extent of redundancy, linear vs. nonlinear neural computations, complexity of each neuron vs. complexity of the entire neural network, and the degree of biomimicity (how much neural complexity is included in the neuromorphic circuits and how closely the voltage levels and timing match biological neurons). The role of astrocytes and neurohormones will be discussed, along with neural proximity to electromagnetic fields produced by neighboring neurons. This presentation discusses these tradeoffs by means of example circuits taken from the speaker’s research and from the literature. Final comments will address ethics and consciousness and their role in neuromorphic systems.