Several researchers from the Redwood Center for Theoretical Neuroscience (RCTN) at UC Berkeley, which is part of the Helen Wills Neuroscience Institute (HWNI), have recently won awards and published a paper together.
HWNI faculty member Fritz Sommer, postdoctoral fellow Connor Bybee, and former postdoctoral fellows Paxon Frady and Denis Kleyko, all current or former members of the RCTN, were selected as finalists for the 2023 Bell Labs Prize for a proposal titled "Next-Generation Computing Leveraging Brain Rhythms, Dendrites, and Spikes." The proposal aims to build an energy-efficient, scalable neuromorphic computing framework. The proposed framework for distributed computing leverages network communication with spikes and rhythms, as well as local computations that go beyond traditional artificial neurons, including spiking and dendritic nonlinearities. Building on earlier models in cognitive neuroscience, the framework provides a computing algebra that makes computing with distributed representations explainable, programmable, and analyzable.
Bybee recently earned his PhD in the Sommer lab and is now a postdoc working with HWNI faculty member Bruno Olshausen, who directs the RCTN and is a professor in the School of Optometry. He recently won a Swartz Postdoctoral Fellowship from the Swartz Foundation, which provides two to three years of support for research in theoretical neuroscience. Frady and Kleyko were postdocs in the Sommer and Olshausen labs.
Bybee, Kleyko, Olshausen, and Sommer, together with collaborators Amir Khosrowshahi (Intel, visiting scholar at the RCTN, and Berkeley PhD alum) and Dmitri Nikonov (Intel), also published a paper in Nature Communications titled "Efficient optimization with higher-order Ising machines" (September 27, 2023). Read about the paper in this summary from the authors:
“Inspired by the ubiquitous observation of rhythmic activity in the brain, the study investigates potential roles of rhythms in efficient parallel computing. Combinatorial optimization is computationally hard, so researchers have long sought to accelerate it with parallel computing. In a so-called Ising network, the constraints of an optimization problem are mapped to interactions between variable nodes; the network dynamics then solve the problem in parallel by relaxing to a low-energy state. Ising machines are hardware realizations of Ising models, some of which are built from oscillators.
Most optimization problems contain constraints that correspond to higher-order interactions (between larger sets of variables), but most hardware is limited to second-order interactions (between pairs of variables). The common strategy is therefore to map a given problem to a larger Ising network that has more nodes than problem variables but contains only second-order interactions. The authors realized that dendritic nonlinearities in naturalistic neurons enable higher-order interactions directly. They show that, compared with second-order oscillator networks, higher-order oscillator networks can be significantly more resource-efficient and provide superior solutions for constraint satisfaction problems.”
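To make the distinction between pairwise and higher-order interactions concrete, here is a minimal sketch (not code from the paper, and independent of any particular hardware) of a higher-order Ising energy function. The `ising_energy` helper and the single third-order coupling are illustrative assumptions: a tuple of two indices is a standard pairwise term, while longer tuples are the higher-order terms that would otherwise require auxiliary spins and a larger second-order network.

```python
import itertools

def ising_energy(spins, interactions):
    """Energy of a (possibly higher-order) Ising model.

    spins: dict mapping variable index -> +1 or -1
    interactions: dict mapping a tuple of variable indices -> coupling weight.
        A length-2 tuple is an ordinary pairwise term; longer tuples are
        higher-order terms acting on larger sets of variables at once.
    """
    energy = 0.0
    for indices, weight in interactions.items():
        term = weight
        for i in indices:
            term *= spins[i]
        energy -= term  # energy is low when each product aligns with its weight
    return energy

# Hypothetical example: one third-order term J * s0 * s1 * s2 with J = 1.
# Expressing this with only pairwise couplings would require an auxiliary
# spin variable, i.e. a larger network -- the overhead that a natively
# higher-order machine avoids.
interactions = {(0, 1, 2): 1.0}

# Brute-force search over all 2^3 spin assignments for a ground state.
best = min(
    (dict(zip(range(3), s)) for s in itertools.product([-1, 1], repeat=3)),
    key=lambda s: ising_energy(s, interactions),
)
```

Any assignment whose spin product is +1 (for example, all spins up) reaches the minimum energy of -1; the relaxation dynamics of an oscillator-based Ising machine perform an analogous minimization in parallel rather than by enumeration.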