
Neuromorphic computing hub at University of Texas at San Antonio to be largest in U.S.

The Neuromorphic Commons, or THOR, project aims to give researchers from many disciplines better access to computing architectures that take inspiration from the human brain.

The University of Texas at San Antonio on Tuesday announced it’s received $4 million from the National Science Foundation to establish a specialized computing system that researchers said will be larger than any of its kind in the United States.

Researchers from the university’s Matrix AI Consortium for Human Well-Being will use the funding to develop The Neuromorphic Commons, or THOR, project. Neuromorphic computing is an approach to designing computers that mimic or take inspiration from the human brain’s capacity to solve complex problems on the fly using relatively little energy.
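To make the idea concrete, neuromorphic hardware typically implements spiking neuron models rather than conventional clocked logic. The sketch below is a minimal leaky integrate-and-fire neuron in plain Python; it is an illustration of the general approach only, not anything from the THOR project, and all names and parameter values are hypothetical.

# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit many
# neuromorphic chips implement in hardware. Illustrative sketch only;
# parameter values are arbitrary, not drawn from any real system.

def lif_neuron(input_currents, leak=0.9, threshold=1.0):
    """Return a spike train: 1 when the membrane potential crosses threshold, else 0."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = leak * potential + current  # integrate input, with leak
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after spiking
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    inputs = [0.0, 0.3, 0.4, 0.5, 0.0, 0.9, 0.2]
    print(lif_neuron(inputs))  # [0, 0, 0, 1, 0, 0, 1]

Because a network of such neurons only does work when spikes arrive, computation is event-driven rather than clock-driven, which is one source of the energy efficiency described above.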

Dhireesha Kudithipudi, an electrical engineering and computer science professor at the university who’s serving as the project’s principal investigator, told EdScoop the goal of the THOR project is to provide broad access to large-scale neuromorphic systems to U.S. researchers from a variety of disciplines, including computational neuroscience, life sciences, artificial intelligence, machine learning and physics.

“The idea is that by providing access to an infrastructure like that, we hope that it enables researchers to have a richer understanding of the computational models on neuromorphic or neuro-inspired machine-learning or AI algorithms,” Kudithipudi said. “Better understanding of neuromorphic hardware also provides them a framework to benchmark their models across different platforms.”


Kudithipudi, who founded the Matrix lab and whose work has centered on energy-efficient computing, explained that neuromorphic systems are well-suited to applications that demand rapid reaction times or where hardware is constrained by size, weight or power supply. She pointed out that the human brain, easily the most sophisticated object in the known universe, runs on about 20 watts of power, just enough to power a lightbulb.

“A lot of experiments in this domain have been limited to smaller datasets or smaller model sizes,” she said. “By providing access to this large infrastructure, people can look at scale how their models or how their systems can work or what energy benefit they can [get] compared to the machine-learning systems.”

She said many prominent computer scientists and engineers have gestured toward the human brain as the next frontier for computing. Among these was John von Neumann, a mathematician and scientist who developed many of the foundational ideas used in game theory and modern computing.

“We started looking at a lot of the literature, going back to the 1800s, and … von Neumann himself, who’s considered the father of computing, he’s an amazing collaborator. He knew how to bring people together and work across disciplines. In his last memoir he said if you really want to build these robust systems, you want to look at [the] brain as a source of inspiration. And this [idea] has come from so many researchers, across decades.”
