Jean Anne Incorvia literally works at the interface between software and hardware. She is at once a physicist, a materials scientist and an electrical engineer. An assistant professor in the Cockrell School’s Department of Electrical and Computer Engineering, she focuses on developing the computer systems of the future – computers powered and operated through nanotechnology, quantum mechanics and, most recently, the emerging field of neuromorphic computing.

[Photo: Jean Anne Incorvia smiling at her desk]

Incorvia’s innovation in neuromorphic computer research helped her earn a National Science Foundation Faculty Early Career Development Award, a prestigious honor bestowed on junior faculty that includes up to five years of research funding. And she is leading a neuromorphic computing research team with Sandia National Laboratories’ Laboratory Directed Research & Development program.

We sat down with Incorvia to learn more about what exactly neuromorphic computing is and why she and other researchers are fascinated by mimicking the human brain to develop new computing technologies.

What is neuromorphic computing?

Neuromorphic computing takes inspiration from the human brain and how it processes information. It is also referred to as brain-inspired computing. In the brain, neurons emit electrical signals based on input signals, and synapses (junctions between nerve cells, where electrical impulses are transmitted from one cell to another) provide connectivity between neurons. External stimuli cause changes to the synapses, modifying the connectivity between neurons. In neuromorphic computing, we want to design computers that operate in a similar way, using artificial neurons and synapses.
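The mechanism Incorvia describes can be sketched in a few lines of code. The following is a minimal, illustrative model (all class names and parameter values are hypothetical, not drawn from her work): a synapse stores an adjustable connection weight, and a simple leaky neuron accumulates weighted input signals until it crosses a threshold and "fires."

```python
class Synapse:
    """Connects two neurons with an adjustable weight (the connection's 'memory')."""
    def __init__(self, weight=0.5):
        self.weight = weight

    def strengthen(self, amount=0.1):
        # External stimuli modify the synapse, changing connectivity --
        # the analogue of learning in the brain.
        self.weight = min(1.0, self.weight + amount)


class LeakyNeuron:
    """Integrates weighted inputs over time and fires past a threshold."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak  # fraction of the potential that survives each step

    def step(self, inputs):
        # inputs: list of (signal, synapse) pairs from upstream neurons
        self.potential *= self.leak
        self.potential += sum(sig * syn.weight for sig, syn in inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return 1              # emit a spike
        return 0


# A constant input signal through one synapse: the neuron charges up,
# fires, resets, and begins charging again.
syn = Synapse(weight=0.4)
neuron = LeakyNeuron(threshold=1.0)
spikes = [neuron.step([(1.0, syn)]) for _ in range(5)]
print(spikes)  # -> [0, 0, 1, 0, 0]
```

Note that the synapse's weight is both stored state and part of the computation, which is the property the rest of this interview returns to.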

We build computers because they can do things the human brain cannot. So why build a computer that mimics it?

Modern computers are very, very good at certain tasks. For example, a computer can calculate 329,484,853 + 792,039,254 a lot faster than a human. But there are some tasks where the human brain wins. For example, the brain can recognize a face or voice using a million times less power than a modern supercomputer. How exactly the brain works is still an active area of research, but we do know that its efficient computation of these types of tasks comes from both how the building blocks of the brain (neurons and synapses) behave, and how they are connected together, in a way much different from the computers of today.

One bottleneck we see when modern computers do data-intensive tasks is called the Memory Wall. In traditional computers, the processing and memory are physically located in different parts of the computer. Access to memory ends up dominating the energy required to do a task. For example, in genome sequencing, 96% of the computing time is due to memory access.

Instead, the same objects in the brain, namely neurons and their synaptic connections, do both processing and memory (storage of information). Thus, the starting point of neuromorphic computing is to densely co-locate memory and logic so that computing can be done in a parallel rather than a serial manner. Restructuring our computers in this way has already led to a revolution in computing that is seeing many applications in machine learning and AI.
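The idea of co-located memory and logic can be sketched with a simple "crossbar" of synaptic weights, a common abstraction in neuromorphic hardware (the values below are illustrative, not from any specific device). The same array that stores the weights also performs the computation, and every output is produced in one parallel step rather than by serially fetching each weight from a separate memory.

```python
import numpy as np

# Each row holds the synaptic weights feeding one output neuron.
# The array is the memory; the multiply-accumulate is the processing.
weights = np.array([[0.2, 0.8, 0.0],
                    [0.5, 0.1, 0.9]])

# Signals arriving on the three input lines.
inputs = np.array([1.0, 0.0, 1.0])

# One parallel step: every output neuron receives its weighted sum of all
# inputs at once, with no round trip to a separate memory unit.
outputs = weights @ inputs
print(outputs)  # -> [0.2 1.4]
```

In a physical crossbar the multiply-accumulate happens in the analog domain, where the weights sit; the NumPy expression here only mimics that parallelism on a conventional machine.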

What is holding existing computer technology back from being able to more closely mimic the brain?

Recent advances in computing have mainly come through restructuring components, but we are still using silicon as the building block of these components. While silicon is still the best option for traditional computing, it may not be the best material for neuromorphic computing. Neuroscience suggests the way synapses and neurons connect together is only one part of what makes them so efficient. Their physical properties, and the interactions among those properties, are also key to computing in the brain.

Your team in the Integrated Nano Computing Lab at UT Austin recently made an important discovery related to this challenge. Tell us more.

My research group is using nanomagnetic devices instead of silicon to construct artificial neurons and synapses. Magnetic materials and devices have many properties similar to the brain that make them attractive candidates for building neuromorphic computers. We have found a new way to take advantage of some of those physical properties to better mimic synapses, which are crucial to improving energy efficiency in the computers of the future.

You work on technology that could take decades before we see it used in our everyday lives. Why?

This is an exciting time in the computing industry, as we move beyond silicon to explore how new physics and new materials could lead to the next revolution in computing, both in classical computing and quantum computing. While this exact technology won’t be ready tomorrow, I am excited to be a part of this movement to shape what computers look like in 50 years.

I love pushing from the bottom upward to bring physics concepts into engineering and circuits. It is a lot of fun being at the intersection of physics, materials science and electrical engineering. A new material behavior is discovered in a lab, but it requires someone to make a device out of it, then to put those devices into a circuit, then to understand how that circuit would behave in a system, how it would be powered and so on. Only then can we have an idea of how useful that material property could be. This requires working with collaborators across the stack, from physicists to computer architects.

The great thing about nanoelectronics research is it requires both technical expertise and creativity. And the results can impact society in so many positive ways.