Researchers at the University of Michigan have developed a reservoir computing system, a type of neural network that offers greater capacity and requires far less training time than comparable networks. The system could one day power sophisticated predictions, such as anticipating upcoming words in spoken dialogue.
Reservoir computing systems were formerly built with bulkier optical components, but the U-M team constructed theirs from space-efficient memristors, which integrate easily with conventional silicon-based electronics. Memristors are unique electrical components that combine data storage and logic operations on a single chip, in contrast to the classic separation between RAM and CPUs.
Neural networks are composed of nodes and the connections between them, modeled after a brain’s neurons and synapses. Traditionally, a network must undergo a time-consuming supervised learning process in which it is fed a large set of questions along with their solutions. This iterative process adjusts the weights of the “synapses” up or down until the network produces the right answers with minimal error.
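That weight-adjustment loop can be sketched in miniature. The snippet below is a toy illustration only, not the team's code: it trains a single linear "neuron" by gradient descent on question/answer pairs, where the target function, learning rate, and epoch count are all assumptions chosen for the example.

```python
import random

# Toy supervised training loop for one linear node: y = w0*x0 + w1*x1.
# Illustrative only -- real networks have many layers and many nodes.

# Training set: inputs ("questions") and target outputs ("solutions").
# Hypothetical target function for this sketch: y = 2*x0 - 3*x1.
data = [((x0, x1), 2 * x0 - 3 * x1)
        for x0 in range(-2, 3) for x1 in range(-2, 3)]

def train(data, lr=0.05, epochs=200):
    # Start from random "synapse" weights.
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    for _ in range(epochs):
        for (x0, x1), target in data:
            pred = w[0] * x0 + w[1] * x1
            err = pred - target
            # Gradient-descent step: nudge each weight to shrink the error.
            w[0] -= lr * err * x0
            w[1] -= lr * err * x1
    return w

random.seed(0)
w = train(data)
print([round(wi, 2) for wi in w])  # approaches [2.0, -3.0]
```

Even for this two-weight toy, thousands of small corrections are needed; scaling the same idea to millions of weights is why full training can take days or months.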
“A lot of times, it takes days or months to train a network,” said Wei Lu, leader of the research team and U-M professor of electrical engineering and computer science. “It is very expensive.”
With a reservoir computing system, however, the training process is simplified because its most significant component, the reservoir, does not need to be trained.
“The beauty of reservoir computing is that while we design it, we don’t have to train it,” said Lu.
The reservoir condenses the significant time-related characteristics of the input data and passes them to a second network in a compact form. That second network requires only simple training, similar to a basic neural network.
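This division of labor can be sketched in software, though the U-M device realizes it physically in memristors. The example below is an echo-state-style simulation under assumed sizes, weight ranges, and a made-up sine-wave task: the reservoir's random weights are fixed and never trained, and only the linear readout is fitted.

```python
import math
import random

random.seed(1)
N = 30  # reservoir size (arbitrary choice for this sketch)

# Fixed random weights -- the reservoir itself is never trained.
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W_res = [[random.uniform(-0.2, 0.2) for _ in range(N)] for _ in range(N)]

def run_reservoir(inputs):
    """Drive the reservoir with a signal; return its state at each step."""
    x = [0.0] * N
    states = []
    for u in inputs:
        # Each node mixes the input with the previous reservoir state,
        # so the state carries a memory of the signal's recent history.
        x = [math.tanh(W_in[i] * u
                       + sum(W_res[i][j] * x[j] for j in range(N)))
             for i in range(N)]
        states.append(x)
    return states

# Hypothetical time-series task: predict the next value of a sine wave.
signal = [math.sin(0.3 * t) for t in range(300)]
states = run_reservoir(signal[:-1])
targets = signal[1:]

# Only the linear readout is trained (simple gradient steps here).
w_out = [0.0] * N
for _ in range(200):
    for s, y in zip(states, targets):
        pred = sum(wi * si for wi, si in zip(w_out, s))
        err = pred - y
        w_out = [wi - 0.02 * err * si for wi, si in zip(w_out, s)]

pred = sum(wi * si for wi, si in zip(w_out, states[-1]))
print(round(pred, 2), round(targets[-1], 2))
```

Because training touches only the readout's single layer of weights, it is far cheaper than training every connection in a conventional deep network, which is the speedup the article describes.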
The team validated the system by tasking it with recognizing handwritten numerals. With a 91.1 percent accuracy rate, the reservoir outperformed a standard one-layer neural network, which scored only 88 percent, and it did so with just 88 memristors, compared to the thousands of nodes in the rival network.
The team also demonstrated the system’s proficiency at processing time-varying data by using it to model a complex function that depends on many previous results, a task it accomplished with very little error.
In the future, Lu envisions using the system for speech recognition.
“We can make predictions on natural spoken language, so you don’t even have to say the full word,” said Lu. “We could actually predict what you plan to say next.”
In addition, it could be used to clean up noisy signals, such as transmissions from distant radio stations filled with static.
“It could also predict and generate an output signal even if the input stopped,” said Lu.
The work received funding from DARPA’s $6.9 million “Sparse Adaptive Local Learning for Sensing and Analytics” project, which has a goal of creating a computer chip based on self-organizing, adaptive neural networks.