Researchers from the University of Massachusetts Amherst and Baylor College of Medicine were inspired by the brain's ability to replay memories to overcome a long-standing obstacle in the development of artificial intelligence (AI) algorithms.
That obstacle, known as catastrophic forgetting, has long held back AI algorithms: when they learn new information, they tend to overwrite what they already know, unless they store massive amounts of data. Replay offers a way to prevent catastrophic forgetting while an algorithm is learning.
Humans, by contrast, accumulate information throughout their lives, learning new things while building on earlier lessons. The brain protects those memories against forgetting by replaying the neuronal activity patterns associated with them. Until now, AI systems have not been able to do this.
One earlier solution was to have algorithms store previously learned material and revisit it during future training. This does prevent catastrophic forgetting, but it is extremely inefficient: the amount of data that must be stored quickly becomes unmanageable.
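Conceptually, this "store and revisit" strategy looks something like the following sketch. It is a hypothetical PyTorch illustration, not the researchers' code; the model, buffer, and training loop are placeholders meant only to show why the stored data grows with every task.

```python
# Hypothetical sketch of naive rehearsal: raw examples from earlier tasks are
# kept in a buffer and mixed into every later training batch. The buffer (and
# therefore memory use) grows with every task the model learns.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
replay_buffer = []  # stored (input, label) pairs from all previous tasks

def train_task(task_loader):
    for x, y in task_loader:
        if replay_buffer:
            # Mix current-task data with stored examples from earlier tasks.
            idx = torch.randint(len(replay_buffer), (x.size(0),))
            old_x, old_y = zip(*[replay_buffer[i] for i in idx])
            x = torch.cat([x, torch.stack(old_x)])
            y = torch.cat([y, torch.stack(old_y)])
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Store the current task's data for future rehearsal; this is the part
    # that becomes unmanageable as tasks accumulate.
    for x, y in task_loader:
        replay_buffer.extend(zip(x, y))
```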
The team realized that replay in the brain does not depend on stored data. Rather than replaying exact recordings of past experience, the brain regenerates high-level representations of what it has seen before.
Building on this insight, the researchers developed abstract generative brain replay for AI algorithms. They showed that replaying just a few generated representations enables an algorithm to retain old memories while forming new ones. The approach prevents catastrophic forgetting and offers a new, streamlined path for continual learning; it also allows a system to generalize what it learns from one situation to another.
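The sketch below illustrates the general idea of generative replay in simplified form. It is a hypothetical PyTorch illustration, not the published implementation: training of the generator itself and the use of internal rather than raw representations are omitted for brevity, and `prev_generator.sample()` is an assumed interface standing in for whatever generative model is used.

```python
# Hypothetical sketch of generative replay: instead of storing raw data, a
# generator reproduces (representations of) past inputs. While learning a new
# task, the network rehearses on generated samples that the previous version
# of the model labels itself, so no old data needs to be kept.
import copy
import torch
import torch.nn.functional as F

def train_task_with_generative_replay(model, generator, optimizer, task_loader,
                                      prev_model=None, prev_generator=None):
    for x, y in task_loader:
        loss = F.cross_entropy(model(x), y)  # loss on the new task's data
        if prev_generator is not None:
            # "Replay": sample stand-ins for past experience from the frozen
            # generator and let the old model provide soft targets for them.
            with torch.no_grad():
                replay_x = prev_generator.sample(x.size(0))  # assumed API
                replay_targets = F.softmax(prev_model(replay_x), dim=1)
            replay_loss = F.kl_div(F.log_softmax(model(replay_x), dim=1),
                                   replay_targets, reduction="batchmean")
            loss = loss + replay_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Freeze copies of the model and generator to drive replay on the next task.
    # (Training the generator alongside the model is omitted in this sketch.)
    return copy.deepcopy(model), copy.deepcopy(generator)
```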
The method achieves strong performance on continual-learning benchmarks without storing any data.
A paper on the new method was published in Nature Communications.