A Robust Growing Memory Network for Lifelong Learning of Intelligent Agents

2022 International Joint Conference on Neural Networks (IJCNN)

Abstract
The general success criterion for an artificial intelligence system is its ability to mimic human brain learning. Throughout a lifetime, the human brain is capable of continual learning: acquired information is retained, augmented, fine-tuned, and reused to complete new tasks. Current machine learning models perform well when given precisely structured, balanced, and homogenized data, but the performance of most of these models degrades when they are presented with several tasks whose data arrive incrementally. Inspired by the Complementary Learning Systems (CLS) theory in neuroscience, episodic-semantic memory-based frameworks have received much attention and research. Conventional methods, however, need to batch-normalize the data and are sensitive to vigilance hyperparameters across different datasets. This paper proposes a Robust Growing Memory Network (RGMN) that continuously learns from incoming data without normalization and is largely insensitive to the vigilance hyperparameter. The RGMN is a self-organizing topological network that models human episodic memory, and its network size can grow and shrink in response to the data. A long-term memory buffer retains the largest and smallest data values, which are used for learning. To evaluate the proposed method, we conducted comparative experiments on real-world datasets; the results show that it outperforms existing memory-based baseline frameworks in terms of accuracy.
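To make the abstract's two key ideas concrete, here is a minimal sketch of a growing self-organizing memory that tracks per-feature minimum and maximum values as its long-term buffer, so incoming samples can be rescaled without batch normalization. The class name, distance threshold, growth rule, and learning rate are illustrative assumptions modeled on generic growing-network methods, not the paper's actual RGMN algorithm.

```python
import numpy as np

class GrowingMemorySketch:
    """Illustrative sketch (not the paper's RGMN): a set of prototype
    nodes that grows when an input is novel, using running min/max
    values in place of batch normalization."""

    def __init__(self, dim, new_node_dist=0.3, lr=0.1):
        self.nodes = np.empty((0, dim))   # prototype vectors (episodic memory)
        self.lo = np.full(dim, np.inf)    # smallest value seen per feature
        self.hi = np.full(dim, -np.inf)   # largest value seen per feature
        self.new_node_dist = new_node_dist  # assumed novelty threshold
        self.lr = lr                        # assumed adaptation rate

    def _scale(self, x):
        # Rescale with the stored extremes instead of batch normalization.
        span = np.where(self.hi > self.lo, self.hi - self.lo, 1.0)
        return (x - self.lo) / span

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        # Update the long-term min/max buffer before scaling.
        self.lo = np.minimum(self.lo, x)
        self.hi = np.maximum(self.hi, x)
        z = self._scale(x)
        if len(self.nodes) == 0:
            self.nodes = z[None, :]
            return
        d = np.linalg.norm(self.nodes - z, axis=1)
        best = int(np.argmin(d))
        if d[best] > self.new_node_dist:
            # Grow: the input is far from every stored prototype.
            self.nodes = np.vstack([self.nodes, z])
        else:
            # Adapt: move the winning prototype toward the input.
            self.nodes[best] += self.lr * (z - self.nodes[best])
```

Feeding two well-separated clusters produces one node per cluster, while nearby samples only refine an existing node, so the network size tracks the structure of the data rather than a fixed hyperparameter schedule.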
Keywords
lifelong learning,self-organizing,episodic memory,incremental learning,topological map