Topology-aware Embedding Memory for Continual Learning on Expanding Networks
arXiv (2024)
Abstract
Memory-replay-based techniques have shown great success in continual
learning with incrementally accumulated Euclidean data. Directly applying them
to continually expanding networks, however, leads to a potential memory
explosion problem, because representative nodes must be buffered together with
their associated topological neighborhood structures. To this end, we
systematically analyze the key challenges behind the memory explosion problem
and present a general framework, Parameter Decoupled Graph Neural Networks
(PDGNNs) with Topology-aware Embedding Memory (TEM), to tackle it. The proposed
framework not only reduces the memory space complexity from 𝒪(nd^L)
to 𝒪(n), where n is the memory budget, d the average node degree, and L the
radius of the GNN receptive field, but also fully utilizes the topological
information for memory replay. Specifically, PDGNNs decouple the trainable
parameters from the computation ego-subnetwork via Topology-aware Embeddings
(TEs), which compress each ego-subnetwork into a compact vector (the TE) to
reduce memory consumption. Building on this framework, we discover a unique
pseudo-training effect in continual learning on expanding networks, which
motivates a novel coverage-maximization sampling strategy that enhances
performance under a tight memory budget. Thorough empirical studies
demonstrate that, by tackling the memory explosion problem and incorporating
topological information into memory replay, PDGNNs with TEM significantly
outperform state-of-the-art techniques, especially in the challenging
class-incremental setting.
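Conceptually, the memory saving comes from the fact that propagation in a parameter-decoupled GNN involves no trainable weights, so each node's L-hop computation ego-subnetwork can be pre-compressed into one fixed vector before training and buffered as such. The sketch below illustrates this with a hypothetical parameter-free mean-aggregation TE function; the function name, toy graph, and sampled node set are all illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of buffering Topology-aware
# Embeddings instead of ego-subnetworks. All names are illustrative.

import numpy as np

def topology_aware_embedding(x, adj, node, num_hops):
    """Compress `node`'s L-hop computation ego-subnetwork into a single
    vector by parameter-free mean aggregation with self-loops; the result
    depends only on the node's L-hop neighborhood."""
    h = dict(x)  # hop-0 features per node
    for _ in range(num_hops):
        h = {u: np.mean([h[v] for v in adj.get(u, [])] + [h[u]], axis=0)
             for u in h}
    return h[node]

# Toy snapshot of an expanding network: node features and adjacency lists.
x = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]),
     2: np.array([1.0, 1.0]), 3: np.array([0.5, 0.5])}
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}

# The replay buffer keeps one fixed-size vector per sampled node, rather
# than each node's L-hop subnetwork, which grows as O(n d^L).
sampled_nodes = [0, 3]  # e.g., chosen by a coverage-maximization sampler
buffer = {u: topology_aware_embedding(x, adj, u, num_hops=2)
          for u in sampled_nodes}
for u, te in buffer.items():
    print(f"node {u}: TE = {te}")
```

Because each buffered entry is a single fixed-dimensional vector, the buffer grows as 𝒪(n) in the number of sampled nodes, independent of the node degree d and the receptive-field radius L.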