Adaptive control of recurrent neural networks using conceptors
CoRR (2024)
Abstract
Recurrent Neural Networks excel at predicting and generating complex
high-dimensional temporal patterns. Due to their inherent nonlinear dynamics
and memory, they can learn unbounded temporal dependencies from data. In a
Machine Learning setting, the network's parameters are adapted during a
training phase to match the requirements of a given task or problem,
increasing its computational capabilities. After training, the network
parameters are kept fixed to exploit the learned computations. The static
parameters thereby render the network non-adaptive to changing conditions,
such as external or internal perturbations. In this manuscript, we
demonstrate how keeping parts of the network adaptive even after training
enhances its functionality and robustness. Here, we utilize the conceptor
framework and conceive an adaptive control loop that continuously analyzes
the network's behavior and adjusts its time-varying internal representation
to follow a desired target.
We demonstrate how the added adaptivity of the network supports the
computational functionality in three distinct tasks: interpolation of temporal
patterns, stabilization against partial network degradation, and robustness
against input distortion. Our results highlight the potential of adaptive
networks in machine learning beyond training, enabling them to not only learn
complex patterns but also dynamically adjust to changing environments,
ultimately broadening their applicability.
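The conceptor framework referenced in the abstract characterizes the state subspace a driven recurrent network occupies. A minimal sketch, assuming a standard tanh echo-state-style reservoir and Jaeger's conceptor definition C = R(R + a^-2 I)^-1, where R is the correlation matrix of the collected states and a the aperture; the reservoir setup and all parameter values below are illustrative, not the paper's exact configuration:

```python
import numpy as np

# Illustrative reservoir: random recurrent weights, sinusoidal drive.
rng = np.random.default_rng(0)
N, T = 50, 500                                 # reservoir size, time steps
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))    # recurrent weights (assumed)
w_in = rng.normal(0, 0.5, N)                   # input weights (assumed)

# Drive the reservoir and collect its states.
x = np.zeros(N)
states = []
for t in range(T):
    u = np.sin(2 * np.pi * t / 20)             # driving pattern (illustrative)
    x = np.tanh(W @ x + w_in * u)
    states.append(x.copy())
X = np.array(states).T                         # shape (N, T)

# Conceptor: soft projection onto the state subspace excited by the pattern.
R = X @ X.T / T                                # state correlation matrix
aperture = 10.0
C = R @ np.linalg.inv(R + aperture**-2 * np.eye(N))

# Singular values of C lie in [0, 1): directions strongly excited by the
# pattern map close to 1, unused directions close to 0.
s = np.linalg.svd(C, compute_uv=False)
print(s.max() < 1.0, s.min() >= 0.0)
```

In an adaptive control loop as described in the abstract, such a conceptor would be recomputed or adapted online from the running state statistics, and the discrepancy to a stored target conceptor used as the control signal.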