GNN Model for Time-Varying Matrix Inversion With Robust Finite-Time Convergence

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2024)

Cited 14 | Viewed 30
Abstract
As a type of recurrent neural network (RNN) modeled as a dynamic system, the gradient neural network (GNN) is recognized as an effective method for static matrix inversion with exponential convergence. However, for time-varying matrix inversion, most traditional GNNs can only track the corresponding time-varying solution with a residual error, and their performance degrades further in the presence of noise. Zeroing neural networks (ZNNs) currently play the dominant role in time-varying matrix inversion, but ZNN models are more complex than GNN models, require an explicit formula for the time derivative of the matrix, and intrinsically cannot avoid the inversion operation in their digital implementations. In this article, we propose a unified GNN model that handles both static and time-varying matrix inversion with finite-time convergence and a simpler structure. Our theoretical analysis shows that, under mild conditions, the proposed model achieves finite-time convergence for time-varying matrix inversion regardless of the presence of bounded noise. Simulation comparisons with existing GNN models and ZNN models dedicated to time-varying matrix inversion demonstrate the advantages of the proposed GNN model in terms of convergence speed and robustness to noise.
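For context on the GNN approach the abstract builds on, the following minimal Python sketch illustrates the classical GNN gradient flow for *static* matrix inversion (not the paper's proposed finite-time model): minimizing E(X) = ||AX − I||²_F / 2 yields the dynamics dX/dt = −γ Aᵀ(AX − I), integrated here with forward Euler. The matrix `A`, gain `gamma`, and step size are illustrative assumptions.

```python
import numpy as np

def gnn_static_inverse(A, gamma=1.0, dt=0.05, steps=2000):
    """Classical GNN gradient flow for static matrix inversion (sketch).

    Integrates dX/dt = -gamma * A.T @ (A @ X - I) with forward Euler.
    The residual R = A @ X - I contracts by (I - dt*gamma * A @ A.T) per
    step, so dt*gamma must be below 2 / lambda_max(A @ A.T) for stability.
    """
    n = A.shape[0]
    I = np.eye(n)
    X = np.zeros_like(A, dtype=float)  # arbitrary initial state
    for _ in range(steps):
        X = X - dt * gamma * A.T @ (A @ X - I)  # gradient descent step
    return X

# Example with a well-conditioned 2x2 matrix:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
X = gnn_static_inverse(A)  # X approximates inv(A)
```

This exponential (but only asymptotic) convergence for the static case is the baseline the paper improves on: its unified model adds finite-time convergence, coverage of the time-varying case, and robustness to bounded noise.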
Keywords
Convergence, Mathematical models, Computational modeling, Adaptation models, Biological neural networks, Time-varying systems, Recurrent neural networks, Finite-time convergence, gradient neural network (GNN), robustness, time-varying matrix inversion