How Neural Architectures Affect Deep Learning for Communication Networks?

ICC 2022 - IEEE International Conference on Communications (2022)

Abstract
In recent years, there has been a surge in applying deep learning to challenging design problems in communication networks. Early attempts adopted neural architectures inherited from applications such as computer vision, which suffer from poor generalization, poor scalability, and a lack of interpretability. To tackle these issues, domain knowledge has been integrated into neural architecture design, achieving near-optimal performance in large-scale networks and generalizing well across different system settings. This paper endeavors to theoretically validate the importance and effects of neural architectures when applying deep learning to communication network design. We prove that by exploiting permutation invariance, a common property in communication networks, graph neural networks (GNNs) converge faster and generalize better than fully connected multi-layer perceptrons (MLPs), especially when the number of nodes (e.g., users, base stations, or antennas) is large. Specifically, we prove that under common assumptions, for a communication network with n nodes, GNNs converge O(n log n) times faster and achieve a generalization error that is O(n) times lower than that of MLPs.
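The structural property driving the abstract's claim is permutation equivariance: because a GNN layer shares its weights across all nodes, relabeling the nodes only relabels the outputs. The NumPy sketch below is illustrative and not from the paper; the specific layer form, feature dimensions, and weight matrices are assumptions chosen for demonstration. It builds a single message-passing layer and numerically verifies the equivariance property, which an MLP acting on the flattened input does not have.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 5, 8                        # n nodes (e.g., users), d-dim node features
X = rng.normal(size=(n, d))        # node features (hypothetical)
A = rng.normal(size=(n, n))        # channel-gain / adjacency matrix (hypothetical)
np.fill_diagonal(A, 0.0)

# The same weight matrices are applied at every node; this weight sharing
# is what makes the layer permutation-equivariant.
W_self = rng.normal(size=(d, d))
W_neigh = rng.normal(size=(d, d))

def gnn_layer(A, X):
    """One message-passing layer: h_i = ReLU(W_self x_i + W_neigh sum_j A_ij x_j)."""
    return np.maximum(X @ W_self + A @ X @ W_neigh, 0.0)

# Permute the node ordering with a random permutation matrix P.
perm = rng.permutation(n)
P = np.eye(n)[perm]

out = gnn_layer(A, X)
out_perm = gnn_layer(P @ A @ P.T, P @ X)

# Equivariance: permuting the inputs permutes the outputs the same way.
print("GNN layer is permutation-equivariant:",
      np.allclose(out_perm, P @ out))   # -> True
```

By contrast, an MLP applied to the flattened pair (A, X) has a separate weight for every input coordinate, so it must learn each of the n! node orderings from data, which is one intuition behind the paper's O(n log n) convergence and O(n) generalization gaps.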
Keywords
Communication networks, deep learning, graph neural networks, neural tangent kernel