The Fixed-Point Implementations for Recurrent Neural Networks

2020 International Conference on System Science and Engineering (ICSSE)(2020)

Abstract
In this paper, the learning efficiency of single-layer and multiple-layer locally recurrent neural networks (RNNs) is investigated. The RNN structure uses piecewise linear activation functions, and infinite impulse response (IIR) digital filters perform the signal recursion. In the RNN implementation, pole-L2 sensitivity minimization is performed. The weight of every neuron is adjusted using the back-propagation (BP) learning algorithm. Simulation results show that the multilayer RNN may achieve better learning performance, and that the RNN with the optimal IIR filter implementation may learn better than the canonical-form realization.
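The abstract describes neurons whose recursion is carried out by an IIR digital filter followed by a piecewise linear activation. As a rough illustration of that structure only, here is a minimal Python sketch of a single locally recurrent neuron with a first-order IIR section and a saturating piecewise linear activation. The filter order, coefficient values, and class interface are all illustrative assumptions; the paper's actual filter realizations, fixed-point quantization, and BP weight updates are not reproduced here.

```python
import numpy as np

def piecewise_linear(x, limit=1.0):
    """Saturating piecewise linear activation: identity on [-limit, limit],
    clipped to +/-limit outside that range."""
    return np.clip(x, -limit, limit)

class LocallyRecurrentNeuron:
    """Illustrative locally recurrent neuron (assumed structure, not the
    paper's exact realization): the weighted input passes through a
    first-order IIR section, then a piecewise linear activation."""

    def __init__(self, w, b0, a1):
        self.w = w        # input weight (would be trained by BP)
        self.b0 = b0      # IIR feed-forward coefficient
        self.a1 = a1      # IIR feedback coefficient (|a1| < 1 for stability)
        self.s_prev = 0.0  # filter state

    def step(self, x):
        v = self.w * x
        # First-order IIR recursion: s[n] = b0 * v[n] + a1 * s[n-1]
        s = self.b0 * v + self.a1 * self.s_prev
        self.s_prev = s
        return piecewise_linear(s)
```

Driving the neuron with a constant input, the filter state converges geometrically toward b0*w/(1 - a1), and the activation clips the output once it leaves the linear region.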
Keywords
pole-L2 sensitivity, fixed-point, multiple-layer recurrent neural network