Multilevel neuronal architecture to resolve classification problems with large training sets: Parallelization of the training process

Journal of Computational Science (2016)

Abstract
The value of radial basis function (RBF) networks has been fully demonstrated, and their application across a wide range of scientific fields is undisputed. A fundamental aspect of this tool is the training process, which determines both the efficiency (success or "hit rate" in the subsequent classification) and the overall performance (runtime), since the RBF training phase is the most time-consuming. There is abundant literature on improving these aspects, in which the proposed training techniques are classified either as iterative techniques, with very short execution times for the training process, or as traditional exact techniques, which excel in their high classification accuracy. Our field of study requires the smallest possible classification error, and for this reason our research opts for exact techniques, while we also work to reduce the high latencies of the training process. In a previous study, we proposed a pseudo-exact technique that improved the training process by an average of 99.1638177% using an RBF-SOM architecture. In the present study we exploit one characteristic of this architecture, namely the possibility of parallelizing the training process. Accordingly, our article proposes an RBF-SOM structure which, thanks to CUDA, parallelizes the training process. We denote this as the CUDA-RBF-SOM architecture.
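
The abstract does not include code, so the following is only a minimal sketch of the kind of CUDA parallelism it alludes to: evaluating the RBF hidden layer (the dominant cost of training) with one GPU thread per (sample, neuron) pair. The kernel name, the Gaussian basis with a shared width `sigma`, the problem sizes, and the grid/block layout are all illustrative assumptions, not the authors' implementation.

```cuda
// Sketch: GPU evaluation of RBF hidden-layer activations.
// Assumed layout: row-major arrays, one thread per (sample, neuron) pair.
#include <cuda_runtime.h>
#include <math.h>
#include <stdio.h>

// Each thread computes the Gaussian activation of one hidden neuron for
// one training sample: phi = exp(-||x - c||^2 / (2 * sigma^2)).
__global__ void rbf_activations(const float *samples,  // [nSamples x dim]
                                const float *centers,  // [nNeurons x dim]
                                float *phi,            // [nSamples x nNeurons]
                                int nSamples, int nNeurons, int dim,
                                float sigma)
{
    int s = blockIdx.y * blockDim.y + threadIdx.y;  // sample index
    int n = blockIdx.x * blockDim.x + threadIdx.x;  // hidden-neuron index
    if (s >= nSamples || n >= nNeurons) return;

    float dist2 = 0.0f;
    for (int d = 0; d < dim; ++d) {
        float diff = samples[s * dim + d] - centers[n * dim + d];
        dist2 += diff * diff;
    }
    phi[s * nNeurons + n] = expf(-dist2 / (2.0f * sigma * sigma));
}

int main(void)
{
    // Illustrative sizes only; a real training set would be far larger.
    const int nSamples = 1024, nNeurons = 64, dim = 16;
    const float sigma = 1.0f;

    float *dSamples, *dCenters, *dPhi;
    cudaMalloc((void **)&dSamples, (size_t)nSamples * dim * sizeof(float));
    cudaMalloc((void **)&dCenters, (size_t)nNeurons * dim * sizeof(float));
    cudaMalloc((void **)&dPhi, (size_t)nSamples * nNeurons * sizeof(float));
    // (A real run would copy training data and centers in with cudaMemcpy.)

    dim3 block(16, 16);
    dim3 grid((nNeurons + block.x - 1) / block.x,
              (nSamples + block.y - 1) / block.y);
    rbf_activations<<<grid, block>>>(dSamples, dCenters, dPhi,
                                     nSamples, nNeurons, dim, sigma);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(dSamples); cudaFree(dCenters); cudaFree(dPhi);
    return 0;
}
```

Because every (sample, neuron) activation is independent, this step parallelizes trivially; the output matrix `phi` is what an exact training scheme would then consume when solving for the output-layer weights.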
Keywords
Artificial neural networks, Radial basis function networks, Multilevel neural networks, CUDA, Parallelization