
The Influence of Non-learnable Activation Functions on the Positioning Performance of Deep Learning-Based Fingerprinting Models Trained with Small CSI Sample Sizes

Transactions of the Indian National Academy of Engineering (2022)

Abstract
Activation functions, the mathematical ‘gates’ between the input feeding the current neuron and its output going to the next layer, are crucial to the training of deep learning models. They play a major part in determining a model’s output, accuracy, and computational efficiency. In some cases, activation functions strongly affect a model’s ability to converge and its convergence speed. To train deep-learning-based fingerprint positioning models on small CSI sample sizes and still obtain satisfactory positioning results, the choice of appropriate activation functions is very important. In this paper we explore several non-learnable activation functions and conduct a comprehensive analysis of their influence on the positioning performance of deep learning fingerprint-based positioning models trained with small CSI sample sizes. We then propose an improved model training approach with a view to getting the best out of those activation functions.
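The non-learnable activation functions the abstract refers to are fixed element-wise nonlinearities with no trainable parameters. As a minimal sketch (the paper's exact set of functions and model architecture are not given in this abstract), a few commonly compared candidates can be defined as:

```python
import numpy as np

# Illustrative definitions of common non-learnable activation
# functions; which ones the paper actually evaluates is an assumption.
def relu(x):
    # Zero for negative inputs, identity otherwise.
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes inputs into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Squashes inputs into (-1, 1), zero-centered.
    return np.tanh(x)

def elu(x, alpha=1.0):
    # Smooth for x < 0, which can help gradient flow with small datasets.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-2.0, 0.0, 2.0])
for name, fn in [("relu", relu), ("sigmoid", sigmoid),
                 ("tanh", tanh), ("elu", elu)]:
    print(name, fn(x))
```

Because these functions have no parameters to learn, their shape (saturation, zero-centering, smoothness at the origin) directly constrains gradient flow, which is why their choice matters more when the CSI training set is small.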
Key words
Deep learning, Small sample size, Channel state information, Activation functions, Positioning performance