Stacked ensemble extreme learning machine coupled with Partial Least Squares-based weighting strategy for nonlinear multivariate calibration.

Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy (2019)

Abstract
With its simple theory and straightforward implementation, the extreme learning machine (ELM) has become a competitive single-hidden-layer feedforward network for nonlinear multivariate calibration in chemometrics. To further improve the generalization and robustness of ELM, stacked generalization is introduced into ELM to construct a modified model called the stacked ensemble ELM (SE-ELM). SE-ELM creates a set of sub-models by applying ELM repeatedly to different sub-regions of the spectra and then combines the predictions of those sub-models according to a weighting strategy. Three weighting strategies are explored to implement the proposed SE-ELM: the winner-takes-all (WTA) strategy, the constrained non-negative least squares (CNNLS) strategy, and the partial least squares (PLS) strategy. PLS is recommended as the optimal weighting method because it can handle the multicollinearity among the predictions yielded by the sub-models. The three SE-ELM models with different weighting strategies are evaluated on six real spectroscopic datasets and compared with ELM, the back-propagation neural network (BPNN), and the radial basis function neural network (RBFNN), with differences assessed by the Wilcoxon signed-rank test. The experimental results suggest that, in general, all SE-ELM models are more robust and more accurate than traditional ELM. In particular, the proposed PLS-based weighting strategy is statistically no worse than, and frequently better than, the other two weighting strategies as well as BPNN and RBFNN.
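For illustration, the sketch below shows one plausible reading of the SE-ELM idea: fit an ELM on each spectral sub-region, stack the sub-model predictions, and learn the combination weights with PLS regression. It is a minimal sketch, not the authors' implementation; the equal-width sub-regions, the tanh activation, the pseudo-inverse read-out, and the parameter names (n_regions, n_hidden, n_pls_components) are assumptions for the example.

```python
# Minimal SE-ELM sketch with PLS-based weighting (assumed design, not the paper's code).
import numpy as np
from sklearn.cross_decomposition import PLSRegression


class ELM:
    """Single-hidden-layer ELM: random input weights, analytic output weights."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)  # random feature map

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Moore-Penrose pseudo-inverse gives the least-squares output weights.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta


def fit_se_elm(X, y, n_regions=10, n_hidden=50, n_pls_components=5):
    """Fit ELM sub-models on spectral sub-regions, then weight them with PLS."""
    regions = np.array_split(np.arange(X.shape[1]), n_regions)
    subs = [ELM(n_hidden, seed=i).fit(X[:, idx], y) for i, idx in enumerate(regions)]
    # Stack sub-model predictions column-wise; PLS handles their multicollinearity.
    P = np.column_stack([m.predict(X[:, idx]) for m, idx in zip(subs, regions)])
    meta = PLSRegression(n_components=min(n_pls_components, n_regions)).fit(P, y)
    return regions, subs, meta


def predict_se_elm(model, X):
    regions, subs, meta = model
    P = np.column_stack([m.predict(X[:, idx]) for m, idx in zip(subs, regions)])
    return meta.predict(P).ravel()
```

In a faithful stacked-generalization setup, the PLS meta-model would typically be trained on predictions from a separate calibration set or cross-validation folds rather than on the training predictions as done here for brevity.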
Keywords
Extreme learning machine (ELM), Partial least squares (PLS), Nonlinear multivariate calibration, Stacked generalization