Low-rank and sparse embedding for dimensionality reduction.

Neural Networks (2018)

Cited by 24 | Views 70
Abstract
In this paper, we propose a robust subspace learning (SL) framework for dimensionality reduction that extends existing SL methods to a low-rank and sparse embedding (LRSE) framework along three dimensions: overall optimality, robustness, and generalization. Owing to the use of low-rank and sparse constraints, both the global subspace structure and the local geometric structure of the data are captured by the reconstruction coefficient matrix, while the low-dimensional embedding of the data is simultaneously enforced to respect the low-rankness and sparsity. In this way, reconstruction coefficient matrix learning and SL are performed jointly, which guarantees an overall optimum. Moreover, we adopt a sparse matrix to model the noise, which makes LRSE robust to different types of noise. The combination of global subspaces and local geometric structures gives LRSE better generalization than related methods: LRSE outperforms conventional SL methods in both unsupervised and supervised scenarios, and in the unsupervised scenario the improvement in classification accuracy is considerable. Seven specific SL methods, both unsupervised and supervised, can be derived from the proposed framework, and experiments on different data sets (including corrupted data) demonstrate their superiority over existing, well-established SL methods. Further, we conduct experiments that provide new insights for SL.
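To make the low-rank and sparse self-representation idea concrete, here is a minimal NumPy sketch of a simplified objective in the same family: learn a reconstruction coefficient matrix Z that is both low-rank and sparse while a sparse matrix E absorbs noise. The objective, the penalty weights `lam1`/`lam2`, and the function name `lrse_sketch` are illustrative assumptions, not the paper's actual formulation (which additionally learns the low-dimensional projection jointly and covers supervised variants).

```python
import numpy as np

def soft_threshold(A, tau):
    """Elementwise soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def svt(A, tau):
    """Singular value thresholding: prox of tau * ||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lrse_sketch(X, lam1=0.1, lam2=0.1, mu=1.0, n_iter=200):
    """
    Toy solver (an assumed simplification, not the paper's algorithm) for
        min_{Z,E} ||Z||_* + lam1*||Z||_1 + lam2*||E||_1
                  + (mu/2) * ||X - X @ Z - E||_F^2
    Alternates an exact prox update for E with a proximal-gradient step
    for Z; applying SVT followed by soft-thresholding is a common
    heuristic for the sum of the two penalties, not their exact prox.
    """
    d, n = X.shape
    Z = np.zeros((n, n))
    E = np.zeros((d, n))
    # 1/L step size, with L = mu * sigma_max(X)^2 the Lipschitz
    # constant of the smooth term's gradient in Z
    step = 1.0 / (mu * np.linalg.norm(X, 2) ** 2 + 1e-12)
    for _ in range(n_iter):
        R = X - X @ Z                       # reconstruction residual
        E = soft_threshold(R, lam2 / mu)    # exact prox update for E
        G = -mu * X.T @ (R - E)             # gradient of smooth term w.r.t. Z
        Z = svt(Z - step * G, step)         # prox of nuclear norm
        Z = soft_threshold(Z, step * lam1)  # then prox of l1 (heuristic)
    return Z, E

# Example: 60 points drawn from two 3-dimensional subspaces of R^20
rng = np.random.default_rng(0)
X = np.hstack([rng.standard_normal((20, 3)) @ rng.standard_normal((3, 30))
               for _ in range(2)])
Z, E = lrse_sketch(X)
print("rank(Z) ~", np.linalg.matrix_rank(Z, tol=1e-3),
      " nnz(E):", int((np.abs(E) > 1e-6).sum()))
```

On union-of-subspaces data like the example above, the recovered Z tends toward a low-rank, sparse block pattern that reflects the subspace membership, which is the structure LRSE then asks the low-dimensional embedding to respect.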
Keywords
Dimensionality reduction, Subspace learning, Robustness, Overall optimum