Principal Component Analysis On Graph-Hessian

2019 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2019)

Abstract
Principal Component Analysis (PCA) is a widely used linear dimensionality reduction method, which assumes that the data are drawn from a low-dimensional affine subspace of a high-dimensional space. However, PCA uses only the feature information of the samples. By exploiting the structural information of the data and embedding it into the PCA framework, the local positional relationships between samples in the original space can be preserved, which improves the performance of downstream tasks built on PCA. In this paper, we introduce Hessian regularization into PCA and propose a new model called Graph-Hessian Principal Component Analysis (GHPCA). The Hessian regularizer correctly exploits the intrinsic local geometry of the data manifold and is better able to maintain the neighborhood relationships among data points in the high-dimensional space. Compared with Laplacian-based models, our model retains richer structural information after dimensionality reduction and recovers low-dimensional structures more faithfully. K-means clustering experiments on the USPS handwritten digit dataset, the YALE face dataset, and the COIL20 object image dataset show that GHPCA outperforms PCA, GLPCA, RPCA, and RPCAG on clustering tasks.
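The abstract does not give the GHPCA objective itself, so as a rough illustration of the general idea it describes (embedding graph-structure regularization into PCA, then evaluating with K-means), here is a minimal sketch in the spirit of GLPCA-style models. An ordinary kNN graph Laplacian stands in for the paper's Hessian energy matrix, and all names and parameters (graph_regularized_pca, alpha, n_neighbors) are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def graph_regularized_pca(X, k, alpha=0.5, n_neighbors=5):
    """Minimal graph-regularized PCA sketch (GLPCA-style stand-in).

    X : (d, n) data matrix, one sample per column.
    k : target embedding dimension.
    Returns Q (n, k) low-dimensional coordinates and U (d, k) basis.
    """
    d, n = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)  # center each feature

    # Build a symmetric kNN adjacency graph over the samples.
    dist = cdist(Xc.T, Xc.T)
    idx = np.argsort(dist, axis=1)[:, 1:n_neighbors + 1]  # skip self
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = 1.0
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W  # combinatorial graph Laplacian

    # Minimizing ||Xc - U Q^T||_F^2 + alpha * tr(Q^T L Q), with Q^T Q = I,
    # reduces to taking the bottom-k eigenvectors of alpha*L - Xc^T Xc.
    G = alpha * L - Xc.T @ Xc
    _, vecs = np.linalg.eigh(G)  # eigenvalues in ascending order
    Q = vecs[:, :k]
    U = Xc @ Q                   # optimal basis given Q
    return Q, U

# Usage mirroring the paper's evaluation protocol: run K-means on the
# low-dimensional embedding (synthetic data here, not the paper's datasets).
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((64, 200))                  # 64 features, 200 samples
Q, U = graph_regularized_pca(X, k=10)
labels = KMeans(n_clusters=10, n_init=10).fit_predict(Q)
```

Replacing L with a Hessian-based energy matrix, as the paper proposes, keeps this same eigen-decomposition structure while penalizing deviations from local linearity rather than merely from neighborhood smoothness.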
Keywords
dimensionality reduction, principal component analysis, manifold learning, graph, hessian regularization