So you think you can PLS-DA?
bioRxiv (2020)
### Abstract
**Background** Partial Least-Squares Discriminant Analysis (PLS-DA) is a popular machine learning tool that is gaining increasing attention as a useful feature selector and classifier. In an effort to understand its strengths and weaknesses, we performed a series of experiments with synthetic data and compared its performance to that of its close relative, from which it was originally derived: Principal Component Analysis (PCA).
**Results** We demonstrate that even though PCA ignores the class labels of the samples, this unsupervised tool can be remarkably effective as a feature selector. In some cases, it outperforms PLS-DA, which is given the class labels as input. Our experiments range from examining the signal-to-noise ratio in the feature selection task to considering many practical distributions and models encountered when analyzing bioinformatics and clinical data. Other methods were also evaluated. Finally, we analyzed an interesting data set of 396 vaginal microbiome samples for which the ground truth for feature selection was available. All the 3D figures shown in this paper, as well as the supplementary ones, can be viewed interactively at
**Conclusions** Our results highlight the strengths and weaknesses of PLS-DA in comparison with PCA under different underlying data models.
### Competing Interest Statement
The authors have declared no competing interest.
* PLS-DA: Partial Least-Squares Discriminant Analysis
* PCA: Principal Component Analysis
* CV: Cross-Validation
* PC: Principal Components
* sPLS-DA: Sparse Partial Least-Squares Discriminant Analysis
* tp: true positives
* tn: true negatives
* fp: false positives
* fn: false negatives
* SPCA: Sparse Principal Component Analysis
* ICA: Independent Component Analysis
* RLDA: Regularized Linear Discriminant Analysis
* SVD: Singular Value Decomposition