Naive Feature Selection: A Nearly Tight Convex Relaxation for Sparse Naive Bayes

MATHEMATICS OF OPERATIONS RESEARCH(2024)

Abstract
Because of its linear complexity, naive Bayes classification remains an attractive supervised learning method, especially in very large-scale settings. We propose a sparse version of naive Bayes, which can be used for feature selection. This leads to a combinatorial maximum-likelihood problem, for which we provide an exact solution in the case of binary data and a bound in the multinomial case. Using a priori duality gap bounds derived from the Shapley-Folkman theorem, we prove that our convex relaxation bounds become tight as the marginal contribution of additional features decreases. We show how to produce primal solutions satisfying these bounds. Both binary and multinomial sparse models are solvable in time almost linear in problem size, representing a very small extra relative cost compared with classical naive Bayes. Numerical experiments on text data show that the naive Bayes feature selection method is as statistically effective as state-of-the-art feature selection methods such as recursive feature elimination, l1-penalized logistic regression, and LASSO, while being orders of magnitude faster (a Python implementation can be found at https://github.com/aspremon/NaiveFeatureSelection).
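To give a rough sense of the kind of per-feature ranking the binary case admits, the sketch below scores each feature by the log-likelihood gain of allowing class-specific Bernoulli parameters over a single shared parameter, and keeps the top k features. This is a minimal illustrative sketch under a Bernoulli naive Bayes assumption; the function names and criterion are not taken from the authors' repository, which should be consulted for the actual method.

```python
# Hypothetical sketch: rank features by per-feature likelihood gain in a
# binary (Bernoulli) naive Bayes model, then keep the top k. Illustrative
# only; not the authors' implementation (see the linked GitHub repository).
import numpy as np


def bernoulli_loglik(count, total):
    """Maximized Bernoulli log-likelihood for `count` successes out of `total` trials."""
    p = count / total
    # Convention: 0 * log(0) = 0, handled by masking the resulting NaN/-inf values.
    with np.errstate(divide="ignore", invalid="ignore"):
        ll = count * np.log(p) + (total - count) * np.log(1.0 - p)
    return np.nan_to_num(ll, nan=0.0, neginf=0.0)


def naive_feature_selection(X, y, k):
    """Select k features for binary naive Bayes.

    X : (n_samples, n_features) binary 0/1 array
    y : (n_samples,) binary labels
    k : number of features to keep
    """
    X = np.asarray(X, dtype=float)
    pos, neg = X[y == 1], X[y == 0]
    n_pos, n_neg = len(pos), len(neg)

    f_pos = pos.sum(axis=0)  # per-feature counts in the positive class
    f_neg = neg.sum(axis=0)  # per-feature counts in the negative class

    # Log-likelihood with class-specific Bernoulli parameters (feature kept) ...
    ll_split = bernoulli_loglik(f_pos, n_pos) + bernoulli_loglik(f_neg, n_neg)
    # ... versus a single shared parameter (feature contributes nothing to the classifier).
    ll_tied = bernoulli_loglik(f_pos + f_neg, n_pos + n_neg)

    gain = ll_split - ll_tied           # marginal contribution of each feature
    return np.argsort(gain)[::-1][:k]   # indices of the k most informative features


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = (rng.random((200, 50)) < 0.3).astype(int)
    y = (X[:, 3] | X[:, 7]).astype(int)  # labels driven by features 3 and 7
    print(naive_feature_selection(X, y, k=5))
```

Because the score is separable across features, the whole selection costs one pass over the data plus a sort, which is consistent with the almost-linear running time claimed in the abstract.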
Keywords
sparsity, nonconvex optimization, Lagrangian duality, Fenchel duality, Shapley-Folkman theorem