From average case complexity to improper learning complexity

STOC 2014

Abstract
The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems, and the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish the hardness of improper learning (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from NP-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning; it was initiated in [21] and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption [13] about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular:
• Learning DNFs is hard.
• Agnostically learning halfspaces with a constant approximation ratio is hard.
• Learning an intersection of ω(1) halfspaces is hard.
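For readers outside learning theory, the guarantee referenced in the second bullet can be stated as follows (this is the standard agnostic PAC definition, supplied here for context rather than quoted from the paper): a learner agnostically learns a class $\mathcal{H}$ with approximation ratio $\alpha \ge 1$ if, for every distribution $\mathcal{D}$ over labeled examples, it outputs a hypothesis $h$, not necessarily from $\mathcal{H}$ since the learner is improper, such that

$$\mathrm{err}_{\mathcal{D}}(h) \;\le\; \alpha \cdot \min_{h^* \in \mathcal{H}} \mathrm{err}_{\mathcal{D}}(h^*) + \epsilon, \qquad \text{where } \mathrm{err}_{\mathcal{D}}(h) = \Pr_{(x,y) \sim \mathcal{D}}\bigl[h(x) \neq y\bigr].$$

The second bullet asserts that, under the paper's assumption, no efficient algorithm achieves this guarantee for halfspaces with any constant $\alpha$.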
Keywords
CSP problems, resolution lower bounds, average case complexity, halfspaces, hardness of improper learning, DNFs