
Transductive Sample Complexities Are Compact

CoRR (2024)

Abstract
We demonstrate a compactness result holding broadly across supervised learning with a general class of loss functions: Any hypothesis class H is learnable with transductive sample complexity m precisely when all of its finite projections are learnable with sample complexity m. We prove that this exact form of compactness holds for realizable and agnostic learning with respect to any proper metric loss function (e.g., any norm on ℝ^d) and any continuous loss on a compact space (e.g., cross-entropy, squared loss). For realizable learning with improper metric losses, we show that exact compactness of sample complexity can fail, and provide matching upper and lower bounds of a factor of 2 on the extent to which such sample complexities can differ. We conjecture that larger gaps are possible for the agnostic case. Furthermore, invoking the equivalence between sample complexities in the PAC and transductive models (up to lower order factors, in the realizable case) permits us to directly port our results to the PAC model, revealing an almost-exact form of compactness holding broadly in PAC learning.
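The central compactness claim of the abstract can be written as a single identity. The notation below is an assumption for illustration, not taken from the paper's body: $m_{\mathrm{T}}(\mathcal{H})$ denotes the transductive sample complexity of a hypothesis class $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$, and $\mathcal{H}|_{X_0}$ its projection onto a finite set of domain points $X_0$.

```latex
% Exact compactness (metric proper losses, continuous losses on compact spaces):
% the sample complexity of H equals the supremum over its finite projections.
\[
  m_{\mathrm{T}}(\mathcal{H})
  \;=\;
  \sup_{\substack{X_0 \subseteq \mathcal{X} \\ |X_0| < \infty}}
  m_{\mathrm{T}}\!\left(\mathcal{H}|_{X_0}\right).
\]
% For realizable learning with improper metric losses, the abstract states
% that equality can fail, but only within a factor of 2:
\[
  \sup_{\substack{X_0 \subseteq \mathcal{X} \\ |X_0| < \infty}}
  m_{\mathrm{T}}\!\left(\mathcal{H}|_{X_0}\right)
  \;\le\;
  m_{\mathrm{T}}(\mathcal{H})
  \;\le\;
  2 \cdot \sup_{\substack{X_0 \subseteq \mathcal{X} \\ |X_0| < \infty}}
  m_{\mathrm{T}}\!\left(\mathcal{H}|_{X_0}\right).
\]
```

Both displays are a sketch of the abstract's statements in assumed notation; the paper's own definitions of transductive learnability should be consulted for the precise quantifiers.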