Permutation Invariant Individual Batch Learning.

ITW (2023)

Abstract
This paper considers the individual batch learning problem. Batch learning (in contrast to online learning) refers to the case where there is a "batch" of training data and the goal is to predict a test outcome. Individual learning refers to the case where the data (training and test) are arbitrary, individual sequences. This individual batch setting poses a fundamental difficulty in defining a plausible criterion for a universal learner, since in each experiment there is only a single test sample. We propose a permutation invariant criterion that, intuitively, lets the individual training sequence manifest its empirical structure for predicting the test sample. This criterion is essentially a min-max regret, where the regret is based on a leave-one-out approach, minimized over the universal learner and maximized over the outcome sequences (hence agnostic). To show its plausibility, we analyze the criterion and the resulting learner for two cases: the binary Bernoulli case and a 1-D deterministic barrier. In both cases the regret behaves as O(c/N), where N is the size of the training set, with c = 1 for the Bernoulli case and c = log 4 for the 1-D barrier. Interestingly, in the Bernoulli case the regret in the stochastic setting behaves as O(1/(2N)), while here, in the individual setting, it has a larger constant.
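The leave-one-out regret idea in the abstract can be sketched numerically. The snippet below is an illustrative stand-in, not the paper's optimal learner: it scores an add-alpha (Laplace-style) predictor on a binary sequence, treating each sample in turn as the test sample and the remaining N-1 as training, and compares the total log-loss to that of the best fixed Bernoulli probability chosen in hindsight. The function name `loo_regret` and the add-alpha rule are assumptions for illustration only.

```python
import math

def loo_regret(x, alpha=1.0):
    """Average leave-one-out log-loss regret of an add-alpha predictor
    against the best fixed Bernoulli probability chosen in hindsight.
    The add-alpha rule is an illustrative stand-in, not the paper's learner."""
    n = len(x)
    total = sum(x)

    def nll(p, bit):
        # Negative log-likelihood of one bit, guarded away from 0 and 1.
        p = min(max(p, 1e-12), 1 - 1e-12)
        return -math.log(p if bit else 1 - p)

    # Hindsight benchmark: the best fixed p is the empirical mean of the
    # full sequence.
    p_star = total / n
    hindsight = sum(nll(p_star, b) for b in x)

    # Leave-one-out: predict each sample from the other n-1 samples.
    learner = 0.0
    for b in x:
        k = total - b                            # ones among the other n-1
        q = (k + alpha) / (n - 1 + 2 * alpha)    # add-alpha estimate
        learner += nll(q, b)

    return (learner - hindsight) / n             # per-sample regret

x = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1] * 10  # individual binary sequence, N = 100
print(loo_regret(x))
```

Note that the regret depends on the sequence only through the count of ones, so this criterion is permutation invariant by construction, matching the abstract's motivation; the per-sample regret shrinks on the order of 1/N as the training size grows.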
Keywords
1-D deterministic barrier,Bernoulli case,individual training sequence,leave-one-out approach,min-max regret,permutation invariant individual batch learning,plausible criterion,single test sample,stochastic setting,universal learner