A Time-Space Lower Bound for a Large Class of Learning Problems
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)
Abstract
We prove a general memory-samples lower bound that applies to a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples. Our result is stated in terms of the norm of the matrix that corresponds to the learning problem. Let X, A be two finite sets. A matrix M : A × X → {−1, 1} corresponds to the following learning problem: an unknown element x ∈ X is chosen uniformly at random, and a learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), ..., where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x). Let σ_max be the largest singular value of M and note that always σ_max ≤ |A|^{1/2} · |X|^{1/2}. We show that if σ_max ≤ |A|^{1/2} · |X|^{1/2−ε}, then any learning algorithm for the corresponding learning problem requires either a memory of size at least Ω((εn)^2) or at least 2^{Ω(εn)} samples, where n = log_2 |X|. As a special case, this gives a new proof of the memory-samples lower bound for parity learning [14].
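To make the parity-learning special case concrete, here is a minimal Python sketch (an illustration, not code from the paper). For parity learning, A = X = {0,1}^n and M(a, x) = (−1)^{⟨a,x⟩ mod 2}, so M is a Hadamard matrix whose singular values all equal 2^{n/2} = |A|^{1/2}; the condition σ_max ≤ |A|^{1/2} · |X|^{1/2−ε} then holds for any ε < 1/2, and the theorem recovers the Ω(n^2)-memory-or-2^{Ω(n)}-samples bound for parity learning.

```python
# Minimal illustration (not from the paper) of the parity-learning
# special case: A = X = {0,1}^n and M(a, x) = (-1)^(<a,x> mod 2).
import itertools
import numpy as np

n = 4  # small n so the 2^n x 2^n matrix is easy to inspect
X = list(itertools.product([0, 1], repeat=n))  # here A = X = {0,1}^n

def M(a, x):
    """The +/-1 matrix entry b = M(a, x) for parity learning."""
    return (-1) ** (sum(ai * xi for ai, xi in zip(a, x)) % 2)

# M is a Hadamard matrix: M M^T = 2^n I, so every singular value
# equals 2^(n/2) = |A|^(1/2), far below the trivial |A|^(1/2)|X|^(1/2).
mat = np.array([[M(a, x) for x in X] for a in X])
sigma_max = np.linalg.svd(mat, compute_uv=False).max()
assert np.isclose(sigma_max, 2 ** (n / 2))  # sigma_max = 4.0 for n = 4

# One step of the sample stream for an unknown x chosen uniformly:
rng = np.random.default_rng(0)
x = X[rng.integers(len(X))]   # the unknown element the learner must find
a = X[rng.integers(len(X))]   # a_i drawn uniformly from A
b = M(a, x)                   # the learner observes the pair (a_i, b_i)
```

For n = 4 the assertion confirms σ_max = 4 = 2^{n/2}, so the hypothesis of the theorem is satisfied and the lower bound applies.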
Keywords
time-space lower bound, general memory-samples, learning algorithm, parity learning, exponential number, memory-samples lower bound, quadratic size