An optimal tradeoff between entanglement and copy complexity for state tomography
CoRR (2024)
Abstract
There has been significant interest in understanding how practical
constraints on contemporary quantum devices impact the complexity of quantum
learning. For the classic question of tomography, recent work tightly
characterized the copy complexity for any protocol that can only measure one
copy of the unknown state at a time, showing it is polynomially worse than if
one can make fully-entangled measurements. While we now have a fairly complete
picture of the rates for such tasks in the near-term and fault-tolerant
regimes, it remains poorly understood what the landscape in between looks like.
In this work, we study tomography in the natural setting where one can make
measurements of t copies at a time. For sufficiently small ϵ, we
show that for any t ≤ d^2,
Θ(d^3/(√t · ϵ^2)) copies are necessary and
sufficient to learn an unknown d-dimensional state ρ to trace distance
ϵ. This gives a smooth and optimal interpolation between the known
rates for single-copy and fully-entangled measurements.
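As an illustration (not from the paper), the stated rate recovers both known endpoints: at t = 1 it gives the single-copy rate Θ(d^3/ϵ^2), and at t = d^2 it gives the fully-entangled rate Θ(d^2/ϵ^2). A minimal sketch, with all hidden constants set to 1, which the asymptotic statement does not pin down:

```python
import math

def copy_complexity(d: int, t: int, eps: float) -> float:
    """Θ(d^3 / (√t · ϵ^2)) with constants taken to be 1 (illustrative only)."""
    assert 1 <= t <= d * d, "the bound is stated for t ≤ d^2"
    return d**3 / (math.sqrt(t) * eps**2)

d, eps = 16, 0.1
single_copy = copy_complexity(d, 1, eps)          # ~ d^3 / ϵ^2 (t = 1)
fully_entangled = copy_complexity(d, d * d, eps)  # ~ d^2 / ϵ^2 (t = d^2)
```

Intermediate values of t then trace out the smooth tradeoff curve between these two extremes.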
To our knowledge, this is the first smooth entanglement-copy tradeoff known
for any quantum learning task, and for tomography, no intermediate point on
this curve was known, even at t = 2. An important obstacle is that unlike the
optimal single-copy protocol, the optimal fully-entangled protocol is
inherently biased and thus precludes naive batching approaches. Instead, we
devise a novel two-stage procedure that uses Keyl's algorithm to refine a crude
estimate for ρ based on single-copy measurements. A key insight is to use
Schur-Weyl sampling not to estimate the spectrum of ρ, but to estimate the
deviation of ρ from the maximally mixed state. When ρ is far from the
maximally mixed state, we devise a novel quantum splitting procedure that
reduces to the case where ρ is close to maximally mixed.