Lossy Kernelization

STOC '17: Symposium on Theory of Computing, Montreal, Canada, June 2017

Abstract
In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time preprocessing algorithm that takes as input an instance (I,k) of a parameterized problem and outputs another instance (I',k') of the same problem, such that |I'|+k' ≤ k^O(1). Additionally, for every c ≥ 1, a c-approximate solution s' to the preprocessed instance (I',k') can be turned in polynomial time into a (c·α)-approximate solution s to the original instance (I,k). Our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness-of-approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/poly. In order to prove this lower bound we need to combine, in a non-trivial way, the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation.
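To make the reduce/lift pair in the definition concrete, here is a minimal sketch of an approximate kernel with α = 1, using the classical Buss reduction for (plain, not Connected) Vertex Cover: any vertex of degree greater than the remaining budget must be in every small cover, so the preprocessing forces it into the solution and deletes it, and the solution-lifting step simply adds the forced vertices back. The function names and representation are illustrative, not from the paper.

```python
from collections import defaultdict

def buss_reduce(edges, k):
    """Preprocessing step: returns (reduced_edges, k', forced), where
    every vertex in `forced` belongs to every vertex cover of size <= k."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        deg = defaultdict(int)
        for e in edges:
            for v in e:
                deg[v] += 1
        # Vertices whose degree exceeds the remaining budget are forced.
        high = {v for v, d in deg.items() if d > k - len(forced)}
        if not high:
            break
        forced |= high
        edges = {e for e in edges if not (e & high)}
    return edges, k - len(forced), forced

def lift(forced, reduced_solution):
    """Solution lifting: a c-approximate cover of the reduced instance
    plus the forced vertices is a c-approximate cover of the original
    instance (here alpha = 1, i.e., no loss in approximation)."""
    return forced | set(reduced_solution)
```

For example, on a star with center 0 and leaves 1..5 with k = 3, the reduction forces vertex 0, leaves no edges, and lifting the empty solution yields the cover {0}.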