Sampling and Output Estimation in Distributed Algorithms and LCAs

Proceedings of the 2021 International Conference on Distributed Computing and Networking (ICDCN '21), 2021

Abstract
We consider the distributed message-passing model and the Local Computation Algorithms (LCA) model. In both models a network is represented by an n-vertex graph G = (V, E). We focus on labeling problems, such as vertex-coloring, edge-coloring, maximal independent set (MIS) and maximal matching. In the distributed model the vertices of V perform computations in parallel in order to compute their parts of the solution for G. In the LCA model, on the other hand, probes are performed on certain vertices in order to compute their labels in a solution to a given problem. We study the possibility of estimating the solution produced by an algorithm well before the algorithm terminates. Such an estimation not only allows for estimating the size of the solution, but also for early detection of failure in randomized algorithms, so that a correcting procedure can be executed. To this end, we propose a sampling technique in which the sampled labels are distributed proportionally to their distribution in the algorithm's output, while the running time of the sampling is significantly smaller than that of the algorithm at hand.

We achieve the following results, in terms of the maximum degree Delta and the arboricity a of the input graph. The running time of our sampling procedures is O(log a + log log n) for vertex-coloring, edge-coloring, maximal matching and MIS. This significantly improves upon previous sampling techniques, which incur an additional dependency on the maximum degree Delta, which can be much higher than the arboricity, as well as a more significant dependency on n. Our techniques for sampling in the distributed model provide a powerful and general tool for estimation in the LCA model. In this setting the goal is to estimate the size of a solution to a given problem while making as few vertex probes as possible. For the above-mentioned problems, we achieve estimations with probe complexity d^{O(log a + log log n)}, where d = min(Delta, a · poly(log n)).
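To make the LCA estimation setting concrete, the following is a minimal sketch (not the paper's algorithm) of size estimation by vertex probes: sample vertices uniformly at random, probe each one once through an assumed black-box oracle, and scale the observed fraction by n. The `mis_oracle` interface and the sample size are hypothetical illustration choices; the paper's contribution is in making each such probe cheap, with probe complexity d^{O(log a + log log n)}.

```python
import random


def estimate_mis_size(n, mis_oracle, num_samples=1000):
    """Estimate the number of vertices in a fixed maximal independent set.

    `mis_oracle(v)` is a hypothetical LCA-style probe that answers whether
    vertex v belongs to the MIS produced by some underlying local algorithm.
    The sampler treats each probe as a black box.
    """
    hits = 0
    for _ in range(num_samples):
        v = random.randrange(n)   # sample a vertex uniformly at random
        if mis_oracle(v):         # one probe per sampled vertex
            hits += 1
    # Scale the sampled fraction up to the whole vertex set.
    return n * hits / num_samples


if __name__ == "__main__":
    # Toy example: a path on 10 vertices where the even-indexed vertices
    # form a maximal independent set of size 5.
    n = 10
    oracle = lambda v: v % 2 == 0
    print(estimate_mis_size(n, oracle, num_samples=5000))  # roughly 5.0
```

With num_samples independent uniform probes, a standard Chernoff-bound argument gives a (1 +/- epsilon)-approximation of the solution size with high probability once the solution occupies a constant fraction of the vertices; the total probe cost is num_samples times the per-probe complexity of the oracle.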
Keywords
Distributed Algorithms, Randomized Scheme, Graph Coloring, Maximal Independent Set, LCA