Non-Uniform Bounds in the Random-Permutation, Ideal-Cipher, and Generic-Group Models.

Advances in Cryptology - CRYPTO 2018, Part I (2018)

Abstract
The random-permutation model (RPM) and the ideal-cipher model (ICM) are idealized models that offer a simple and intuitive way to assess the conjectured standard-model security of many important symmetric-key and hash-function constructions. Similarly, the generic-group model (GGM) captures generic algorithms against assumptions in cyclic groups by modeling encodings of group elements as random injections, and it allows one to derive simple bounds on the advantage of such algorithms. Unfortunately, both well-known attacks, e.g., based on rainbow tables (Hellman, IEEE Transactions on Information Theory '80), and more recent ones, e.g., against the discrete-logarithm problem (Corrigan-Gibbs and Kogan, EUROCRYPT '18), suggest that the concrete security bounds one obtains from such idealized proofs are often completely inaccurate if one considers non-uniform or preprocessing attacks in the standard model. To remedy this situation, this work (1) defines the auxiliary-input (AI) RPM/ICM/GGM, which capture both non-uniform and preprocessing attacks by allowing an attacker to leak an arbitrary (bounded-output) function of the oracle's function table; (2) derives the first non-uniform bounds for a number of important practical applications in the AI-RPM/ICM, including constructions based on the Merkle-Damgård and sponge paradigms, which underlie the SHA hashing standards, as well as for AI-RPM/ICM applications with computational security; and (3) using simpler proofs, recovers the AI-GGM security bounds obtained by Corrigan-Gibbs and Kogan against preprocessing attackers for a number of assumptions related to cyclic groups, such as discrete logarithms and Diffie-Hellman problems, and provides new bounds for two further assumptions. An important step in obtaining these results is to port the tools used in recent work by Coretti et al. (EUROCRYPT '18) from the random-oracle model (ROM) to the RPM/ICM/GGM, resulting in very powerful and easy-to-use tools for proving security bounds against non-uniform and preprocessing attacks.
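
As a minimal illustrative sketch (not taken from the paper itself), the LaTeX fragment below formalizes the two-stage auxiliary-input setting described in the abstract and states the form of the Corrigan-Gibbs and Kogan AI-GGM discrete-logarithm bound that the paper recovers. The symbols S (advice length in bits), T (number of online oracle queries), and N (domain or group size) are standard notation assumed here rather than quoted from the abstract.

% Minimal sketch of the auxiliary-input (AI) setting, assuming the
% standard two-stage formalization; S, T, N are notation introduced
% here for illustration, not quoted from the abstract.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

An AI attacker is a pair $(\mathcal{A}_1,\mathcal{A}_2)$: the offline
phase $\mathcal{A}_1$ reads the entire function table of the ideal
primitive $\pi$ (a random permutation, ideal cipher, or random group
encoding) and outputs advice $\sigma \in \{0,1\}^{S}$; the online phase
$\mathcal{A}_2^{\pi}(\sigma)$ then makes at most $T$ oracle queries:
\[
  \mathrm{Adv}^{\mathrm{AI}}(\mathcal{A}) \;=\;
  \Pr_{\pi}\!\bigl[\mathcal{A}_2^{\pi}\bigl(\mathcal{A}_1(\pi)\bigr)
  \text{ wins the security game}\bigr].
\]
For example, the AI-GGM discrete-logarithm bound of Corrigan-Gibbs and
Kogan (EUROCRYPT '18) is roughly of the form
\[
  \mathrm{Adv}^{\mathrm{dlog}}_{S,T}(N) \;=\;
  \tilde{O}\!\left(\frac{S\,T^{2}}{N}\right)
\]
for a generic group of prime order $N$.

\end{document}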