Unclonable photonic keys hardened against machine learning attacks

APL Photonics (2020)

Abstract
The hallmark of the information age is the ease with which information is stored, accessed, and shared around the globe. This is enabled, in large part, by the simplicity of duplicating digital information without error. Unfortunately, this same digital reliance poses an ever-growing global threat to security and privacy. In particular, modern secure communications and authentication face formidable threats arising from the potential copying of secret keys stored in digital media. With relatively little transfer of information, an attacker can impersonate a legitimate user, publish malicious software that is automatically accepted as safe by millions of computers, or eavesdrop on countless digital exchanges. To address this vulnerability, a new class of cryptographic devices known as physical unclonable functions (PUFs) is being developed. PUFs are modern realizations of an ancient concept, the physical key, and offer an attractive alternative to digital key storage. A user derives a digital key from a PUF's physical behavior, which is sensitive to physical idiosyncrasies beyond fabrication tolerances. Thus, unlike conventional physical keys, a PUF cannot be duplicated, and only its holder can extract the digital key. However, emerging machine learning (ML) methods are remarkably adept at learning behavior from training data, and if such an algorithm can learn to emulate a PUF, its security is compromised. Such attacks have proven highly successful against conventional electronic PUFs. Here, we investigate ML attacks against a nonlinear silicon photonic PUF, a novel design that leverages nonlinear optical interactions in chaotic silicon microcavities. First, we investigate these devices' resistance to cloning during fabrication and demonstrate their use as a source of large volumes of cryptographic key material. Next, we show that silicon photonic PUFs resist state-of-the-art ML attacks owing to their nonlinearity, and finally we validate this resistance in an encryption scenario. © 2020 Author(s).
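To make the threat model concrete, the sketch below illustrates a generic PUF modeling attack: an adversary collects challenge-response pairs (CRPs) from a device and trains a classifier to emulate it. Everything here is a hypothetical toy, not the paper's silicon photonic devices, data, or attack models; the "linear" toy device stands in for easily modeled electronic PUFs, while the "nonlinear" toy loosely mimics how a strongly nonlinear challenge-response map frustrates a simple learner.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_bits, n_crps, n_train = 64, 6000, 5000

# Hypothetical per-device secrets standing in for manufacturing randomness.
w1 = rng.normal(size=n_bits)
w2 = rng.normal(size=n_bits)

# Random challenges, mapped from bits {0, 1} to signs {-1, +1}.
X = 2 * rng.integers(0, 2, size=(n_crps, n_bits)) - 1

# Toy "linear" PUF: the response bit is the sign of one weighted sum.
y_linear = (X @ w1 > 0).astype(int)

# Toy "nonlinear" PUF: the response mixes two projections multiplicatively,
# so no single linear decision boundary reproduces it.
y_nonlinear = ((X @ w1) * (X @ w2) > 0).astype(int)

def emulation_accuracy(y):
    """Train a clone on the first n_train CRPs; test it on held-out CRPs."""
    clf = LogisticRegression(max_iter=2000)
    clf.fit(X[:n_train], y[:n_train])
    return clf.score(X[n_train:], y[n_train:])

print(f"attack on linear toy PUF:    {emulation_accuracy(y_linear):.2f}")
print(f"attack on nonlinear toy PUF: {emulation_accuracy(y_nonlinear):.2f}")
```

On this toy, the linear device is emulated with high accuracy from a few thousand CRPs, while the same attack on the nonlinear device performs near chance, mirroring the qualitative argument of the abstract that nonlinearity is what hardens the photonic PUF against ML emulation.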