RL-OPC: Mask Optimization With Deep Reinforcement Learning

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS (2024)

Abstract
Mask optimization is a vital step in the VLSI manufacturing flow at advanced technology nodes. As one of the most representative techniques, optical proximity correction (OPC) is widely applied to enhance printability. Because conventional OPC methods incur prohibitive computational overhead, recent research has applied machine learning techniques for efficient mask optimization. However, existing discriminative learning models rely on a given dataset for supervised training, and generative learning models usually optimize a differentiable proxy objective for end-to-end learning, which may limit their feasibility. In this article, we pioneer the use of a reinforcement learning (RL) model for mask optimization, which directly optimizes the preferred objective without resorting to a differentiable proxy. Extensive experiments show that our method outperforms state-of-the-art solutions, including academic approaches and commercial toolkits.
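The key idea the abstract highlights is that RL can optimize a non-differentiable printability objective directly, where gradient-based generative methods need a differentiable proxy. The sketch below is a hypothetical, minimal illustration of that principle, not the paper's actual algorithm: a bandit-style Q-learning loop flips pixels of a tiny binary mask, and the reward comes from a black-box "lithography" model (a box blur plus threshold, standing in for a real litho simulator) that is never differentiated. All names and the toy model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                    # toy mask: N x N binary pixels
target = (rng.random((N, N)) > 0.5).astype(int)

def print_image(mask):
    """Black-box, NON-differentiable printability model:
    3x3 box blur followed by thresholding (stand-in for litho simulation)."""
    padded = np.pad(mask, 1, mode="edge")
    blur = sum(padded[i:i + N, j:j + N] for i in range(3) for j in range(3)) / 9.0
    return (blur > 0.5).astype(int)

def reward(mask):
    # Negative pixel mismatch versus the target pattern (higher is better);
    # used only through its value, never through gradients.
    return -int(np.abs(print_image(mask) - target).sum())

Q = np.zeros(N * N)                      # Q-value of flipping each pixel
mask = np.zeros((N, N), dtype=int)
alpha, eps = 0.5, 0.3                    # learning rate, exploration rate

for step in range(500):
    # epsilon-greedy action selection: which pixel to flip
    a = int(rng.integers(N * N)) if rng.random() < eps else int(Q.argmax())
    before = reward(mask)
    mask[a // N, a % N] ^= 1             # action: flip one mask pixel
    r = reward(mask) - before            # reward: improvement in printability
    if r < 0:
        mask[a // N, a % N] ^= 1         # undo harmful flips (greedy repair)
        r = 0.0
    Q[a] += alpha * (r - Q[a])           # bandit-style Q-value update
```

The point of the sketch is the interface, not the scale: `print_image` could be any expensive, non-differentiable simulator, and the update rule only ever consumes scalar rewards, which is what distinguishes the RL formulation from proxy-based end-to-end learning.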
Keywords
Optimization, Computational modeling, Measurement, Lithography, Robustness, Very large scale integration, Q-learning, Design for manufacturing, mask optimization, reinforcement learning (RL), optical proximity correction (OPC)