Pre-training Antibody Language Models for Antigen-Specific Computational Antibody Design.

KDD (2023)

Abstract
Antibodies are proteins that protect the human body by binding to pathogens. Recently, deep learning-based computational antibody design has attracted considerable attention, since it automatically mines antibody patterns from data, complementing human expertise. However, these computational methods rely heavily on high-quality antibody structure data, which is quite limited. Moreover, the complementarity-determining region (CDR), the key component of an antibody that determines specificity and binding affinity, is highly variable and hard to predict. The limited availability of high-quality antibody structure data therefore exacerbates the difficulty of CDR generation. Fortunately, a large amount of antibody sequence data is available and can help model the CDR while reducing reliance on structure data. Witnessing the success of pre-training models for protein modeling, in this paper we develop an antibody pre-trained language model and incorporate it into an antigen-specific antibody design model in a systematic way. Specifically, we first pre-train a novel antibody language model on sequence data, then propose a one-shot approach for joint sequence and structure generation of the CDR to mitigate the high cost and error propagation associated with autoregressive methods, and finally leverage the pre-trained antibody model in the antigen-specific antibody generation model through carefully designed modules. Our experiments demonstrate the superiority of our method over previous baselines on tasks such as sequence and structure generation, CDR-H3 design for antigen binding, and antibody optimization.
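The sketch below is a minimal, illustrative rendering (not the authors' code) of the two ideas the abstract describes: masked-language-model pre-training on antibody sequence data, and a one-shot, non-autoregressive pass that predicts all CDR residues in parallel. The model sizes, vocabulary handling, and the `AntibodyLM` / `one_shot_cdr_logits` names are assumptions made for illustration only.

```python
# Illustrative sketch of antibody LM pre-training and one-shot CDR prediction.
# All hyperparameters and helper names are hypothetical, not the paper's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PAD, MASK = 20, 21                      # special token ids (assumed scheme)
VOCAB = len(AMINO_ACIDS) + 2

class AntibodyLM(nn.Module):
    """Transformer encoder over antibody sequences (hypothetical sizes)."""
    def __init__(self, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model, padding_idx=PAD)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))
        return self.lm_head(h)           # per-position amino-acid logits

def mlm_loss(model, tokens, mask_prob=0.15):
    """Masked-LM pre-training objective on antibody sequence data."""
    labels = tokens.clone()
    mask = (torch.rand_like(tokens, dtype=torch.float) < mask_prob) & (tokens != PAD)
    labels[~mask] = -100                 # score only the masked positions
    corrupted = tokens.masked_fill(mask, MASK)
    logits = model(corrupted)
    return F.cross_entropy(logits.view(-1, VOCAB), labels.view(-1), ignore_index=-100)

def one_shot_cdr_logits(model, context_tokens, cdr_positions):
    """One-shot CDR design: mask every CDR position and predict all of them in
    a single forward pass, avoiding autoregressive error propagation."""
    masked = context_tokens.clone()
    masked[:, cdr_positions] = MASK
    return model(masked)[:, cdr_positions]   # logits for all CDR residues at once

# Toy usage: random token ids stand in for real antibody sequences.
if __name__ == "__main__":
    model = AntibodyLM()
    batch = torch.randint(0, 20, (8, 60))       # 8 sequences of length 60
    loss = mlm_loss(model, batch)                # one pre-training step
    loss.backward()
    cdr_logits = one_shot_cdr_logits(model, batch, list(range(30, 40)))
    print(loss.item(), cdr_logits.shape)         # -> (8, 10, VOCAB)
```

Structure generation and the antigen-conditioning modules described in the abstract are omitted here; the sketch only contrasts parallel (one-shot) CDR prediction with token-by-token autoregressive decoding.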
Keywords
Antibody sequence-structure co-design, Antibody pre-training