CC+: a Relational Database of Coiled-Coil Structures

Nucleic Acids Research (2009), SCI Q2

University of Bristol

Cited 144 | Views 37
Abstract
We introduce the CC+ Database, a detailed, searchable repository of coiled-coil assignments, which is freely available at http://coiledcoils.chm.bris.ac.uk/ccplus. Coiled coils were identified using the program SOCKET, which locates coiled coils based on knobs-into-holes packing of side chains between α-helices. A method for determining the overall sequence identity of coiled-coil sequences was introduced to reduce statistical bias inherent in coiled-coil data sets. There are two points of entry into the CC+ Database: the ‘Periodic Table of Coiled-coil Structures’, which presents a graphical path through coiled-coil space based on manually validated data, and the ‘Dynamic Interface’, which allows queries of the database at different levels of complexity and detail. The latter entry level, which is the focus of this article, enables the efficient and rapid compilation of subsets of coiled-coil structures. These can be created and interrogated with increasingly sophisticated pull-down, keyword and sequence-based searches to return detailed structural and sequence information. Also provided are means for outputting the retrieved coiled-coil data in various formats, including PyMOL and RasMol scripts, and Position-Specific Scoring Matrices (or amino-acid profiles), which may be used, for example, in protein-structure prediction.
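The abstract notes that retrieved coiled-coil sets can be exported as Position-Specific Scoring Matrices (amino-acid profiles) for use in protein-structure prediction. The sketch below (Python) shows one generic way such a profile could be applied: sliding it along a query sequence and summing per-position log-odds scores. The profile values, the seven-position heptad layout, and the names PROFILE, window_score and scan are illustrative assumptions, not CC+ data or code; the actual PSSM format served by the database is not reproduced here.

# Minimal sketch (not CC+ code): scoring a sequence against a
# position-specific scoring matrix (PSSM) such as those exported by CC+.
# The profile below is a tiny illustrative stand-in, NOT real CC+ data;
# a real coiled-coil profile would cover all 20 amino acids per position.

from typing import Dict, List

# Hypothetical profile: one column per heptad position (a-g),
# each mapping amino acid -> log-odds score.
PROFILE: List[Dict[str, float]] = [
    {"L": 1.2, "I": 1.0, "V": 0.6, "E": -0.5},  # position a (hydrophobic core)
    {"E": 0.8, "Q": 0.6, "K": 0.4, "L": -0.3},  # position b
    {"E": 0.7, "A": 0.5, "K": 0.4, "L": -0.2},  # position c
    {"L": 1.1, "I": 0.9, "M": 0.5, "E": -0.6},  # position d (hydrophobic core)
    {"E": 0.9, "K": 0.7, "Q": 0.4, "L": -0.3},  # position e
    {"K": 0.6, "E": 0.5, "A": 0.3, "L": -0.2},  # position f
    {"E": 0.8, "K": 0.6, "Q": 0.4, "L": -0.4},  # position g
]

def window_score(window: str, profile: List[Dict[str, float]]) -> float:
    """Sum the per-position log-odds scores for one window."""
    return sum(profile[i].get(aa, 0.0) for i, aa in enumerate(window))

def scan(sequence: str, profile: List[Dict[str, float]]) -> List[float]:
    """Slide the profile along the sequence; return one score per start position."""
    w = len(profile)
    return [window_score(sequence[i:i + w], profile)
            for i in range(len(sequence) - w + 1)]

if __name__ == "__main__":
    # Illustrative query, loosely based on the GCN4 leucine zipper.
    seq = "MKQLEDKVEELLSKNYHLENEVARLKKLV"
    scores = scan(seq, profile=PROFILE)
    best = max(range(len(scores)), key=scores.__getitem__)
    print(f"best window starts at {best}: {seq[best:best + len(PROFILE)]} "
          f"(score {scores[best]:.2f})")

In practice a profile derived from the full set of CC+ sequences would span many heptads, and the window scores would be compared against a background distribution rather than simply ranked, but the sliding-window scan above is the core operation.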
Key words
Support Vector Machines