stGCL: A versatile cross-modality fusion method based on multi-modal graph contrastive learning for spatial transcriptomics

bioRxiv (2023)

Abstract
Advances in spatial transcriptomics (ST) technologies have provided unprecedented opportunities to depict transcriptomic and histological landscapes in the spatial context. Multi-modal ST data provide abundant and comprehensive information about cellular status, function, and organization. However, existing algorithms for processing and analyzing ST data struggle to effectively fuse the multi-modal information these data contain. Here, we propose a graph contrastive learning-based cross-modality fusion model named stGCL that accurately and robustly integrates gene expression, spatial information, and histological profiles simultaneously. stGCL adopts a novel histology-based Vision Transformer (H-ViT) method to effectively encode histological features and combines a multi-modal graph attention auto-encoder (GATE) with contrastive learning to fuse cross-modality features. In addition, stGCL introduces a pioneering spatial coordinate correction and registration strategy for tissue slice integration, which reduces batch effects and identifies cross-sectional domains precisely. Compared with state-of-the-art methods on spatial transcriptomics data across platforms and resolutions, stGCL achieves superior clustering performance and is more robust in unraveling spatial patterns of biological significance. Additionally, stGCL successfully reconstructs three-dimensional (3D) brain tissue structures by integrating vertical and horizontal slices, respectively. Application of stGCL to human bronchiolar adenoma (BA) data reveals intratumor spatial heterogeneity and identifies candidate gene biomarkers. In summary, stGCL enables the fusion of various spatial modality data and is a powerful tool for analytical tasks such as spatial domain identification and multi-slice integration.

### Competing Interest Statement

The authors have declared no competing interest.
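The abstract describes fusing gene-expression and histology embeddings over a spatial spot graph with a graph attention auto-encoder and a contrastive objective. The following is a minimal, hypothetical sketch of that general idea, not the authors' implementation: the class `ModalityGATEncoder`, the loss `info_nce`, and all dimensions and tensors are illustrative assumptions, and the graph layers use PyTorch Geometric's `GATConv`.

```python
# Hypothetical sketch of cross-modality fusion with graph attention encoders
# and a contrastive (InfoNCE-style) loss, in the spirit of stGCL's description.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GATConv


class ModalityGATEncoder(nn.Module):
    """Graph attention encoder mapping one modality (e.g. gene expression or
    histology features) to a shared latent space over the spot graph."""
    def __init__(self, in_dim: int, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden_dim, heads=1)
        self.gat2 = GATConv(hidden_dim, latent_dim, heads=1)

    def forward(self, x, edge_index):
        h = F.elu(self.gat1(x, edge_index))
        return self.gat2(h, edge_index)


def info_nce(z1, z2, temperature: float = 0.5):
    """Symmetric InfoNCE loss: the two modality embeddings of the same spot
    are positives; embeddings of all other spots serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


# Toy usage: 100 spots, 3000 genes, 512-d histology (e.g. ViT) features,
# and a placeholder spatial neighborhood graph given as an edge_index tensor.
n_spots = 100
gene_x = torch.randn(n_spots, 3000)
hist_x = torch.randn(n_spots, 512)
edge_index = torch.randint(0, n_spots, (2, 600))

enc_gene = ModalityGATEncoder(3000, 256, 64)
enc_hist = ModalityGATEncoder(512, 256, 64)
z_gene = enc_gene(gene_x, edge_index)
z_hist = enc_hist(hist_x, edge_index)
loss = info_nce(z_gene, z_hist)
fused = torch.cat([z_gene, z_hist], dim=1)  # fused cross-modality representation
```

In practice the contrastive term would be combined with reconstruction losses from the auto-encoder branches, and the fused embedding would feed downstream clustering for spatial domain identification; those components are omitted here for brevity.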