Self-Supervised Pretraining Vision Transformer With Masked Autoencoders for Building Subsurface Model

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING (2023)

Abstract
Building subsurface models is an important but challenging task in hydrocarbon exploration and development. Subsurface elastic properties are usually derived from seismic data and well logs. We therefore design a deep learning (DL) framework that uses a vision transformer (ViT) as the backbone architecture to build the subsurface model from well log information while applying full waveform inversion (FWI) to the seismic data. However, training a ViT network from scratch on limited well log data makes it difficult to achieve good generalization. To overcome this, we implement an efficient self-supervised pretraining process based on a masked autoencoder (MAE) architecture to learn important feature representations in seismic volumes. The seismic volumes required for pretraining are randomly extracted from a seismic inversion result, such as an FWI model. We can also incorporate reverse time migration (RTM) images into the seismic volumes to provide additional structural information. The MAE pretraining task is to reconstruct the original image from a masked image with a masking ratio of 75%, which forces the network to learn high-level latent representations. After pretraining, we fine-tune the ViT network to build an optimal mapping between 2-D seismic volumes and 1-D well segments. Once fine-tuning is finished, we apply the trained ViT network across the whole seismic inversion domain to predict the subsurface model. Finally, we test the proposed method on one synthetic dataset and two field datasets. The results demonstrate that the proposed method effectively integrates seismic and well information to improve the resolution and accuracy of the velocity model.
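The abstract does not include implementation details, but the masking step it describes follows the standard MAE recipe of He et al. (2022): randomly hide 75% of the patch tokens and train the network to reconstruct them. Below is a minimal PyTorch sketch of per-sample random masking at that ratio; the function name, tensor shapes, and patch dimensions are illustrative assumptions, not the paper's actual code.

```python
import torch

def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Randomly mask a fraction of patch tokens, as in MAE pretraining.

    patches: (batch, num_patches, dim) patch embeddings of a seismic volume.
    Returns the visible tokens, a binary mask in the original patch order
    (0 = visible, 1 = masked), and the indices that restore that order.
    """
    B, N, D = patches.shape
    n_keep = int(N * (1.0 - mask_ratio))

    # Per-sample random noise decides which patches stay visible.
    noise = torch.rand(B, N, device=patches.device)
    ids_shuffle = torch.argsort(noise, dim=1)        # low-noise patches are kept
    ids_restore = torch.argsort(ids_shuffle, dim=1)  # inverse permutation

    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).repeat(1, 1, D))

    # Build the mask in shuffled order, then restore the original order.
    mask = torch.ones(B, N, device=patches.device)
    mask[:, :n_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)
    return visible, mask, ids_restore

# Example: a batch of 2 seismic images split into 196 patches of dim 128.
tokens = torch.randn(2, 196, 128)
visible, mask, ids_restore = random_masking(tokens, mask_ratio=0.75)
print(visible.shape)  # torch.Size([2, 49, 128]): only 25% of patches are encoded
```

In standard MAE, only the visible 25% of patches pass through the ViT encoder, and a lightweight decoder reconstructs the masked positions, with the reconstruction loss computed only on masked patches. This is what makes the pretraining efficient enough to run on many randomly extracted seismic volumes.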
Keywords
Full waveform inversion (FWI), masked autoencoders (MAEs), velocity model building, vision transformer (ViT), well information