Physics-driven self-supervised learning system for seismic velocity inversion

Citations: 0
Authors
Liu, Bin [1 ,2 ,3 ]
Jiang, Peng [4 ]
Wang, Qingyang [4 ]
Ren, Yuxiao [2 ]
Yang, Senlin [1 ,4 ]
Cohn, Anthony G. [2 ,5 ]
Affiliations
[1] Shandong Univ, Geotech & Struct Engn Res Ctr, Jinan, Peoples R China
[2] Shandong Univ, Sch Civil Engn, Jinan, Peoples R China
[3] Shandong Univ, Data Sci Inst, Jinan, Peoples R China
[4] Shandong Univ, Sch Qilu Transportat, Jinan, Peoples R China
[5] Univ Leeds, Sch Comp, Leeds, England
Funding
National Natural Science Foundation of China;
Keywords
WAVE-FORM INVERSION; NEURAL-NETWORK; MODEL; FRAMEWORK;
DOI
10.1190/GEO2021-0302.1
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Subject Classification Codes
0708 ; 070902 ;
Abstract
Seismic velocity inversion plays a vital role in various applied seismology processes. A series of deep learning methods have been developed that rely purely on manually provided labels for supervision; however, their performance depends heavily on large training data sets with corresponding velocity models. Because no physical laws are used in the training phase, it is usually challenging to generalize trained neural networks to a new data domain. To mitigate these issues, we have embedded a seismic forward modeling step at the end of a network to remap the inversion result back to seismic data and thus train the neural network through a self-supervised loss, i.e., the misfit between the network input and output. As a result, we eliminate the need for many labeled velocity models, and physical laws are introduced when back-propagating gradients through the seismic forward modeling step. We verify the effectiveness of our approach through comprehensive experiments on synthetic data sets, where self-supervised learning outperforms the fully supervised approach, which accesses much more labeled data. The superior performance is even more significant on a new data domain whose velocity models contain faults and more geologic layers. Finally, for unknown and more complex data types, we develop a network-constrained full-waveform inversion (FWI) method. This method refines the network's initial prediction by iteratively optimizing the network parameters rather than the velocity model itself, as in the conventional FWI method, and demonstrates clear advantages in terms of interface and velocity accuracy. With these measures (self-supervised learning and network-constrained FWI), our physics-driven self-supervised learning system successfully mitigates issues such as the dependence on large labeled data sets, the absence of physical laws, and the difficulty of adapting to new data domains.
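The self-supervised objective described in the abstract (train an inversion network by re-mapping its output through a physics-based forward model and measuring the data misfit, with no velocity labels) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: a fixed random linear operator `F` stands in for wave-equation forward modeling, and a single linear layer `W` stands in for the inversion network; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Toy linear "forward modeling" operator F: velocity -> seismic data.
# A stand-in for wave-equation modeling, for illustration only.
F = rng.standard_normal((n, n))

# Toy inversion "network": a single linear layer W mapping data -> velocity.
W = 0.1 * rng.standard_normal((n, n))

# Observed seismic data; note that no velocity labels appear anywhere below.
d_obs = rng.standard_normal(n)

def self_supervised_loss(W):
    v_pred = W @ d_obs         # network inversion: data -> velocity
    d_sim = F @ v_pred         # physics re-maps velocity back to data
    r = d_sim - d_obs
    return 0.5 * float(r @ r)  # misfit between network input and output

# Train W by gradient descent on the self-supervised loss; the physics enters
# through the chain rule across F: dL/dW = F^T (F W d - d) d^T.
lr = 1e-3
losses = [self_supervised_loss(W)]
for _ in range(200):
    r = F @ (W @ d_obs) - d_obs
    W -= lr * np.outer(F.T @ r, d_obs)
    losses.append(self_supervised_loss(W))
```

The same loop also mirrors the network-constrained FWI refinement: at inference time one keeps iterating on the network parameters `W` for a single observed gather, rather than updating the velocity model directly as conventional FWI would.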
Pages: R145 / R161
Page count: 17