StorSeismic: A New Paradigm in Deep Learning for Seismic Processing

Cited by: 37
Authors
Harsuko, Randy [1 ]
Alkhalifah, Tariq A. [1 ]
Affiliations
[1] King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2022, Vol. 60
Keywords
Task analysis; Transformers; Bit error rate; Training; Computer architecture; Natural language processing; Machine learning algorithms; Inversion; machine learning (ML); seismic processing; self-supervised learning; transformer; INTERPOLATION
DOI
10.1109/TGRS.2022.3216660
CLC Classification Code
P3 [Geophysics]; P59 [Geochemistry]
Subject Classification Code
0708; 070902
Abstract
Machine-learned tasks on seismic data are often trained sequentially and separately, even though they utilize the same (e.g., geometrical) features of the data. We present StorSeismic as a dataset-centric framework for seismic data processing, which consists of neural network (NN) pretraining and fine-tuning procedures. Specifically, we utilize an NN as a preprocessing tool to extract and store seismic data features of a particular dataset for any downstream tasks. After pretraining, the resulting model can be utilized later, through a fine-tuning procedure, to perform different tasks with limited additional training. Often used in natural language processing (NLP) and lately in vision tasks, Bidirectional Encoder Representations from Transformers (BERT), a form of transformer model, provides an optimal platform for this framework. The attention mechanism of BERT, applied here on a sequence of traces within a shot gather, is able to capture and store key geometrical features of the seismic data. We pretrain StorSeismic on field data, along with synthetically generated data, in a self-supervised step. Then, we use the labeled synthetic data to fine-tune the pretrained network in a supervised fashion to perform various seismic processing tasks, such as denoising, velocity estimation, first-arrival picking, and normal moveout (NMO) correction. Finally, the fine-tuned model is used to obtain satisfactory inference results on the field data.
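To make the pretrain/fine-tune idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of a BERT-style encoder that treats each trace in a shot gather as a token, is pretrained by reconstructing randomly masked traces in a self-supervised fashion, and can later be fine-tuned with a small task-specific head. All names, layer sizes, and the masking ratio below are assumptions made for illustration.

import torch
import torch.nn as nn

class TraceBERT(nn.Module):
    """Transformer encoder over a sequence of traces (one gather = one 'sentence')."""
    def __init__(self, n_samples=256, d_model=128, n_heads=8, n_layers=4, max_traces=64):
        super().__init__()
        self.embed = nn.Linear(n_samples, d_model)                    # trace -> token embedding
        self.pos = nn.Parameter(torch.zeros(1, max_traces, d_model))  # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.reconstruct = nn.Linear(d_model, n_samples)              # pretraining (masked-trace) head

    def forward(self, gather):                                        # gather: (batch, traces, samples)
        h = self.embed(gather) + self.pos[:, :gather.size(1)]
        h = self.encoder(h)                                           # attention across traces in the gather
        return self.reconstruct(h)

def mask_traces(gather, ratio=0.15):
    """Zero out a random subset of traces; return the masked gather and the boolean mask."""
    mask = torch.rand(gather.shape[:2], device=gather.device) < ratio
    masked = gather.clone()
    masked[mask] = 0.0
    return masked, mask

# Self-supervised pretraining step (random tensors stand in for real shot gathers).
model = TraceBERT()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
gather = torch.randn(8, 64, 256)                                      # (batch, traces, samples)
masked, mask = mask_traces(gather)
reconstruction = model(masked)
loss = ((reconstruction - gather)[mask] ** 2).mean()                  # loss only on masked traces
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Fine-tuning would reuse the pretrained encoder and swap in a small task head
# (e.g., denoising or velocity estimation) trained on labeled synthetic data.

In the framework described by the abstract, the pretrained encoder would be reused across downstream tasks (denoising, velocity estimation, first-arrival picking, NMO correction), with each task requiring only limited additional supervised training on labeled synthetic data.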
Pages: 15