A Super-Resolution Reconstruction Method for Shale Based on Generative Adversarial Network

Cited: 7
Authors
Zhang, Ting [1 ]
Hu, Guangshun [1 ]
Yang, Yi [1 ]
Du, Yi [2 ]
Affiliations
[1] Shanghai Univ Elect Power, Coll Comp Sci & Technol, Shanghai 200090, Peoples R China
[2] Shanghai Polytech Univ, Inst Artificial Intelligence, Sch Comp & Informat Engn, Shanghai 201209, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Shale; Generative adversarial network; Feature loss; Super-resolution; Digital core; POROUS-MEDIA; SIMULATION; IMAGES; FLOW;
DOI
10.1007/s11242-023-02016-1
Chinese Library Classification (CLC)
TQ [Chemical Industry]
Subject Classification Code
0817
Abstract
Shale gas, as an unconventional resource, plays an important role in meeting global energy demands and is expected to do so for decades as human society moves toward low-carbon energy. During the energy transition, the rapid growth of shale gas helps to maintain security of supply and lower energy prices for consumers. The organic matter contained in shale is the physical basis for the production of shale gas, and the numerous pores and fractures in shale provide storage space for the gas. Shale rocks are therefore both source rocks for gas generation and reservoirs for gathering and preserving gas. Accurate characterization and modeling of the internal spatial structure of shale is of great significance for the exploitation of shale gas. However, because shale contains a variety of minerals and organic matter, and the scales of its pores and fractures vary widely, characterizing and reconstructing the internal structures of shale with conventional numerical methods is challenging. When reconstructing larger models at high resolution, the conflict between the limited field of view (FOV) and the resolution of training images becomes more acute. With the development of deep learning, and in particular the generative adversarial network (GAN), shale reconstruction can benefit from their strong learning and generative abilities. In this paper, a GAN combined with residual networks (3DRGAN) is proposed to reconstruct 3D digital cores of shale. A super-resolution image much larger than the training images is obtained, resolving the conflict between the FOV and the resolution of the training images. Compared with several typical methods, the results reconstructed by 3DRGAN are closer to the real shale sample, demonstrating its advantages.
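The abstract does not detail the 3DRGAN architecture, so the following is only a minimal PyTorch sketch of the general design it names: a super-resolution generator built from 3D residual blocks, in the spirit of refs [13] and [19]. The class names (ResidualBlock3D, Generator3D), channel width, block count, and 2x trilinear upsampling are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ResidualBlock3D(nn.Module):
    """3D residual block (hypothetical): two 3x3x3 convolutions with a skip
    connection, following the residual-learning idea of He et al. [13]."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.PReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity skip connection: the block learns only the residual.
        return x + self.body(x)


class Generator3D(nn.Module):
    """Hypothetical SRGAN-style 3D generator [19]: feature-extraction head,
    a trunk of residual blocks with a global skip, then 2x trilinear upsampling."""

    def __init__(self, n_blocks: int = 8, channels: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=9, padding=4), nn.PReLU()
        )
        self.trunk = nn.Sequential(*[ResidualBlock3D(channels) for _ in range(n_blocks)])
        self.upsample = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.PReLU(),
        )
        self.tail = nn.Conv3d(channels, 1, kernel_size=9, padding=4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.head(x)
        feat = feat + self.trunk(feat)  # global residual connection
        return self.tail(self.upsample(feat))


# Usage: upscale a 32^3 low-resolution gray-scale core volume to 64^3.
lr_volume = torch.rand(1, 1, 32, 32, 32)  # (batch, channel, depth, height, width)
sr_volume = Generator3D()(lr_volume)
print(sr_volume.shape)  # torch.Size([1, 1, 64, 64, 64])
```

In an adversarial setup, such a generator would be trained against a 3D discriminator using an adversarial loss plus a feature (perceptual) loss in the sense of Johnson et al. [16]; those components are omitted here for brevity.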
Pages: 383-426 (44 pages)
Related Papers (40 in total)
[11] Guo, Tonglou. Evaluation of Highly Thermally Mature Shale-Gas Reservoirs in Complex Structural Parts of the Sichuan Basin. Journal of Earth Science, 2013, 24(6): 863-873.
[12] Hammonds, Kevin; Baker, Ian. Quantifying damage in polycrystalline ice via X-ray computed micro-tomography. Acta Materialia, 2017, 127: 463-470.
[13] He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian. Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778.
[14] Hidajat, I. SPE Latin American and Caribbean Petroleum Engineering Conference, 2001. DOI: 10.2118/69623-MS.
[15] Gulrajani, I.; et al. Improved Training of Wasserstein GANs. Advances in Neural Information Processing Systems, 2017, 30.
[16] Johnson, Justin; Alahi, Alexandre; Fei-Fei, Li. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. Computer Vision - ECCV 2016, Part II, 2016, 9906: 694-711.
[17] Kim, J.; et al. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 1637. DOI: 10.1109/CVPR.2016.182, 10.1109/CVPR.2016.181.
[18] Krishnan, S.; Journel, A. G. Spatial connectivity: From variograms to multiple-point measures. Mathematical Geology, 2003, 35(8): 915-925.
[19] Ledig, Christian; Theis, Lucas; Huszar, Ferenc; Caballero, Jose; Cunningham, Andrew; Acosta, Alejandro; Aitken, Andrew; Tejani, Alykhan; Totz, Johannes; Wang, Zehan; Shi, Wenzhe. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017: 105-114.
[20] Legland, David; Arganda-Carreras, Ignacio; Andrey, Philippe. MorphoLibJ: integrated library and plugins for mathematical morphology with ImageJ. Bioinformatics, 2016, 32(22): 3532-3534.