Tokamak visible image sequence recognition using nonlocal spatio-temporal CNN for attention needed area localization*

Times Cited: 6
Authors
Kwon, Giil [1 ]
Wi, Hanmin [1 ]
Hong, Jaesic [1 ]
Affiliations
[1] Natl Fus Res Inst, Control Team, Daejeon, South Korea
Keywords
Tokamak visible image diagnostic system; Deep learning; Video classification
DOI
10.1016/j.fusengdes.2021.112375
Chinese Library Classification (CLC)
TL [nuclear energy technology]; O571 [nuclear physics]
Discipline Codes
0827; 082701
Abstract
In this paper, we report a study conducted to explore the feasibility of a system that classifies image sequences from the Tokamak visible imaging diagnostic system as 'disruptive' or 'non-disruptive' shots. The classifier uses a non-local spatio-temporal 3D convolutional neural network (CNN). To analyze the classification results, we localize the areas that contributed to each decision using class activation mapping (CAM) on a CNN with global average pooling. To train the classifier, we created a plasma disruption image sequence dataset from data acquired in KSTAR experiments. The classifier recognized disruption image sequences in the test dataset with 91.11% accuracy. Analysis of the CAMs of these sequences revealed that the classifier recognizes plasma disruption from relative changes in brightness over time in areas of the image outside the plasma region. This work will enable a system that automatically classifies plasma disruption image sequences after experiments.
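The CAM technique named in the abstract weights each feature map of the final convolutional layer by the classifier weight of the predicted class and sums over channels. Below is a minimal NumPy sketch of that computation for a 3D (spatio-temporal) feature volume; the tensor sizes, random features, and two-class linear head are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

# Illustrative shapes only: a 3D CNN backbone would output a feature
# volume of shape (C, T, H, W); global average pooling (GAP) followed by
# a linear layer with weights w of shape (num_classes, C) gives scores.
rng = np.random.default_rng(0)
C, T, H, W = 8, 4, 6, 6                 # channels, time, height, width (toy sizes)
features = rng.random((C, T, H, W))     # stand-in for the last conv features
w = rng.random((2, C))                  # 2 classes: non-disruptive / disruptive

# Class scores: GAP over the spatio-temporal axes, then linear projection.
gap = features.mean(axis=(1, 2, 3))     # shape (C,)
scores = w @ gap                        # shape (2,)
pred = int(scores.argmax())

# CAM for the predicted class: weight each feature map by that class's
# linear weight and sum over channels, yielding a (T, H, W) saliency volume
# that localizes the areas driving the classification.
cam = np.tensordot(w[pred], features, axes=(0, 0))

# Sanity check: by linearity of GAP, the mean of the CAM equals the score.
assert np.isclose(cam.mean(), scores[pred])
```

Because pooling and the linear head are both linear, the CAM decomposes the class score spatially and temporally, which is what lets the authors inspect which image regions contributed to a 'disruptive' decision.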
Pages: 10
Cited References
20 references in total
[1] Chung J., Wi H., Nam Y. U., Hong S. H. Optimization of an in-vessel visible inspection system for a long-pulse operation in KSTAR. Fusion Engineering and Design, 2014, 89(4): 349-353.
[2] Churchill R. M., Tobias B., Zhu Y. Deep convolutional neural networks for multi-scale time-series classification and application to tokamak disruption prediction using raw, high temporal resolution diagnostic data. Physics of Plasmas, 2020, 27(6).
[3] Farley T. 2019, 3 IAEA TECHN M FUS D.
[4] Feichtenhofer C., Fan H., Malik J., He K. SlowFast networks for video recognition. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019: 6201-6210.
[5] Ferreira D. R., Carvalho P. J., Fernandes H. Deep learning for plasma tomography and disruption prediction from bolometer data. IEEE Transactions on Plasma Science, 2020, 48(1): 36-45.
[6] Guo B. H., Shen B., Chen D. L., Rea C., Granetz R. S., Huang Y., Zeng L., Zhang H., Qian J. P., Sun Y. W., Xiao B. J. Disruption prediction using a full convolutional neural network on EAST. Plasma Physics and Controlled Fusion, 2021, 63(2).
[7] He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778.
[8] Huang Y., Guo Y., Gao C. Efficient parallel inflated 3D convolution architecture for action recognition. IEEE Access, 2020, 8: 45753-45765.
[9] Kates-Harbeck J., Svyatkovskiy A., Tang W. Predicting disruptive instabilities in controlled fusion plasmas through deep learning. Nature, 2019, 568(7753): 526+.
[10] Kocsis G., Baross T., Biedermann C., Bodnar G., Cseh G., Ilkei T., Koenig R., Otte M., Szabolics T., Szepesi T., Zoletnik S. Overview video diagnostics for the W7-X stellarator. Fusion Engineering and Design, 2015, 96-97: 808-811.