Limited Discriminator GAN using explainable AI model for overfitting problem

Cited: 7
Authors
Kim, Jiha [1 ]
Park, Hyunhee [1 ]
Affiliations
[1] Myongji Univ, Dept Informat & Commun Engn, Yongin, South Korea
Keywords
GAN; Discriminator; Generator; Overfitting; Explainable AI;
DOI
10.1016/j.icte.2021.12.014
CLC Number: TP [Automation & Computer Technology]
Discipline Code: 0812
Abstract
Data-driven learning is the most representative deep learning approach, and generative adversarial networks (GANs) are designed to generate sufficient data to support it. A GAN is typically trained by updating a generator and a discriminator in turn. However, overfitting occurs when the discriminator depends excessively on the training data; when this problem persists, the images created by the generator closely resemble the training images, and such images ultimately defeat the purpose of data augmentation. In this paper, we propose a limited discriminator GAN (LDGAN) model that explains the results of a GAN, which otherwise behaves as a black box that cannot be analyzed externally. The explained component of LDGAN is the discriminator, making it possible to check which region of an image the discriminator uses as the basis for its real/fake decision. Based on these explanations, we then propose a method for limiting the training of the discriminator. This avoids discriminator overfitting and allows the generator to produce diverse images that differ from the training images. The LDGAN method lets users perform meaningful data augmentation focused on specific objects, excluding complex images or backgrounds that would require further analysis. We compare LDGAN with the existing DCGAN and present extensive simulation results, which show that the proposed LDGAN generates about 10% more images that include the estimation area. (c) 2022 The Author(s). Published by Elsevier B.V. on behalf of The Korean Institute of Communications and Information Sciences. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
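The "limited discriminator" idea in the abstract can be illustrated with a toy sketch. Note that everything below is a hypothetical illustration, not the paper's actual method: the paper gates discriminator training using explainable-AI output, whereas this sketch uses a simple accuracy threshold (`D_ACC_LIMIT`) and simulated dynamics purely to show the alternating-update structure with a skippable discriminator step.

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical threshold; the paper derives its limiting criterion from
# XAI explanations of the discriminator, not from raw accuracy.
D_ACC_LIMIT = 0.8

def train_ldgan_sketch(steps=200):
    """Toy alternating GAN loop: the discriminator update is skipped
    whenever its (simulated) accuracy exceeds D_ACC_LIMIT, mimicking the
    kind of limited-discriminator gating the abstract describes."""
    d_acc, g_loss = 0.5, 1.0   # simulated discriminator accuracy / generator loss
    d_updates = 0
    for _ in range(steps):
        if d_acc < D_ACC_LIMIT:
            # Discriminator trains only while it is not yet too strong.
            d_acc = min(1.0, d_acc + 0.05 * random.random())
            d_updates += 1
        # The generator always trains; a stronger generator slowly
        # pushes discriminator accuracy back down.
        g_loss = max(0.1, g_loss - 0.01 * random.random())
        d_acc = max(0.5, d_acc - 0.02 * random.random())
    return d_updates, d_acc, g_loss

d_updates, d_acc, g_loss = train_ldgan_sketch()
print(d_updates, round(d_acc, 2), round(g_loss, 2))
```

Because the discriminator step is conditional, it runs fewer times than the generator step; in a real LDGAN implementation the same control flow would apply, with the skip condition driven by the explanation of the discriminator's decision regions instead of this toy accuracy variable.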
Pages: 241-246
Page count: 6