Limited Discriminator GAN using explainable AI model for overfitting problem

Cited by: 7
Authors
Kim, Jiha [1 ]
Park, Hyunhee [1 ]
Affiliations
[1] Myongji Univ, Dept Informat & Commun Engn, Yongin, South Korea
Keywords
GAN; Discriminator; Generator; Overfitting; Explainable AI;
DOI
10.1016/j.icte.2021.12.014
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline Classification Code
0812;
Abstract
Data-driven learning is the most representative deep learning approach, and generative adversarial networks (GANs) are designed to generate sufficient data to support it. A GAN is typically trained by updating the generator and the discriminator in turn. However, overfitting occurs when the discriminator depends excessively on the training data; when this problem persists, the images produced by the generator closely resemble the training images, and such images defeat the purpose of data augmentation. In this paper, we propose a limited discriminator GAN (LDGAN) model that explains the results of a GAN, which is otherwise a black box that cannot be analyzed externally. The explained component of LDGAN is the discriminator, making it possible to check which region of an image the discriminator uses as the basis for its real/fake decision. Based on these explanations, we propose a method for limiting the training of the discriminator. This avoids discriminator overfitting and enables the generation of diverse images that differ from the training images. The LDGAN method allows users to perform meaningful data augmentation using only specific objects, excluding complex images or backgrounds that would require further analysis. We compare LDGAN with the existing DCGAN and present extensive simulation results, which show that the images generated by the proposed LDGAN include the estimated region about 10% more often. (c) 2022 The Author(s). Published by Elsevier B.V. on behalf of The Korean Institute of Communications and Information Sciences. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
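The abstract describes restricting discriminator training to curb overfitting during the alternating generator/discriminator updates. As a loose toy illustration of that control idea only (not the paper's actual XAI-based criterion), the sketch below skips the discriminator update whenever a simulated discriminator accuracy exceeds a threshold; all names, numbers, and the accuracy dynamics are illustrative assumptions.

```python
# Hedged sketch: a generic discriminator-limiting training loop.
# This is NOT the LDGAN algorithm; it only illustrates the idea of
# freezing the discriminator once it fits the training data too well.
import random


def train_gan_limited(steps=200, acc_limit=0.8, seed=0):
    """Alternate generator/discriminator updates, skipping the
    discriminator update whenever its (simulated) training accuracy
    exceeds `acc_limit`."""
    rng = random.Random(seed)
    d_acc = 0.5                  # discriminator accuracy starts at chance
    d_updates = skipped = 0
    for _ in range(steps):
        # Generator update (always performed): a stronger generator
        # makes the discriminator's job harder, so accuracy drops a bit.
        d_acc -= 0.01
        # Discriminator update: performed only while below the limit.
        if d_acc < acc_limit:
            d_acc += 0.03 * rng.random()   # discriminator learning step
            d_updates += 1
        else:
            skipped += 1          # limit reached: freeze the discriminator
        d_acc = min(max(d_acc, 0.0), 1.0)
    return d_acc, d_updates, skipped
```

In a real GAN, `d_acc` would be measured on actual training batches, and LDGAN derives its limiting decision from an explainability analysis of the discriminator rather than from a raw accuracy threshold.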
Pages: 241-246
Number of pages: 6