Limited Discriminator GAN using explainable AI model for overfitting problem

Cited by: 6
Authors
Kim, Jiha [1 ]
Park, Hyunhee [1 ]
Affiliations
[1] Myongji Univ, Dept Informat & Commun Engn, Yongin, South Korea
Source
ICT EXPRESS | 2023, Vol. 9, Issue 2
Keywords
GAN; Discriminator; Generator; Overfitting; Explainable AI;
DOI
10.1016/j.icte.2021.12.014
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
Data-driven learning is the most representative deep learning approach, and generative adversarial networks (GANs) are designed to generate enough data to support it. GAN training typically updates a generator and a discriminator in turn. However, overfitting occurs when the discriminator depends excessively on the training data; when this problem persists, the images created by the generator closely resemble the training images, and such images lose their value for data augmentation. In this paper, we propose a limited discriminator GAN (LDGAN) model that explains the results of a GAN, which is otherwise a black box that cannot be analyzed externally. The explained component in LDGAN is the discriminator, making it possible to check which regions of an image the discriminator uses as the basis for its real/fake decision. Based on these explanations, we propose a method for limiting the training of the discriminator. This avoids discriminator overfitting and enables the generation of diverse images that differ from the training images. The LDGAN method allows users to perform meaningful data augmentation focused on specific objects, excluding complex images or backgrounds that would require analysis. We compare LDGAN with the existing DCGAN and present extensive simulation results, which show that the images generated by the proposed LDGAN include the estimated area about 10% more than those of DCGAN. (c) 2022 The Author(s). Published by Elsevier B.V. on behalf of The Korean Institute of Communications and Information Sciences. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
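The abstract does not name the specific explainable-AI technique or the exact rule used to limit discriminator training, so the following is only a minimal sketch of the general idea: a Grad-CAM-style relevance map over a DCGAN-style discriminator, followed by a hypothetical rule that skips discriminator updates when the explanation covers too little of the image. The Discriminator class, the relevance_map function, and the coverage threshold are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """A small DCGAN-style discriminator that also returns its last feature maps."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1)
        )

    def forward(self, x):
        fmap = self.features(x)            # kept for the relevance map
        logit = self.classifier(fmap)
        return logit, fmap

def relevance_map(disc, images):
    """Grad-CAM-style map showing which regions drive the real/fake logit."""
    logit, fmap = disc(images)
    fmap.retain_grad()                     # keep gradient of the intermediate feature maps
    logit.sum().backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)          # per-channel importance
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))     # weighted activations
    cam = F.interpolate(cam, size=images.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)     # normalize to [0, 1]
    disc.zero_grad()                       # discard gradients used only for the map
    return cam.detach()

if __name__ == "__main__":
    disc = Discriminator()
    real = torch.randn(8, 3, 64, 64)       # stand-in for a batch of training images
    cam = relevance_map(disc, real)        # (8, 1, 64, 64) relevance maps
    # Hypothetical limiting rule (an assumption, not the paper's criterion):
    # if the discriminator's evidence covers only a tiny fraction of the image,
    # treat it as memorization and skip the discriminator update for that sample.
    coverage = (cam > 0.5).float().mean(dim=(1, 2, 3))
    keep = coverage > 0.05
    print(f"updating discriminator on {int(keep.sum())} of {len(real)} samples")
```

In this sketch the relevance map plays the role of the explanation described in the abstract, and the coverage test stands in for the proposed limit on discriminator learning; the actual paper should be consulted for the concrete criterion.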
Pages: 241-246
Number of pages: 6
Related Papers
50 records in total
  • [21] Explainable AI for Bipolar Disorder Diagnosis Using Hjorth Parameters
    Torbati, Mehrnaz Saghab
    Zandbagleh, Ahmad
    Daliri, Mohammad Reza
    Ahmadi, Amirmasoud
    Rostami, Reza
    Kazemi, Reza
    DIAGNOSTICS, 2025, 15 (03)
  • [22] Ensemble deep learning model for protein secondary structure prediction using NLP metrics and explainable AI
    Vignesh, U.
    Parvathi, R.
    Ram, K. Gokul
    RESULTS IN ENGINEERING, 2024, 24
  • [23] Assessing Model Requirements for Explainable AI: A Template and Exemplary Case Study
    Heider, Michael
    Stegherr, Helena
    Nordsieck, Richard
    Haehner, Joerg
    ARTIFICIAL LIFE, 2023, 29 (04) : 468 - 486
  • [24] Development of Neural Network Model With Explainable AI for Measuring Uranium Enrichment
    Ryu, Jichang
    Park, Chanjun
    Park, Jungsuk
    Cho, Namchan
    Park, Jaehyun
    Cho, Gyuseong
    IEEE TRANSACTIONS ON NUCLEAR SCIENCE, 2021, 68 (11) : 2670 - 2681
  • [25] Explainable AI for Intrusion Detection Systems: A Model Development and Experts' Evaluation
    Durojaye, Henry
    Naiseh, Mohammad
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 2, INTELLISYS 2024, 2024, 1066 : 301 - 318
  • [26] A Generic and Model-Agnostic Exemplar Synthetization Framework for Explainable AI
    Barbalau, Antonio
    Cosma, Adrian
    Ionescu, Radu Tudor
    Popescu, Marius
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT II, 2021, 12458 : 190 - 205
  • [27] Explainable AI in Machine Learning Regression: Creating Transparency of a Regression Model
    Nakatsu, Robbie T.
    HCI IN BUSINESS, GOVERNMENT AND ORGANIZATIONS, PT I, HCIBGO 2024, 2024, 14720 : 223 - 236
  • [28] Making AI Accessible for STEM Teachers: Using Explainable AI for Unpacking Classroom Discourse Analysis
    Wang, Deliang
    Chen, Gaowei
    IEEE TRANSACTIONS ON EDUCATION, 2024, 67 (06) : 907 - 918
  • [29] Detection of Adversarial Attacks in AI-Based Intrusion Detection Systems Using Explainable AI
    Tcydenova, Erzhena
    Kim, Tae Woo
    Lee, Changhoon
    Park, Jong Hyuk
    HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES, 2021, 11
  • [30] Uncovering the Intricacies and Synergies of Processor Microarchitecture Mechanisms Using Explainable AI
    Gamatie, Abdoulaye
    Wang, Yuyang
    Duran, Diego Valdez
    IEEE TRANSACTIONS ON COMPUTERS, 2025, 74 (02) : 637 - 651