Limited Discriminator GAN using explainable AI model for overfitting problem

Cited by: 6
Authors
Kim, Jiha [1 ]
Park, Hyunhee [1 ]
Affiliations
[1] Myongji Univ, Dept Informat & Commun Engn, Yongin, South Korea
Source
ICT EXPRESS, 2023, Vol. 9, No. 2
Keywords
GAN; Discriminator; Generator; Overfitting; Explainable AI;
DOI
10.1016/j.icte.2021.12.014
Chinese Library Classification
TP [automation technology; computer technology]
Subject classification code
0812
Abstract
Data-driven learning is the most representative deep learning paradigm, and generative adversarial networks (GANs) are designed to generate enough data to support it. A GAN is typically trained by updating a generator and a discriminator in turn. However, overfitting occurs when the discriminator depends excessively on the training data; when this problem persists, the images produced by the generator closely resemble the training images, and such near-duplicates defeat the purpose of data augmentation. In this paper, we propose a limited discriminator GAN (LDGAN) model that explains the results of a GAN, which is otherwise a black-box model that cannot be analyzed from the outside. The component explained in LDGAN is the discriminator, so it is possible to check which regions of an image the discriminator uses as the basis for its real/fake decision. Based on these explanations, we then propose a method for limiting the training of the discriminator. This avoids discriminator overfitting and allows the generator to produce a variety of images that differ from the training images. The LDGAN method lets users perform meaningful data augmentation focused on specific objects, excluding complex images or backgrounds that would require further analysis. We compare LDGAN with the existing DCGAN and present extensive simulation results, which show that about 10% more of the images generated by the proposed LDGAN include the estimation area. (c) 2022 The Author(s). Published by Elsevier B.V. on behalf of The Korean Institute of Communications and Information Sciences. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
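The abstract does not give implementation details, but the core idea (explain the discriminator's real/fake decision and limit its training based on the explained region) can be sketched as follows. This is a minimal, hypothetical PyTorch sketch, assuming a Grad-CAM-style explanation of the discriminator and an illustrative gating rule; the names (Discriminator, grad_cam, should_update_discriminator) and the 0.25 threshold are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch: gate discriminator updates with a Grad-CAM-style explanation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """DCGAN-style discriminator that also exposes its last conv feature maps."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 16 * 16, 1))

    def forward(self, x):
        f = self.features(x)          # keep feature maps for the explanation
        return self.head(f), f

def grad_cam(score, feat):
    """Grad-CAM heatmap of the real/fake score w.r.t. the last conv features."""
    grads = torch.autograd.grad(score.sum(), feat, retain_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)       # per-channel weights
    cam = F.relu((weights * feat).sum(dim=1))             # (B, H, W) heatmap
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)

def should_update_discriminator(cam, threshold=0.25):
    """Illustrative gating rule (assumption): skip the discriminator step when
    the explanation concentrates on a small region, used here as a proxy for
    the discriminator memorizing training images."""
    covered = (cam > 0.5).float().mean(dim=(1, 2))        # fraction of image used
    return bool((covered.mean() > threshold).item())

# Toy usage on random 64x64 RGB images.
D = Discriminator()
real = torch.randn(8, 3, 64, 64)
score, feat = D(real)
cam = grad_cam(score, feat)
if should_update_discriminator(cam):
    pass  # run the usual discriminator loss and optimizer step here
```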
Pages: 241-246
Page count: 6