A comprehensive review of recent advances and future prospects of generative AI

Cited: 0
Authors
Gupta, Jyoti [1 ]
Bhutani, Monica [1 ]
Kumar, Pramod [2 ]
Kumar, Mahesh [3 ]
Malhotra, Nisha [3 ]
Malik, Payal [3 ]
Kaul, Kajal [3 ]
Affiliations
[1] Bharati Vidyapeeths Coll Engn, Dept Elect & Commun Engn, New Delhi 110063, India
[2] Ganga Inst Technol & Management, Dept Comp Sci & Engn, Bahadurgarh, Haryana, India
[3] Bharati Vidyapeeths Coll Engn, Dept Informat Technol Engn, New Delhi 110063, India
Keywords
Generative AI; Generative models; Machine learning; Deep learning; Artificial intelligence
DOI
10.47974/JIOS-1864
Chinese Library Classification (CLC)
G25 [Library science; library services]; G35 [Information science; information work]
Discipline codes
1205; 120501
Abstract
Generative AI has evolved rapidly, demonstrating a striking ability to create content in diverse and highly realistic styles. This paper provides a comprehensive overview of the field, beginning with its core principles and continuing with recent results and potential future applications. It also covers requirements for new task-specific models and data, including a critical survey of the main generative model families (GANs, VAEs, and others) across four modalities: image, audio, text, and video. The paper emphasizes that generative AI has the potential to transform industries and lists several such applications. It also reviews the limitations of these technologies, including data bias, ethical questions, and model interpretability. To maximize the potential of this technology, we stress that it is crucial to construct reliable, interpretable, and self-sustainable generative models. The paper ends with a discussion of future directions, emerging trends in multimodal models, and the scope for energy-efficient approaches. Understanding both the risks and the opportunities ahead allows researchers and practitioners to make informed choices about deploying generative AI responsibly.
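To make the model families named in the abstract concrete, the sketch below (an illustration of our own, not taken from the paper) computes the negative evidence lower bound (ELBO) that VAEs minimize, assuming a Gaussian encoder posterior with parameters `mu` and `log_var` and a squared-error reconstruction term:

```python
import numpy as np

def kl_std_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO: reconstruction error plus KL regularizer."""
    recon = np.sum((x - x_recon) ** 2)
    return recon + kl_std_normal(mu, log_var)

# When the posterior matches the prior (mu = 0, log_var = 0) and the
# reconstruction is perfect, both terms vanish and the loss is zero.
mu = np.zeros(4)
log_var = np.zeros(4)
x = np.array([1.0, 2.0])
print(vae_loss(x, x, mu, log_var))  # 0.0
```

The names `kl_std_normal` and `vae_loss` are hypothetical helpers chosen for this sketch; a practical implementation would train encoder and decoder networks end to end rather than evaluate the loss on fixed arrays.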
Pages: 205-211
Page count: 7