Provenance-Based Trust-Aware Requirements Engineering Framework for Self-Adaptive Systems

Cited by: 1
Authors
Lee, Hyo-Cheol [1 ]
Lee, Seok-Won [1 ,2 ]
Affiliations
[1] Ajou Univ, Dept Comp Engn, Suwon 16499, South Korea
[2] Ajou Univ, Dept Artificial Intelligence, Suwon 16499, South Korea
Funding
National Research Foundation of Singapore;
Keywords
requirements engineering; goal modeling; trust; provenance; self-adaptive system; MANAGEMENT; TAXONOMY; SECURE;
DOI
10.3390/s23104622
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Classification Codes
070302; 081704;
Abstract
With the development of artificial intelligence technology, systems that can actively adapt to their surroundings and cooperate with other systems have become increasingly important. One of the most important factors to consider during cooperation among systems is trust. Trust is a social concept resting on the assumption that cooperating with another party will produce positive results in the direction we intend. Our objectives are to propose a method for defining trust during the requirements engineering phase of developing self-adaptive systems and to define the trust evidence models required to evaluate the defined trust at runtime. To achieve this objective, we propose a provenance-based trust-aware requirements engineering framework for self-adaptive systems. The framework helps system engineers derive the user's requirements as a trust-aware goal model by analyzing the trust concept during the requirements engineering process. We also propose a provenance-based trust evidence model to evaluate trust and provide a method for defining this model for the target domain. Through the proposed framework, a system engineer can treat trust as a factor that emerges from the requirements engineering phase of a self-adaptive system and can understand the factors affecting trust using a standardized format.
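The abstract describes evaluating the trust defined at requirements time against provenance-based evidence collected at runtime. The sketch below is only an illustration of that idea, not the paper's actual evidence model: the ProvenanceRecord fields, the neutral prior of 0.5, and the success-ratio scoring are assumptions made here for exposition, loosely following the W3C PROV agent/activity/entity vocabulary.

```python
# Illustrative sketch only: class names, fields, and the scoring rule are
# assumptions for exposition, not the evidence model defined in the paper.
from dataclasses import dataclass
from typing import List

@dataclass
class ProvenanceRecord:
    """One piece of trust evidence in W3C PROV style: which agent performed
    which activity on which entity, and whether the outcome met the goal."""
    agent: str        # the cooperating system that acted
    activity: str     # e.g., "sense", "plan", "actuate"
    entity: str       # the data item or resource produced or used
    outcome_ok: bool  # did the interaction satisfy the related goal?

def trust_score(records: List[ProvenanceRecord], agent: str) -> float:
    """Naive trust estimate: the fraction of an agent's recorded interactions
    whose outcomes satisfied the associated requirement/goal."""
    relevant = [r for r in records if r.agent == agent]
    if not relevant:
        return 0.5  # no evidence yet: fall back to a neutral prior (assumed)
    return sum(r.outcome_ok for r in relevant) / len(relevant)

if __name__ == "__main__":
    log = [
        ProvenanceRecord("robot-B", "sense", "lidar-frame-17", True),
        ProvenanceRecord("robot-B", "plan", "route-4", True),
        ProvenanceRecord("robot-B", "actuate", "door-2", False),
    ]
    print(f"trust(robot-B) = {trust_score(log, 'robot-B'):.2f}")  # 0.67
```

In the framework itself, the evidence model and the mapping from evidence to a trust judgment would be derived from the trust-aware goal model for the target domain rather than hard-coded as in this sketch.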
Pages: 34