AI-based Question Answering Assistance for Analyzing Natural-language Requirements

Cited by: 4
Authors
Ezzini, Saad [1 ]
Abualhaija, Sallam [1 ]
Arora, Chetan [2 ,3 ]
Sabetzadeh, Mehrdad [4 ]
Affiliations
[1] Univ Luxembourg, SnT Ctr Secur Reliabil & Trust, Luxembourg, Luxembourg
[2] Deakin Univ, Geelong, Australia
[3] Monash Univ, Victoria, Australia
[4] Univ Ottawa, Sch Elect Engn & Comp Sci, Ottawa, ON, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Natural-language Requirements; Question Answering (QA); Language Models; Natural Language Processing (NLP); Natural Language Generation (NLG); BERT; T5; TIQI;
DOI
10.1109/ICSE48619.2023.00113
Chinese Library Classification
TP31 [Computer Software];
Discipline codes
081202; 0835;
Abstract
By virtue of being prevalently written in natural language (NL), requirements are prone to various defects, e.g., inconsistency and incompleteness. As such, requirements are frequently subject to quality assurance processes. These processes, when carried out entirely manually, are tedious and may further overlook important quality issues due to time and budget pressures. In this paper, we propose QAssist - a question-answering (QA) approach that provides automated assistance to stakeholders, including requirements engineers, during the analysis of NL requirements. Posing a question and getting an instant answer is beneficial in various quality-assurance scenarios, e.g., incompleteness detection. Answering requirements-related questions automatically is challenging since the scope of the search for answers can go beyond the given requirements specification. To that end, QAssist provides support for mining external domain-knowledge resources. Our work is one of the first initiatives to bring together QA and external domain knowledge for addressing requirements engineering challenges. We evaluate QAssist on a dataset covering three application domains and containing a total of 387 question-answer pairs. We experiment with state-of-the-art QA methods, based primarily on recent large-scale language models. In our empirical study, QAssist localizes the answer to a question to three passages within the requirements specification and within the external domain-knowledge resource with an average recall of 90.1% and 96.5%, respectively. QAssist extracts the actual answer to the posed question with an average accuracy of 84.2%.
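The pipeline the abstract describes is retrieve-then-extract: first localize a small set of candidate passages (in the requirements specification or an external domain-knowledge resource), then extract the answer span from them. As a minimal, purely illustrative sketch of the retrieval step, the snippet below ranks passages by TF-IDF cosine similarity to a question; QAssist itself relies on large-scale language models (e.g., BERT and T5), and the mini-specification here is invented for the example, not taken from the paper's dataset.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Crude lowercase word tokenizer; a real pipeline would use an NLP library.
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_vectors(passages):
    """Return one sparse TF-IDF vector (dict) per passage, plus the IDF table."""
    docs = [Counter(tokenize(p)) for p in passages]
    df = Counter()
    for d in docs:
        df.update(d.keys())
    idf = {t: math.log(len(docs) / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in d.items()} for d in docs], idf

def cosine(u, v):
    # Cosine similarity between two sparse vectors represented as dicts.
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k_passages(question, passages, k=3):
    """Return the k passages most similar to the question."""
    vecs, idf = tfidf_vectors(passages)
    qv = {t: c * idf.get(t, 0.0) for t, c in Counter(tokenize(question)).items()}
    ranked = sorted(range(len(passages)),
                    key=lambda i: cosine(qv, vecs[i]), reverse=True)
    return [passages[i] for i in ranked[:k]]

# Hypothetical mini-specification (illustrative only).
spec = [
    "The system shall encrypt all patient records at rest.",
    "Users shall authenticate with two-factor authentication.",
    "The system shall log every access to patient records.",
    "Reports shall be generated in PDF format.",
]
print(top_k_passages("How are patient records protected?", spec, k=2))
```

In the full approach, the top-ranked passages would then be fed to an extractive reader model that pins down the exact answer span, which is the step the abstract reports 84.2% average accuracy for.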
Pages: 1277-1289
Page count: 13