Redefining crowdsourced test report prioritization: An innovative approach with large language model

Cited: 0
Authors
Ling, Yuchen [1]
Yu, Shengcheng [1]
Fang, Chunrong [1]
Pan, Guobin [2]
Wang, Jun [2]
Liu, Jia [1]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] China Mobile Suzhou Software Technol Co Ltd, Suzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Crowdsourced testing; Mobile app testing; Test report prioritization; Large language model; DUPLICATE;
DOI
10.1016/j.infsof.2024.107629
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Context: Crowdsourced testing has gained popularity in software testing, especially for mobile app testing, due to its ability to bring diversity and tackle fragmentation issues. However, the openness of crowdsourced testing presents challenges, particularly in the manual review of numerous test reports, which is time-consuming and labor-intensive. Objective: The primary goal of this research is to improve the efficiency of review processes in crowdsourced testing. Traditional approaches to test report prioritization lack a deep understanding of semantic information in textual descriptions of these reports. This paper introduces LLMPrior, a novel approach for prioritizing crowdsourced test reports using large language models (LLMs). Method: LLMPrior leverages LLMs for the analysis and clustering of crowdsourced test reports based on the types of bugs revealed in their textual descriptions. This involves using prompt engineering techniques to enhance the performance of LLMs. Following the clustering, a recurrent selection algorithm is applied to prioritize the reports. Results: Empirical experiments are conducted to evaluate the effectiveness of LLMPrior. The findings indicate that LLMPrior not only surpasses current state-of-the-art approaches in terms of performance but also proves to be more feasible, efficient, and reliable. This success is attributed to the use of prompt engineering techniques and the cluster-based prioritization strategy. Conclusion: LLMPrior represents a significant advancement in crowdsourced test report prioritization. By effectively utilizing large language models and a cluster-based strategy, it addresses the challenges in traditional prioritization approaches, offering a more efficient and reliable solution for app developers dealing with crowdsourced test reports.
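To make the Method description above concrete, the sketch below shows one plausible reading of the pipeline: an LLM is prompted to label the bug type revealed by each report's textual description, reports are grouped by that label, and a recurrent selection over the clusters yields the review order. The prompt wording, the query_llm callable, and the round-robin selection strategy are illustrative assumptions, not the paper's exact prompt engineering or recurrent selection algorithm.

```python
# Minimal sketch of a cluster-then-prioritize pipeline in the spirit of LLMPrior.
# The prompt template, `query_llm`, and round-robin selection are assumptions.
from collections import defaultdict
from typing import Callable


def cluster_reports_by_bug_type(reports: list[str],
                                query_llm: Callable[[str], str]) -> dict[str, list[int]]:
    """Ask an LLM to label each report's bug type, then group report indices by label."""
    clusters: dict[str, list[int]] = defaultdict(list)
    for idx, text in enumerate(reports):
        prompt = (
            "You are reviewing a crowdsourced mobile-app test report.\n"
            f"Report description:\n{text}\n"
            "Answer with a single short bug-type label (e.g. crash, UI layout, performance)."
        )
        bug_type = query_llm(prompt).strip().lower()
        clusters[bug_type].append(idx)
    return clusters


def recurrent_selection(clusters: dict[str, list[int]]) -> list[int]:
    """Round-robin over clusters so distinct bug types surface early in the ranking."""
    order: list[int] = []
    queues = [list(ids) for ids in clusters.values()]
    while any(queues):
        for queue in queues:
            if queue:
                order.append(queue.pop(0))
    return order
```

With a concrete LLM client supplied as query_llm, the index order returned by recurrent_selection can serve directly as the review order for the crowdsourced reports, so that the earliest-reviewed reports cover as many distinct bug types as possible.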
Pages: 13