Redefining crowdsourced test report prioritization: An innovative approach with large language model

Cited: 0
Authors
Ling, Yuchen [1 ]
Yu, Shengcheng [1 ]
Fang, Chunrong [1 ]
Pan, Guobin [2 ]
Wang, Jun [2 ]
Liu, Jia [1 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] China Mobile Suzhou Software Technol Co Ltd, Suzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Crowdsourced testing; Mobile app testing; Test report prioritization; Large language model; DUPLICATE;
DOI
10.1016/j.infsof.2024.107629
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Context: Crowdsourced testing has gained popularity in software testing, especially for mobile app testing, due to its ability to bring diversity and tackle fragmentation issues. However, the openness of crowdsourced testing presents challenges, particularly in the manual review of numerous test reports, which is time-consuming and labor-intensive.
Objective: The primary goal of this research is to improve the efficiency of review processes in crowdsourced testing. Traditional approaches to test report prioritization lack a deep understanding of semantic information in the textual descriptions of these reports. This paper introduces LLMPrior, a novel approach for prioritizing crowdsourced test reports using large language models (LLMs).
Method: LLMPrior leverages LLMs to analyze and cluster crowdsourced test reports based on the types of bugs revealed in their textual descriptions, using prompt engineering techniques to enhance LLM performance. After clustering, a recurrent selection algorithm is applied to prioritize the reports.
Results: Empirical experiments are conducted to evaluate the effectiveness of LLMPrior. The findings indicate that LLMPrior not only surpasses current state-of-the-art approaches in performance but also proves more feasible, efficient, and reliable. This success is attributed to the prompt engineering techniques and the cluster-based prioritization strategy.
Conclusion: LLMPrior represents a significant advancement in crowdsourced test report prioritization. By effectively utilizing large language models and a cluster-based strategy, it addresses the challenges of traditional prioritization approaches, offering a more efficient and reliable solution for app developers dealing with crowdsourced test reports.
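The abstract does not spell out the recurrent selection algorithm; the sketch below illustrates one plausible reading, under the assumption that it interleaves reports round-robin across the LLM-produced bug-type clusters so that diverse bug types surface early in the review queue. The cluster labels and report IDs are hypothetical.

```python
# Hypothetical sketch of cluster-based recurrent selection for
# crowdsourced test report prioritization. Reports are assumed to
# already be grouped into bug-type clusters (e.g. by an LLM); the
# algorithm repeatedly cycles over the clusters, taking one report
# from each non-empty cluster per pass, so early positions in the
# ranked list cover as many distinct bug types as possible.

def prioritize(clusters):
    """Interleave reports from each cluster into one ranked list.

    clusters: dict mapping a bug-type label to a list of report IDs;
    each list is assumed to be pre-ordered within its cluster.
    """
    queues = [list(reports) for reports in clusters.values()]
    ranked = []
    while any(queues):
        for queue in queues:
            if queue:  # take the next report from this bug type
                ranked.append(queue.pop(0))
    return ranked


if __name__ == "__main__":
    clusters = {
        "crash": ["r1", "r4"],
        "ui-layout": ["r2"],
        "performance": ["r3", "r5", "r6"],
    }
    print(prioritize(clusters))  # one report per bug type per round
```

One round-robin pass yields one report per bug type, so a reviewer inspecting the top of the list sees the broadest possible spread of distinct bugs before any type repeats.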
Pages: 13
Related Papers
39 records
  • [1] Crowdsourced Test Report Prioritization Based on Text Classification
    Yang, Yuxuan
    Chen, Xin
    IEEE ACCESS, 2022, 10 : 92692 - 92705
  • [2] Test Report Prioritization to Assist Crowdsourced Testing
    Feng, Yang
    Chen, Zhenyu
    Jones, James A.
    Fang, Chunrong
    Xu, Baowen
    2015 10TH JOINT MEETING OF THE EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND THE ACM SIGSOFT SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING (ESEC/FSE 2015) PROCEEDINGS, 2015, : 225 - 236
  • [3] Enhanced Crowdsourced Test Report Prioritization via Image-and-Text Semantic Understanding and Feature Integration
    Fang, Chunrong
    Yu, Shengcheng
    Zhang, Quanjun
    Li, Xin
    Liu, Yulei
    Chen, Zhenyu
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2025, 51 (01) : 283 - 304
  • [4] Crowdsourced test report prioritization considering bug severity
    Tong, Yao
    Zhang, Xiaofang
    INFORMATION AND SOFTWARE TECHNOLOGY, 2021, 139
  • [5] Advanced Crowdsourced Test Report Prioritization Based on Adaptive Strategy
    Zhu, Penghua
    Li, Ying
    Li, Tongyu
    Ren, Huimin
    Sun, Xiaolei
    IEEE ACCESS, 2022, 10 : 53522 - 53532
  • [6] An Effective Crowdsourced Test Report Clustering Model Based on Sentence Embedding
    Chen, Hao
    Huang, Song
    Liu, Yuchan
    Luo, Run
    Xie, Yifei
    2021 IEEE 21ST INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY (QRS 2021), 2021, : 888 - 899
  • [7] Research on Psychological Test based on Large Language Model
    Liu, Zhengzheng
    Li, Xinying
    Kang, Yunfeng
    2024 3RD INTERNATIONAL CONFERENCE ON ROBOTICS, ARTIFICIAL INTELLIGENCE AND INTELLIGENT CONTROL, RAIIC 2024, 2024, : 503 - 510
  • [8] Engaging Preference Optimization Alignment in Large Language Model for Continual Radiology Report Generation: A Hybrid Approach
    Izhar, Amaan
    Idris, Norisma
    Japar, Nurul
    COGNITIVE COMPUTATION, 2025, 17 (01)
  • [9] Adversarial Text Purification: A Large Language Model Approach for Defense
    Moraffah, Raha
    Khandelwal, Shubh
    Bhattacharjee, Amrita
    Liu, Huan
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PT V, PAKDD 2024, 2024, 14649 : 65 - 77
  • [10] A Large Language Model Approach to Educational Survey Feedback Analysis
    Parker, Michael J.
    Anderson, Caitlin
    Stone, Claire
    Oh, Yearim
    INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE IN EDUCATION, 2024,