STRE: An Automated Approach to Suggesting App Developers When to Stop Reading Reviews

Cited by: 2
Authors
Tan, Youshuai [1 ]
Chen, Jinfu [2 ]
Shang, Weiyi [3 ]
Zhang, Tao [1 ]
Fang, Sen [1 ]
Luo, Xiapu
Chen, Zijie [1 ]
Qi, Shuhao [4 ]
Affiliations
[1] Macau Univ Sci & Technol, Sch Comp Sci & Engn, Taipa 999078, Macau, Peoples R China
[2] Wuhan Univ, Sch Comp Sci, Wuhan 430072, Hubei, Peoples R China
[3] Concordia Univ, Dept Comp Sci & Software Engn, Montreal, PQ H3G 1M8, Canada
[4] Univ Manchester, Dept Comp Sci, Manchester M13 9PL, England
Funding
China Postdoctoral Science Foundation;
Keywords
Computer bugs; Maintenance engineering; Data models; Mobile applications; Libraries; Data mining; Computer science; App reviews; maintenance; mobile applications; SUPPORT;
DOI
10.1109/TSE.2023.3285743
CLC number
TP31 [Computer Software];
Discipline codes
081202; 0835;
Abstract
It is well known that user feedback (i.e., reviews) plays an essential role in mobile app maintenance. Users post their troubles, app issues, or praise to help developers refine their apps. However, reading a tremendous number of reviews to retrieve useful information is a challenging job. According to our manual study, reviews are full of repetitive opinions; thus, developers could stop reading reviews once no new helpful information appears. Developers can extract useful information from a subset of the reviews to improve their app and then release a new version. However, it is difficult to strike a good trade-off between obtaining enough useful feedback and saving time. In this paper, we propose a novel approach, named STRE, which utilizes historical reviews to suggest the time by which most of the useful information has appeared in the reviews of a given version. We evaluate STRE on 62 recent versions of five apps from Apple's App Store. The results demonstrate that our approach can save developers up to 98.33% of their time while retaining enough useful reviews before the suggested stopping time, so that developers do not spend additional time reading redundant reviews afterwards. At the same time, STRE can complement existing approaches that categorize reviews to further assist developers. In addition, we find that the missed top-word-related reviews appearing after the suggested stopping time contain limited useful information for developers. Finally, we find that 12 out of the 13 emerging bugs in the studied versions appear before the suggested stopping time. Our approach demonstrates the value of automatically refining information from reviews.
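The core intuition in the abstract (stop reading once incoming reviews contribute little new information) can be sketched with a simple vocabulary-novelty heuristic. This is an illustrative toy, not the STRE algorithm itself, which relies on historical reviews to predict the stopping time; the function name, window size, and novelty threshold below are all assumptions for illustration.

```python
# Illustrative sketch only; NOT the STRE algorithm from the paper.
# Idea: suggest stopping once several consecutive reviews each add
# almost no previously unseen vocabulary.

def suggest_stop_index(reviews, window=5, min_new_words=1):
    """Return the number of reviews worth reading: the first point where
    `window` consecutive reviews each contribute fewer than
    `min_new_words` previously unseen words."""
    seen = set()   # vocabulary observed so far
    stale = 0      # consecutive reviews with too little new vocabulary
    for i, review in enumerate(reviews):
        words = set(review.lower().split())
        new_words = words - seen
        seen |= words
        stale = stale + 1 if len(new_words) < min_new_words else 0
        if stale >= window:
            return i + 1  # stop after this review
    return len(reviews)   # stream never went stale: read everything
```

On a stream whose later reviews only repeat earlier vocabulary, the function returns an index well before the end of the list, mirroring the kind of time savings the paper reports for real review streams.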
Pages: 4135-4151
Page count: 17