A feature location approach for mapping application features extracted from crowd-based screencasts to source code

Authors
Parisa Moslehi
Bram Adams
Juergen Rilling
Affiliations
[1] Concordia University
[2] Queen’s University
Source
Empirical Software Engineering | 2020, Vol. 25
Keywords
Crowd-based documentation; Mining video content; Speech analysis; Feature location; Software traceability; Information extraction; Software documentation
DOI
Not available
Abstract
Crowd-based multimedia documents such as screencasts have emerged as a source for documenting the requirements, workflows, and implementation issues of open source and agile software projects. For example, users can show and narrate how they manipulate an application’s GUI to perform a certain functionality, or a bug reporter can visually explain how to trigger a bug or a security vulnerability. Unfortunately, the streaming nature of programming screencasts and their binary format limit how developers can interact with a screencast’s content. In this research, we present an automated approach for mining the multimedia content found in screencasts and linking it to relevant software artifacts and, more specifically, to source code. We apply LDA-based mining approaches that take as input a set of screencast artifacts, such as GUI text and spoken words, to make screencast content accessible and searchable to users and to link it to relevant source code artifacts. To evaluate the applicability of our approach, we report on results from case studies that we conducted on existing WordPress and Mozilla Firefox screencasts. We found that our automated approach can significantly speed up the feature location process. For WordPress, our approach using screencast speech and GUI text can successfully link relevant source code files within the top 10 hits of the result set, with median Reciprocal Rank (RR) values of 50% (rank 2) and 100% (rank 1). In the case of Firefox, our approach can identify relevant source code directories within the top 100 hits using screencast speech and GUI text with a median RR of 20%, meaning that the first true positive appears at rank 5 or better in more than 50% of the cases. Also, source code related to the frontend implementation, which handles high-level or GUI-related aspects of an application, is located with higher accuracy.
We also found that term frequency rebalancing can further improve the linking results for less noisy scenarios or for scenarios with less technical implementations. Comparing the original and weighted screencast data sources (speech, GUI, and speech combined with GUI) that yield the highest median RR values in both case studies shows that speech is an important information source, reaching an RR of 100%.
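The paper's pipeline links screencast text (speech transcripts and GUI terms) to source code with LDA topic models and evaluates the ranking with Reciprocal Rank. As a minimal, stdlib-only sketch of the underlying idea — not the authors' actual implementation — the snippet below ranks source files by cosine similarity between term-frequency vectors and computes RR for a query; all file names and terms are hypothetical.

```python
# Minimal sketch: text-retrieval-based feature location scored with
# Reciprocal Rank (RR). The paper uses LDA topic models; plain
# term-frequency cosine similarity stands in here for illustration.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two term-frequency vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_files(query_terms, files):
    # Rank candidate source files by similarity to the screencast query.
    q = Counter(query_terms)
    scored = [(name, cosine(q, Counter(terms))) for name, terms in files.items()]
    return [name for name, _ in sorted(scored, key=lambda x: x[1], reverse=True)]

def reciprocal_rank(ranking, relevant):
    # RR = 1 / rank of the first relevant item; 0 if none retrieved.
    for rank, name in enumerate(ranking, start=1):
        if name in relevant:
            return 1.0 / rank
    return 0.0

# Hypothetical screencast terms (speech + GUI text) and source files.
query = ["upload", "media", "image", "insert", "post"]
files = {
    "media-upload.php": ["upload", "media", "image", "file", "insert"],
    "post-editor.php":  ["post", "editor", "save", "draft"],
    "user-auth.php":    ["login", "password", "user", "session"],
}
ranking = rank_files(query, files)
rr = reciprocal_rank(ranking, {"media-upload.php"})
```

A median RR of 50% thus corresponds to the first true positive typically appearing at rank 2, and an RR of 20% to rank 5.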
Pages: 4873–4926 (53 pages)