Anchoring Code Understandability Evaluations Through Task Descriptions

Cited by: 0
Authors
Wyrich, Marvin [1 ]
Merz, Lasse [1 ]
Graziotin, Daniel [1 ]
Affiliations
[1] Univ Stuttgart, Stuttgart, Germany
Source
30TH IEEE/ACM INTERNATIONAL CONFERENCE ON PROGRAM COMPREHENSION (ICPC 2022) | 2022
Keywords
code comprehension; anchoring effect; empirical study design; software metrics; PLS-SEM;
DOI
10.1145/3524610.3527904
CLC Classification Number
TP31 [Computer Software];
Discipline Classification Code
081202 ; 0835 ;
Abstract
In code comprehension experiments, participants are usually told at the beginning what kind of code comprehension task to expect. Describing experiment scenarios and experimental tasks will influence participants in ways that are sometimes hard to predict and control. In particular, describing or even mentioning the difficulty of a code comprehension task might anchor participants and their perception of the task itself. In this study, we investigated in a randomized, controlled experiment with 256 participants (50 software professionals and 206 computer science students) whether a hint about the difficulty of the code to be understood in a task description anchors participants in their own code comprehensibility ratings. Subjective code evaluations are a commonly used measure of how well a developer in a code comprehension study understood the code. Accordingly, it is important to understand how robust these measures are to cognitive biases such as the anchoring effect. Our results show that participants are significantly influenced by the initial scenario description in their assessment of code comprehensibility. An initial hint of hard-to-understand code leads participants to assess the code as harder to understand than do participants who received no hint or a hint of easy-to-understand code. This affects students and professionals alike. We discuss examples of design decisions and contextual factors in the conduct of code comprehension experiments that can induce an anchoring effect, and recommend the use of more robust comprehension measures in code comprehension studies to enhance the validity of results.
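For intuition only, the comparison at the heart of such a study can be pictured as a between-subjects test of subjective comprehensibility ratings across the three anchor conditions (easy hint, no hint, hard hint). The Python sketch below uses entirely invented group sizes, means, and a simple one-way ANOVA; it is not the authors' analysis, which according to the keywords relied on PLS-SEM.

# Hypothetical sketch: comparing subjective comprehensibility ratings
# across three anchoring conditions (easy hint, no hint, hard hint).
# Group sizes, means, and the rating scale below are invented for
# illustration; the actual study analyzed 256 participants and did not
# use this simplified one-way test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Simulated 1-5 comprehensibility ratings per condition (assumed means)
easy_hint = np.clip(rng.normal(3.8, 0.8, 85).round(), 1, 5)
no_hint   = np.clip(rng.normal(3.5, 0.8, 85).round(), 1, 5)
hard_hint = np.clip(rng.normal(2.9, 0.8, 86).round(), 1, 5)

# One-way ANOVA across the three anchor conditions
f_stat, p_value = stats.f_oneway(easy_hint, no_hint, hard_hint)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparison of the two extreme conditions
t_stat, p_pair = stats.ttest_ind(easy_hint, hard_hint)
print(f"easy vs. hard hint: t = {t_stat:.2f}, p = {p_pair:.4f}")

Under the invented parameters above, the hard-hint group's mean rating is lower than the easy-hint group's, mirroring (but not reproducing) the direction of the effect the abstract reports.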
Pages: 133-140
Page count: 8