A Validated Scoring Rubric for Explain-in-Plain-English Questions

Cited by: 18
Authors
Chen, Binglin [1 ]
Azad, Sushmita [1 ]
Haldar, Rajarshi [1 ]
West, Matthew [1 ]
Zilles, Craig [1 ]
Affiliations
[1] University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
Source
SIGCSE 2020: Proceedings of the 51st ACM Technical Symposium on Computer Science Education | 2020
Keywords
code reading; CS1; experience report; reliability; validity; INSTRUCTION
DOI
10.1145/3328778.3366879
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Discipline classification codes
081203; 0835
Abstract
Previous research has identified the ability to read code and understand its high-level purpose as an important developmental skill that is harder to do (for a given piece of code) than executing the code in one's head for a given input ("code tracing"), but easier to do than writing the code. Prior work involving code reading ("Explain in plain English") problems has used a scoring rubric inspired by the SOLO taxonomy, but we found it difficult to employ because it did not adequately handle the three dimensions of answer quality: correctness, level of abstraction, and ambiguity. In this paper, we describe a 7-point rubric that we developed for scoring student responses to "Explain in plain English" questions, and we validate this rubric through four means. First, we find that the scale can be applied reliably, with a median Krippendorff's alpha (inter-rater reliability) of 0.775. Second, we report on an experiment to assess the validity of our scale. Third, we find that a survey consisting of 12 code reading questions had high internal consistency (Cronbach's alpha = 0.954). Last, we find that our scores for code reading questions in a large-enrollment (N = 452) data structures course are correlated (Pearson's R = 0.555) with code-writing performance to a similar degree as found in previous work.
Pages: 563-569
Page count: 7
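
To make the reliability statistics reported in the abstract concrete, below is a minimal Python sketch (not taken from the paper) of how Cronbach's alpha and Pearson's r might be computed for rubric scores. The score matrix, the 0-6 numeric coding of the 7-point rubric, and the code-writing scores are invented for illustration only; Krippendorff's alpha is typically computed with a dedicated library (e.g., the third-party krippendorff package for ordinal data) rather than by hand.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students, n_questions) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                       # number of questions
    item_vars = items.var(axis=0, ddof=1)    # per-question variance
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 students x 4 "Explain in plain English" questions,
# each scored on a 7-point rubric coded here as 0-6 (an assumption).
scores = np.array([
    [6, 5, 6, 5],
    [4, 4, 5, 3],
    [2, 3, 2, 2],
    [5, 6, 6, 6],
    [1, 2, 1, 0],
    [3, 3, 4, 4],
])

# Internal consistency across the reading questions
# (cf. the paper's reported Cronbach's alpha = 0.954).
print("Cronbach's alpha:", round(cronbach_alpha(scores), 3))

# Correlation between total reading score and a hypothetical
# code-writing score (cf. the paper's Pearson's R = 0.555).
writing = np.array([88, 70, 55, 95, 40, 72])
reading_total = scores.sum(axis=1)
r = np.corrcoef(reading_total, writing)[0, 1]
print("Pearson's r:", round(r, 3))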