Next-Generation Environments for Assessing and Promoting Complex Science Learning

Cited: 26
Authors
Quellmalz, Edys S. [1 ]
Davenport, Jodi L. [1 ]
Timms, Michael J. [2 ]
DeBoer, George E. [3 ]
Jordan, Kevin A. [1 ]
Livang, Chun-Wei [1 ]
Buckley, Barbara C. [1 ]
Affiliations
[1] WestEd, Redwood City, CA 94063 USA
[2] Australian Council Educ Res, Melbourne, Vic, Australia
[3] Amer Assoc Advancement Sci, Washington, DC USA
Keywords
educational assessment; science education; multimedia; psychometrics; technology enhanced assessment; INQUIRY-BASED SCIENCE; SIMULATIONS; VALIDATION; STANDARDS;
DOI
10.1037/a0032220
Chinese Library Classification
G44 [Educational Psychology];
Discipline Code
0402; 040202;
Abstract
How can assessments measure complex science learning? Although traditional multiple-choice items can effectively measure declarative knowledge such as scientific facts or definitions, they are considered less well suited for providing evidence of science inquiry practices such as making observations or designing and conducting investigations. Thus, students who perform very proficiently in "science" as measured by static, conventional tests may have strong factual knowledge but little ability to apply that knowledge to conduct meaningful investigations. As technology has advanced, interactive, simulation-based assessments promise to capture information about these more complex science practice skills. In the current study, we test whether interactive assessments are more effective than traditional, static assessments at discriminating student proficiency across 3 types of science practices: (a) identifying principles (e.g., recognizing principles), (b) using principles (e.g., applying knowledge to make predictions and generate explanations), and (c) conducting inquiry (e.g., designing experiments). We explore 3 modalities of assessment: static, most similar to traditional items, in which the system presents still images and does not respond to student actions; active, in which the system presents dynamic portrayals, such as animations, that students can observe and review; and interactive, in which the system depicts dynamic phenomena and responds to student actions. We use 3 analyses (a generalizability study, confirmatory factor analysis, and multidimensional item response theory) to evaluate how well each assessment modality distinguishes performance on these 3 types of science practices. The comparison of performance on static, active, and interactive items found that interactive assessments might be more effective than static assessments at discriminating student proficiencies for conducting inquiry.
Pages: 1100-1114
Page count: 15