Back to basics: Percentage agreement measures are adequate, but there are easier ways

Cited by: 60
Authors
BIRKIMER, JC
BROWN, JH
Affiliation
[1] University of Louisville, Louisville, Kentucky
Keywords
chance agreement; chance reliability; interobserver agreement; observational data; observational technology; percentage agreement; reliability
DOI
10.1901/jaba.1979.12-535
Chinese Library Classification
B849 [Applied Psychology]
Discipline code
040203
Abstract
Percentage agreement measures of interobserver agreement, or "reliability," have traditionally been used to summarize observer agreement in studies using interval-recording, time-sampling, and trial-scoring data-collection procedures. Recent articles disagree on whether to continue using these percentage agreement measures, which ones to use, and how to handle chance agreements if their use continues. Much of the disagreement stems from the need to be reasonably certain that we do not accept, as evidence of true interobserver agreement, agreement levels that are substantially probable as a result of chance observer agreement. The various percentage agreement measures are shown to be adequate to this task, but easier ways are discussed. Tables are provided for checking whether obtained disagreements are unlikely to be due to chance. Particularly important is the discovery of a simple rule that, when met, makes the tables unnecessary: if reliability checks using 50 or more observation occasions produce 10% or fewer disagreements, for behavior rates from 10% through 90%, the agreement achieved is quite improbably the result of chance agreement. © 1979 Society for the Experimental Analysis of Behavior
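The rule at the end of the abstract can be illustrated with a short calculation. The sketch below assumes a simple independence model (not necessarily the one behind the paper's tables): both observers independently score an occurrence at the behavior rate p, so a chance disagreement on any occasion has probability 2p(1-p), and the probability that chance alone produces at most k disagreements in n occasions is a binomial tail. The function names here are illustrative, not from the paper.

```python
from math import comb

def chance_disagreement_prob(p: float) -> float:
    # Under the independence assumption, two observers who each score
    # "occurrence" at rate p disagree on a given occasion with
    # probability p(1-p) + (1-p)p = 2p(1-p).
    return 2 * p * (1 - p)

def prob_at_most_k_disagreements(n: int, k: int, p: float) -> float:
    # Binomial CDF: probability that chance alone yields <= k
    # disagreements across n observation occasions.
    q = chance_disagreement_prob(p)
    return sum(comb(n, d) * q**d * (1 - q) ** (n - d) for d in range(k + 1))

# The abstract's rule: n >= 50 occasions and <= 10% disagreements
# (here k = 5), for behavior rates from 10% through 90%.
for rate in (0.1, 0.3, 0.5, 0.7, 0.9):
    prob = prob_at_most_k_disagreements(n=50, k=5, p=rate)
    print(f"rate={rate:.1f}: P(<=5 chance disagreements) = {prob:.4f}")
```

Under this model the chance-disagreement probability peaks at p = 0.5, where so few disagreements would be extremely improbable by chance; the paper's tables serve the same purpose without requiring the computation.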
Pages: 535–543 (9 pages)
References (10)
[1] Baer, D. M. (1977). Reviewer's comment: Just because it's reliable doesn't mean that you can use it. Journal of Applied Behavior Analysis, 10(1), 117–119.
[2] Baer, D. M. (1977). Perhaps it would be better not to know everything. Journal of Applied Behavior Analysis, 10(1), 167–172.
[3] Birkimer, J. C., & Brown, J. H. (1979). A graphical judgmental aid which summarizes obtained and chance reliability data and helps assess the believability of experimental effects. Journal of Applied Behavior Analysis, 12(4), 523–533.
[4] Hartmann, D. P. (1977). Considerations in the choice of interobserver reliability estimates. Journal of Applied Behavior Analysis, 10(1), 103–116.
[5] Hawkins, R. P. (1975). Behavior Anal Areas.
[6] Hopkins, B. L., & Hermann, J. A. (1977). Evaluating interobserver reliability of interval data. Journal of Applied Behavior Analysis, 10(1), 121–126.
[7] Kelly, M. B. (1977). A review of the observational data-collection and reliability procedures reported in the Journal of Applied Behavior Analysis. Journal of Applied Behavior Analysis, 10(1), 97–101.
[8] Kratochwill, T. R., & Wetzel, R. J. (1977). Observer agreement, credibility, and judgment: Some considerations in presenting observer agreement data. Journal of Applied Behavior Analysis, 10(1), 133–139.
[9] Siegel, S. (1956). Nonparametric Statistics for the Behavioral Sciences.
[10] Yelton, A. R., Wildman, B. G., & Erickson, M. T. (1977). A probability-based formula for calculating interobserver agreement. Journal of Applied Behavior Analysis, 10(1), 127–131.