Reliability of assessment of dogs' behavioural responses by staff working at a welfare charity in the UK

Cited: 40
Authors
Diesel, Gillian [1 ]
Brodbelt, David [1 ]
Pfeiffer, Dirk U. [1 ]
Affiliations
[1] Univ London Royal Vet Coll, N Mymms AL9 7TA, Herts, England
Keywords
Dog; Behavioural assessments; Reliability; Weighted kappa
DOI
10.1016/j.applanim.2008.05.005
Chinese Library Classification (CLC)
S8 [Animal husbandry; veterinary medicine; hunting; sericulture; apiculture]
Subject classification code
0905
Abstract
Behavioural problems in dogs are a major cause of their relinquishment and return to animal shelters and rehoming centres. It is important that the staff in these welfare centres can reliably assess the behavioural characteristics of the dogs so that they can be rehomed to the most appropriate environment and the owners can be made aware of what to expect. There have been very few studies which have assessed the reliability of the assessment of dogs' behavioural responses; therefore this study was conducted to assess the inter-rater reproducibility and intra-rater repeatability of the assessment of behavioural responses of dogs by staff at a UK dog welfare charity. Forty members of staff at Dogs Trust, a UK dog welfare charity, were asked to assess 20 dogs based on the dogs' behaviour recorded on video. They were then asked to repeat these assessments 2 months after they had completed the first assessments using the same video. The data were then analysed for inter-rater reproducibility and intra-rater repeatability using the weighted kappa statistic. It was found that there was a moderate level of agreement between staff in their assessment of the dogs' behavioural responses to a person 'approaching their kennel' and a poor to moderate level of agreement for behavioural responses to 'general handling and grooming'. Additionally, there was only a low level of agreement for many of the responses to a dog 'meeting another dog', and this may have been due to the small number of dogs that could be assessed in this situation. The inter-rater reproducibility was greater when the analysis was restricted to those members of staff who had formal training or more than 8 years of experience. The intra-rater repeatability was generally moderate to high in all cases, indicating good consistency by members of staff over time. The current study shows that there is an overall moderate level of agreement amongst staff, but as they gain experience and training their level of agreement tends to increase. This study also highlighted the need for the standardisation of the assessments currently used. It is important that staff are able to consistently identify behavioural characteristics of dogs in order to provide the best chance of a successful adoption. (C) 2008 Elsevier B.V. All rights reserved.
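The analysis described above relies on the weighted kappa statistic, which measures agreement between two sets of ordinal ratings while correcting for chance and penalising disagreements in proportion to their distance on the rating scale. The sketch below is illustrative only and is not taken from the paper: the rating scale, the scores for the 20 dogs, and the rater names are assumptions made to show how such a coefficient can be computed with scikit-learn.

```python
# Minimal sketch (illustrative, not data from the study): weighted kappa
# for two raters scoring the same 20 dogs on an assumed ordinal scale
# (e.g. 1 = relaxed ... 4 = aggressive).
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores given by two staff members to the same 20 dogs
rater_a = [1, 2, 2, 3, 1, 4, 2, 1, 3, 2, 2, 1, 4, 3, 2, 1, 2, 3, 1, 2]
rater_b = [1, 2, 3, 3, 1, 4, 2, 2, 3, 2, 1, 1, 4, 3, 2, 1, 2, 2, 1, 2]

# Linear weights penalise disagreements in proportion to their distance
# on the ordinal scale; 'quadratic' is another common choice.
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"Weighted kappa: {kappa:.2f}")
```

One common reading of kappa values (the Landis and Koch benchmarks) treats roughly 0.21-0.40 as fair, 0.41-0.60 as moderate, and values above 0.60 as substantial agreement, which corresponds to the qualitative labels ('poor', 'moderate', 'high') used in the abstract.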
Pages: 171-181
Page count: 11