Inter-rater reliability of kinesthetic measurements with the KINARM robotic exoskeleton

Times cited: 12
Authors
Semrau, Jennifer A. [1 ,2 ,5 ]
Herter, Troy M. [3 ]
Scott, Stephen H. [4 ]
Dukelow, Sean P. [1 ,2 ]
Affiliations
[1] Univ Calgary, Hotchkiss Brain Inst, Calgary, AB, Canada
[2] Univ Calgary, Dept Clin Neurosci, Calgary, AB, Canada
[3] Univ South Carolina, Dept Exercise Sci, Columbia, SC 29208 USA
[4] Queens Univ, Dept Anat & Cell Biol, Kingston, ON, Canada
[5] Foothills Med Ctr, South Tower Room 905,1403 29th St NW, Calgary, AB T2N 2T9, Canada
Funding
Canadian Institutes of Health Research;
Keywords
Proprioception; Kinesthesia; Stroke; Robotics; Sensorimotor; Inter-rater reliability; TEST-RETEST RELIABILITY; LIMB POSITION SENSE; MOTOR RECOVERY; STROKE; DEFICITS; IMPAIRMENT; CARE;
DOI
10.1186/s12984-017-0260-z
Chinese Library Classification (CLC): R318 [Biomedical Engineering]
Discipline code: 0831
Abstract
Background: Kinesthesia (the sense of limb movement) has been extremely difficult to measure objectively, especially in individuals who have survived a stroke. The development of valid and reliable measurements of proprioception is important for better understanding proprioceptive impairments after stroke and their impact on the ability to perform daily activities. We recently developed a robotic task to evaluate kinesthetic deficits after stroke and found that the majority (~60%) of stroke survivors exhibit significant deficits in kinesthesia within the first 10 days post-stroke. Here we aim to determine the inter-rater reliability of this robotic kinesthetic matching task.
Methods: Twenty-five neurologically intact control subjects and 15 individuals with first-time stroke were evaluated on a robotic kinesthetic matching task (KIN). Subjects sat in a robotic exoskeleton with their arms supported against gravity. In the KIN task, the robot moved the subject's stroke-affected arm at a preset speed, direction and distance. As soon as subjects felt the robot begin to move their affected arm, they matched the robot movement with the unaffected arm. Subjects were tested on the KIN task in two sessions: an initial session and a second session (for stroke subjects, on average 18.2 ± 13.8 h after the initial session), each supervised by a different technician. The task was performed both with and without vision in both sessions. We evaluated intra-class correlations of spatial and temporal parameters derived from the KIN task to determine the reliability of the robotic task.
Results: We evaluated 8 spatial and temporal parameters that quantify kinesthetic behavior. The parameters exhibited moderate to high intra-class correlations between the initial and retest sessions (range of r-values: 0.53-0.97).
Conclusions: The robotic KIN task exhibited good inter-rater reliability. This validates the KIN task as a reliable, objective method for quantifying kinesthesia after stroke.
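The reliability analysis described above rests on intra-class correlations between two independently supervised sessions. As a rough illustration only (not the authors' analysis code), the Python sketch below computes a two-way random-effects, absolute-agreement, single-measure ICC(2,1) for one hypothetical KIN parameter measured in two sessions; the icc_2_1 function name, the sessions array and all subject values are made-up placeholders.

import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    # Two-way random-effects, absolute-agreement, single-measure ICC(2,1)
    # (Shrout & Fleiss). ratings has shape (n_subjects, k_sessions).
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-session means

    # Sums of squares for a two-way ANOVA without replication
    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((ratings - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical data: one KIN parameter for 6 subjects, two sessions
sessions = np.array([
    [0.52, 0.55],
    [0.31, 0.35],
    [0.78, 0.74],
    [0.44, 0.49],
    [0.60, 0.58],
    [0.25, 0.28],
])
print(f"ICC(2,1) = {icc_2_1(sessions):.2f}")

Values near 1 indicate that the two technicians' sessions rank and scale subjects consistently, which is the sense in which the paper reports moderate to high reliability.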
Pages: 9