Mann Kerry J, O'Dwyer Nicholas, Bruton Michaela R, Bird Stephen P, Edwards Suzi
School of Allied Health, Exercise and Sports Science, Charles Sturt University.
The Discipline of Exercise Science, The University of Sydney.
Int J Sports Phys Ther. 2022 Jun 1;17(4):593-604. doi: 10.26603/001c.35666. eCollection 2022.
Movement competency screens (MCSs) are commonly used by coaches and clinicians to assess injury risk. However, there is conflicting evidence regarding MCS reliability.
This study aimed to: (i) determine the inter- and intra-rater reliability of a sport-specific, field-based MCS in novice and expert raters using different viewing methods (single and multiple views); and (ii) ascertain whether repeated exposure produced familiarization effects in either raters or participants.
Descriptive laboratory study.
Pre-elite youth athletes (n=51) were recruited and videotaped while performing an MCS comprising nine dynamic movements in three separate trials. Performances were rated three times, with a minimum four-week washout between rating sessions, each time in randomized order, by 12 raters (3 expert, 9 novice) using a three-point scale. Kappa scores, percentage agreement, and intraclass correlation coefficients were calculated for each movement individually and for the composite score.
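As a minimal sketch of two of the reliability statistics named above, the following computes percentage agreement and Cohen's kappa for two hypothetical raters scoring nine movements on a three-point scale; the ratings are illustrative values, not study data, and this is not the authors' analysis code.

```python
# Hypothetical ratings for nine movements on a 0-2 scale (illustrative only).
from collections import Counter

rater_a = [2, 1, 0, 2, 2, 1, 0, 1, 2]
rater_b = [2, 1, 1, 2, 2, 0, 0, 1, 2]

n = len(rater_a)

# Percentage agreement: proportion of movements scored identically.
p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement expected from each rater's marginal score frequencies.
freq_a = Counter(rater_a)
freq_b = Counter(rater_b)
p_exp = sum(freq_a[s] * freq_b[s] for s in (0, 1, 2)) / n**2

# Cohen's kappa: observed agreement corrected for chance.
kappa = (p_obs - p_exp) / (1 - p_exp)
print(f"agreement={p_obs:.2f}, kappa={kappa:.2f}")
```

Kappa corrects raw agreement for the agreement expected by chance, which is why a pair of raters can show high percentage agreement yet only moderate kappa when one score category dominates.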
Fifty-one pre-elite youth athletes (15.0±1.6 years; n=33 athletics, n=10 BMX, and n=8 surfing) were included in the study. Based on kappa scores and percentage agreement, both inter- and intra-rater reliability were highly variable for individual movements but consistently high (>0.70) for the MCS composite score. The composite score did not increase with task familiarization by the athletes. Experts detected more movement errors than novices, and both rating groups improved their detection of errors with repeated viewings of the same movement.
Irrespective of experience, raters demonstrated high variability when rating single movements, yet preliminary evidence suggests the MCS composite score could reliably assess movement competency. While athletes showed no familiarization effect across repeated performances of the novel MCS tasks, raters showed improved error detection with repeated viewing of the same movement.
Cohort study.