Modified tuck jump assessment: reliability and training of raters.

Author: Smith, Craig A.
Position: Letter to the editor
 

Dear Editor-in-Chief,

We are writing with regard to "Intra- and inter-rater reliability of the modified tuck jump assessment" by Fort-Vanmeerhaeghe et al. (2017), published in the Journal of Sports Science & Medicine. The authors reported on the reliability of the modified Tuck Jump Assessment (TJA). The purpose of the article was twofold: to introduce a new scoring methodology and to report the interrater and intrarater reliability of the modified TJA. The authors found the modified TJA to have excellent interrater reliability (ICC = 0.94, 95% CI = 0.88-0.97) and intrarater reliability (rater 1 ICC = 0.94, 95% CI = 0.88-0.9; rater 2 ICC = 0.96, 95% CI = 0.92-0.98) with experienced raters (n = 2) in a sample of 24 elite volleyball athletes. Overall, we found the study to be well conducted and valuable to the field of injury screening; however, the study did not adequately explain how the raters were trained in the modified TJA to improve scoring consistency, or how the individual flaw "excessive contact noise at landing" was modified. This information is necessary to improve the clinical utility of the TJA and to direct future reliability studies.

The TJA has been changed at least three times in the literature: from its initial introduction (Myer et al., 2006), to the most referenced and detailed protocol (Myer et al., 2011), to the publication under discussion (Fort-Vanmeerhaeghe et al., 2017). The initial test protocol was based on clinical expertise and has evolved over time as new research emerged and problems arose with the original TJA. The TJA was initially scored on a visual analog scale (Myer et al., 2006), then changed to a dichotomous scale (0 for no flaw, 1 for flaw present) (Myer et al., 2011), and most recently modified to an ordinal scale (Fort-Vanmeerhaeghe et al., 2017). A significant disparity in reported interrater and intrarater reliability arose with the dichotomously scored TJA between researchers involved in the development of the TJA (Herrington et al., 2013) and those who were not (Dudley et al., 2013). Dudley et al. (2013) noted a lack of clarity in the protocol and rater training in the dichotomous TJA description (Myer et al., 2011), and these limitations may have contributed to the poor to moderate reliability found in their study of raters with varied educational backgrounds. Possibly in reference to the issues raised by Dudley et al. (2013), Fort-Vanmeerhaeghe et al. (2017) suggested that...
