
Table 3 Interrater reliability for experienced raters (> 20 years of clinical experience) and novice raters (≤ 4 years of clinical experience)

From: Visual assessment of movement quality: a study on intra- and interrater reliability of a multi-segmental single leg squat test

| Raters | PA^a | Kappa^b (95% CI) | Kappa_max^c | PI^d | BI^e | PABAK^f (95% CI) |
|---|---|---|---|---|---|---|
| Experienced: Rater 1 vs. Rater 2 |  |  |  |  |  |  |
| Foot | 1.0 | 1.00 (1.00–1.00) | 1.0 | 0.91 | 0 | 1.00 (1.00–1.00) |
| Knee | 0.71 | 0.42 (0.21–0.64) | 0.73 | 0.09 | −0.14 | 0.42 (0.19–0.64) |
| Pelvis | 0.77 | 0.44 (0.22–0.66) | 0.52 | 0.46 | −0.20 | 0.54 (0.33–0.75) |
| Trunk | 0.86 | 0.63 (0.40–0.85) | 0.71 | 0.52 | −0.11 | 0.72 (0.55–0.90) |
| All segments^g | 0.84 | 0.57 (0.46–0.68) | 0.71 | 0.50 | −0.11 | 0.67 (0.58–0.76) |
| Novice: Rater 3 vs. Rater 4 |  |  |  |  |  |  |
| Foot | 0.99 | 0.66 (0.02–1.00) | 0.66 | 0.95 | 0.02 | 0.97 (0.91–1.00) |
| Knee | 0.88 | 0.41 (0.10–0.72) | 0.88 | 0.69 | −0.03 | 0.69 (0.51–0.87) |
| Pelvis | 0.88 | 0.44 (0.12–0.76) | 0.58 | 0.75 | 0.09 | 0.75 (0.60–0.92) |
| Trunk | 0.89 | 0.68 (0.46–0.90) | 0.68 | 0.58 | 0.11 | 0.79 (0.63–0.94) |
| All segments^g | 0.90 | 0.55 (0.40–0.70) | 0.79 | 0.75 | 0.05 | 0.80 (0.73–0.87) |

| All raters | PA^a | Generalised kappa^h (95% CI) | PABAK^f (95% CI) |
|---|---|---|---|
| Rater 1–4: All segments^g | 0.85 | 0.52 (0.43–0.61) | 0.70 (0.65–0.76) |

^a PA: Percent agreement
^b Kappa: Cohen's kappa, calculated as \( \kappa =\frac{P_o-P_c}{1-P_c} \), where \( P_o \) (observed agreement) \( =\frac{a+d}{n} \) and \( P_c \) (chance agreement) \( =\frac{\left(\frac{f_1\times g_1}{n}\right)+\left(\frac{f_2\times g_2}{n}\right)}{n} \); a computational sketch follows these notes
^c Kappa_max: calculated by taking the proportions of positive and negative judgements by each rater (i.e. the marginal totals) as fixed and adjusting the distribution of paired ratings (i.e. the cell frequencies a, b, c and d) to represent the greatest possible agreement. That is, the maximum possible agreement for either presence or absence of the disease is the smaller of the two corresponding marginal totals [39]
^d PI: Prevalence index, calculated as \( PI=\frac{a-d}{n} \)
^e BI: Bias index, calculated as \( BI=\frac{b-c}{n} \)
^f PABAK: Prevalence-adjusted bias-adjusted kappa, calculated as \( PABAK=2P_o-1 \)
^g All segments: denotes a pooled kappa coefficient for the interrater reliability of all four segments combined (foot, knee, pelvis and trunk)
^h Generalised kappa: a generalisation of Scott's pi presented by Fleiss to calculate interrater reliability among multiple raters [40, 42]; illustrated in the second sketch below
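
All six indices in notes a–f follow from the four cell counts a, b, c and d of the 2×2 contingency table and their marginal totals. The following is a minimal Python sketch of those formulas, assuming presence/absence judgements by two raters; it is an illustration, not code from the study, and the function name and example cell counts are hypothetical.

```python
def agreement_indices(a: int, b: int, c: int, d: int) -> dict[str, float]:
    """Agreement indices for a 2x2 contingency table:

                        Rater 2: present   Rater 2: absent
    Rater 1: present           a                  b
    Rater 1: absent            c                  d
    """
    n = a + b + c + d
    f1, f2 = a + b, c + d          # marginal totals for rater 1
    g1, g2 = a + c, b + d          # marginal totals for rater 2

    p_o = (a + d) / n                        # observed agreement (PA)
    p_c = (f1 * g1 + f2 * g2) / n ** 2       # chance agreement
    kappa = (p_o - p_c) / (1 - p_c)          # Cohen's kappa

    # Kappa_max: marginals held fixed, cells rearranged for maximal agreement,
    # so the attainable agreement per category is the smaller marginal total.
    p_max = (min(f1, g1) + min(f2, g2)) / n
    kappa_max = (p_max - p_c) / (1 - p_c)

    return {
        "PA": p_o,
        "kappa": kappa,
        "kappa_max": kappa_max,
        "PI": (a - d) / n,       # prevalence index
        "BI": (b - c) / n,       # bias index
        "PABAK": 2 * p_o - 1,    # prevalence-adjusted bias-adjusted kappa
    }

# Hypothetical cell counts, for illustration only (not taken from the study):
print(agreement_indices(a=40, b=6, c=3, d=21))
```

For those counts the sketch gives PA ≈ 0.87 and PABAK = 2 × 0.87 − 1 ≈ 0.74, matching the identity in note f.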
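
Note h's generalised kappa extends agreement beyond rater pairs. Below is a minimal sketch of Fleiss's formulation for m raters and k categories, again an illustration rather than the study's own code, with a hypothetical ratings matrix.

```python
def fleiss_kappa(ratings: list[list[int]]) -> float:
    """Fleiss' generalised kappa.

    ratings[i][j] = number of raters assigning subject i to category j;
    each row must sum to the same number of raters m.
    """
    n_subjects = len(ratings)
    m = sum(ratings[0])                      # raters per subject

    # Mean observed agreement across subjects
    p_bar = sum(
        (sum(c * c for c in row) - m) / (m * (m - 1)) for row in ratings
    ) / n_subjects

    # Chance agreement from overall category proportions
    k = len(ratings[0])
    p_j = [sum(row[j] for row in ratings) / (n_subjects * m) for j in range(k)]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 5 squats each rated by 4 raters as deviation
# present/absent (counts per category), for illustration only:
print(fleiss_kappa([[4, 0], [3, 1], [2, 2], [4, 0], [0, 4]]))  # ~0.49
```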