Intraclass Correlation Coefficient

Please review the following output. The analyses can be run in SPSS, but the results still need to be interpreted with respect to kappa, weighted kappa, and the ICC (including the choice between consistency and agreement).

  • FIM_TOTAL1 and FIM_TOTAL1R
Correlations
                                     FIM_TOTAL1   FIM_TOTAL1R
FIM_TOTAL1     Pearson Correlation   1            .912**
               Sig. (2-tailed)                    .000
               N                     209          209
FIM_TOTAL1R    Pearson Correlation   .912**       1
               Sig. (2-tailed)       .000
               N                     209          209
**. Correlation is significant at the 0.01 level (2-tailed).

There is a significant, strong positive correlation between FIM_TOTAL1 and FIM_TOTAL1R (r = .912, p < .001).

  • RAI_TOTAL1 and RAI_TOTAL1R
Correlations
                                     RAI_TOTAL1   RAI_TOTAL1R
RAI_TOTAL1     Pearson Correlation   1            .935**
               Sig. (2-tailed)                    .000
               N                     209          209
RAI_TOTAL1R    Pearson Correlation   .935**       1
               Sig. (2-tailed)       .000
               N                     209          209
**. Correlation is significant at the 0.01 level (2-tailed).

There is a significant, strong positive correlation between RAI_TOTAL1 and RAI_TOTAL1R (r = .935, p < .001).

SAMP WEEK 6

Calculate the Intraclass Correlation Coefficient (ICC), Kappa, and Weighted Kappa:

Calculate the ICC for the following pairs of measures (In SPSS, use: Analyze > Scale > Reliability Analysis):

  • FIM_TOTAL1 and FIM_TOTAL1R
  • RAI_TOTAL1 and RAI_TOTAL1R
  • SRH1 and SRH1R
  • QoL1 and QoL1R
  • In these calculations, consider whether you are interested in “consistency” or “agreement” as discussed in the course text (Streiner, Norman and Cairney, p. 167-169), and whether you are considering the times of the ratings (time 1 and time 2) as fixed or random factors.
  • Compare the ICC values with the Pearson’s correlations obtained in Week 1; discuss how you would interpret the differences in the coefficients.
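
For the consistency-versus-agreement choice, a small simulation can make the difference concrete. This Python sketch uses simulated data (not the course dataset): a constant shift between time 1 and time 2 leaves Pearson’s r and the consistency ICC essentially unchanged, but lowers the absolute-agreement ICC.

```python
import numpy as np

# Simulated test-retest data (NOT the course dataset): time 2 preserves the
# ranking of time 1 but adds noise and a systematic +5 shift.
rng = np.random.default_rng(42)
t1 = rng.normal(50, 10, 209)
t2 = t1 + rng.normal(0, 3, 209) + 5.0

# Two-way ANOVA mean squares for an n-subjects x k-measures layout.
x = np.column_stack([t1, t2])
n, k = x.shape
grand, rows, cols = x.mean(), x.mean(axis=1), x.mean(axis=0)
msr = k * ((rows - grand) ** 2).sum() / (n - 1)          # between subjects
msc = n * ((cols - grand) ** 2).sum() / (k - 1)          # between measures
mse = ((x - rows[:, None] - cols[None, :] + grand) ** 2).sum() / ((n - 1) * (k - 1))

pearson = np.corrcoef(t1, t2)[0, 1]
icc_consistency = (msr - mse) / (msr + (k - 1) * mse)                       # ICC(C,1)
icc_agreement = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)   # ICC(A,1)
# Only the agreement form is penalized by the +5 shift:
# icc_agreement < icc_consistency ~= pearson
```

The ICC(C,1)/ICC(A,1) formulas follow the McGraw and Wong definitions that SPSS uses for its two-way models.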

Calculate Kappa:

  • In SPSS, for Kappa, use: Analyze > Crosstabs
  • Take the admission and discharge scores for two individual items from the RAI or FIM scales (e.g., RAI_WALK1 and RAI_WALK2) and calculate kappa.
  • Calculate kappa for SRH1 and SRH1R
  • Use quadratic weights for weighted kappa. Briefly discuss differences in results between weighted and unweighted kappa.

Calculate Weighted Kappa:

  • Calculate weighted kappa for the same items used above (RAI and/or FIM items, SRH)
  • In SPSS, use Analyze > Crosstabs to create the table
  • Using the values from the table, and using a quadratic weighting scheme, calculate weighted kappa using the utility in the VassarStats website.


Solution 

Part 1 calculations

  • FIM_TOTAL1 and FIM_TOTAL1R
Correlations
                                     FIM_TOTAL1   FIM_TOTAL1R
FIM_TOTAL1     Pearson Correlation   1            .912**
               Sig. (2-tailed)                    .000
               N                     209          209
FIM_TOTAL1R    Pearson Correlation   .912**       1
               Sig. (2-tailed)       .000
               N                     209          209
**. Correlation is significant at the 0.01 level (2-tailed).

There is a significant, strong positive correlation between FIM_TOTAL1 and FIM_TOTAL1R (r = .912, p < .001).

Intraclass Correlation Coefficient
                     Intraclass     95% Confidence Interval    F Test with True Value 0
                     Correlationb   Lower Bound   Upper Bound  Value    df1   df2   Sig
Single Measures      .909a          .882          .930         21.145   208   208   .000
Average Measures     .952c          .938          .964         21.145   208   208   .000
Two-way mixed effects model where people effects are random and measures effects are fixed.
a. The estimator is the same, whether the interaction effect is present or not.
b. Type A intraclass correlation coefficients using an absolute agreement definition.
c. This estimate is computed assuming the interaction effect is absent, because it is not estimable otherwise.

A two-way mixed-effects model was used because the subjects are a random sample, while the two ratings are fixed.

The absolute-agreement type was chosen because systematic differences between the ratings are relevant here.

The Pearson correlation (.912) is very close to the single-measures ICC (.909), which is statistically significant and indicates excellent reliability, being well above 0.75.

  • RAI_TOTAL1 and RAI_TOTAL1R
Correlations
                                     RAI_TOTAL1   RAI_TOTAL1R
RAI_TOTAL1     Pearson Correlation   1            .935**
               Sig. (2-tailed)                    .000
               N                     209          209
RAI_TOTAL1R    Pearson Correlation   .935**       1
               Sig. (2-tailed)       .000
               N                     209          209
**. Correlation is significant at the 0.01 level (2-tailed).

There is a significant, strong positive correlation between RAI_TOTAL1 and RAI_TOTAL1R (r = .935, p < .001).

Intraclass Correlation Coefficient
                     Intraclass     95% Confidence Interval    F Test with True Value 0
                     Correlationb   Lower Bound   Upper Bound  Value    df1   df2   Sig
Single Measures      .930a          .903          .949         29.764   208   208   .000
Average Measures     .964c          .949          .974         29.764   208   208   .000
Two-way mixed effects model where people effects are random and measures effects are fixed.
a. The estimator is the same, whether the interaction effect is present or not.
b. Type A intraclass correlation coefficients using an absolute agreement definition.
c. This estimate is computed assuming the interaction effect is absent, because it is not estimable otherwise.

A two-way mixed-effects model was used because the subjects are a random sample, while the two ratings are fixed.

The absolute-agreement type was chosen because systematic differences between the ratings are relevant here.

The Pearson correlation (.935) is very close to the single-measures ICC (.930), which is statistically significant and indicates excellent reliability, being well above 0.75.

(The single-measures ICC is the reliability of a single rating; the average-measures ICC is higher because it is the reliability of the average of the ratings.)
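
The two rows of each ICC table are linked by the Spearman-Brown step-up formula: the average-measures ICC is the single-measures ICC "stepped up" to k = 2 ratings. A quick Python check against the rounded figures from the tables above (agreement is only to rounding error, since SPSS reports three decimals):

```python
def average_measures_icc(single_icc, k=2):
    """Spearman-Brown step-up: reliability of the mean of k ratings,
    given the single-measures ICC."""
    return k * single_icc / (1 + (k - 1) * single_icc)

# Single-measures ICCs from the tables above -> reported average-measures values
for name, single, average in [("FIM", 0.909, 0.952), ("RAI", 0.930, 0.964),
                              ("SRH", 0.834, 0.910), ("QoL", 0.890, 0.942)]:
    print(name, round(average_measures_icc(single), 3), "reported:", average)
```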

 

  • SRH1 and SRH1R

Correlations
                              SRH1      SRH1R
SRH1     Pearson Correlation  1         .837**
         Sig. (2-tailed)                .000
         N                    209       209
SRH1R    Pearson Correlation  .837**    1
         Sig. (2-tailed)      .000
         N                    209       209
**. Correlation is significant at the 0.01 level (2-tailed).

There is a significant, strong positive correlation between SRH1 and SRH1R (r = .837, p < .001).

Intraclass Correlation Coefficient
                     Intraclass     95% Confidence Interval    F Test with True Value 0
                     Correlationb   Lower Bound   Upper Bound  Value    df1   df2   Sig
Single Measures      .834a          .788          .871         11.147   208   208   .000
Average Measures     .910c          .881          .931         11.147   208   208   .000
Two-way mixed effects model where people effects are random and measures effects are fixed.
a. The estimator is the same, whether the interaction effect is present or not.
b. Type A intraclass correlation coefficients using an absolute agreement definition.
c. This estimate is computed assuming the interaction effect is absent, because it is not estimable otherwise.

A two-way mixed-effects model was used because the subjects are a random sample, while the two ratings are fixed.

The absolute-agreement type was chosen because systematic differences between the ratings are relevant here.

The Pearson correlation (.837) is very close to the single-measures ICC (.834), which is statistically significant and indicates excellent reliability, being well above 0.75.

  • QoL1 and QoL1R

Correlations
                              QoL1      QoL1R
QoL1     Pearson Correlation  1         .905**
         Sig. (2-tailed)                .000
         N                    209       209
QoL1R    Pearson Correlation  .905**    1
         Sig. (2-tailed)      .000
         N                    209       209
**. Correlation is significant at the 0.01 level (2-tailed).

There is a significant, strong positive correlation between QoL1 and QoL1R (r = .905, p < .001).

Intraclass Correlation Coefficient
                     Intraclass     95% Confidence Interval    F Test with True Value 0
                     Correlationb   Lower Bound   Upper Bound  Value    df1   df2   Sig
Single Measures      .890a          .847          .919         18.495   208   208   .000
Average Measures     .942c          .917          .958         18.495   208   208   .000
Two-way mixed effects model where people effects are random and measures effects are fixed.
a. The estimator is the same, whether the interaction effect is present or not.
b. Type A intraclass correlation coefficients using an absolute agreement definition.
c. This estimate is computed assuming the interaction effect is absent, because it is not estimable otherwise.

A two-way mixed-effects model was used because the subjects are a random sample, while the two ratings are fixed.

The absolute-agreement type was chosen because systematic differences between the ratings are relevant here.

The Pearson correlation (.905) is close to the single-measures ICC (.890), which is statistically significant and indicates excellent reliability, being well above 0.75.

RAI_WALK1 * RAI_WALK2 Crosstabulation
Count
                         RAI_WALK2                                  Total
                0      1      2      3      4      5      6
RAI_WALK1   0   25     0      0      1      0      0      0     26
            1   6      1      0      0      0      0      0     7
            2   67     9      7      0      1      0      1     85
            3   8      5      2      0      0      0      0     15
            4   23     4      4      0      1      0      1     33
            5   9      5      6      2      0      3      0     25
            6   7      2      4      1      2      0      2     18
Total           145    26     23     4      4      3      4     209
Symmetric Measures
                                 Value   Asymptotic        Approximate Tb   Approximate
                                         Standard Errora                    Significance
Measure of Agreement   Kappa     .051    .022              2.418            .016
N of Valid Cases                 209
a. Not assuming the null hypothesis.
b. Using the asymptotic standard error assuming the null hypothesis.

The unweighted kappa is 0.051, which indicates poor agreement, although it is statistically significant (p = .016).

Kappa with quadratic weighting, calculated with the attached VassarStats page, is 0.1852.

Weighted kappa is meaningful only if the categories are ordinal and the weights ascribed to the categories faithfully reflect the reality of the situation.

Whereas unweighted kappa does not distinguish among degrees of disagreement, weighted kappa incorporates the magnitude of each disagreement and gives partial credit when agreement is not complete.

SRH1 * SRH1R Crosstabulation
Count
                    SRH1R                          Total
              1     2     3     4     5
SRH1    1     8     11    2     0     0      21
        2     3     40    9     0     0      52
        3     1     8     33    15    1      58
        4     0     1     8     31    9      49
        5     0     0     0     12    17     29
Total         12    60    52    58    27     209
Symmetric Measures
                                 Value   Asymptotic        Approximate Tb   Approximate
                                         Standard Errora                    Significance
Measure of Agreement   Kappa     .503    .043              13.679           .000
N of Valid Cases                 209
a. Not assuming the null hypothesis.
b. Using the asymptotic standard error assuming the null hypothesis.

The unweighted kappa is 0.503, indicating moderate agreement, and is highly significant (p < .001).

Kappa with Quadratic Weighting is 0.8338, which is very good.
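
As a cross-check on the VassarStats utility, both kappas can be computed directly from the crosstabulation. This Python sketch implements Cohen's (un)weighted kappa from a square contingency table and reproduces the values reported above from the SRH1 * SRH1R table:

```python
import numpy as np

def kappa_from_table(table, weights=None):
    """Cohen's kappa from a square contingency table (rows = rating 1,
    columns = rating 2). weights='quadratic' gives quadratic-weighted kappa."""
    t = np.asarray(table, dtype=float)
    m = t.shape[0]
    n = t.sum()
    i, j = np.indices((m, m))
    if weights == "quadratic":
        w = 1.0 - (i - j) ** 2 / (m - 1) ** 2   # partial credit off the diagonal
    else:
        w = (i == j).astype(float)              # unweighted: exact matches only
    p_obs = (w * t / n).sum()
    p_exp = (w * np.outer(t.sum(axis=1), t.sum(axis=0)) / n**2).sum()
    return (p_obs - p_exp) / (1.0 - p_exp)

# SRH1 * SRH1R crosstabulation from above
srh = [[8, 11, 2, 0, 0],
       [3, 40, 9, 0, 0],
       [1, 8, 33, 15, 1],
       [0, 1, 8, 31, 9],
       [0, 0, 0, 12, 17]]
# kappa_from_table(srh) reproduces the SPSS value (~0.503) and
# kappa_from_table(srh, "quadratic") the VassarStats value (~0.8338).
```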

Value of kappa    Strength of agreement
< 0.20            Poor
0.21 – 0.40       Fair
0.41 – 0.60       Moderate
0.61 – 0.80       Good
0.81 – 1.00       Very good
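
The banding above can be captured in a small helper (Python; band edges taken from the table, treating values up to 0.20 as "Poor"):

```python
def agreement_strength(kappa):
    """Map a kappa value to the strength-of-agreement band in the table above."""
    if kappa <= 0.20:
        return "Poor"
    if kappa <= 0.40:
        return "Fair"
    if kappa <= 0.60:
        return "Moderate"
    if kappa <= 0.80:
        return "Good"
    return "Very good"

# The kappas reported above: 0.051 -> Poor, 0.503 -> Moderate, 0.8338 -> Very good
```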