Patient-reported outcome measures (PROMs) are essentially questionnaire scales designed to measure the treatment outcomes that matter to patients. These are usually characterised as ‘health status’ or ‘health-related quality of life’, but really there’s no reason to limit the scope of PROMs to these domains: you could argue, for example, that the numerous measures of severity of depression (PHQ-9, BDI etc.) are PROMs in their own right.
The NHS Information Centre states that:
“The health status information collected from patients by way of PROMs questionnaires before and after an intervention provides an indication of the outcomes or quality of care delivered to NHS Patients”
A key feature of PROMs analysis is the reliability of the data: the degree of measurement error entailed in collecting data using a PROM. It’s tempting to dismiss concerns about reliability on the grounds that reliability is an index of unbiased measurement error and, well, unbiased error just cancels out with large samples, doesn’t it? Well, not always. And especially not if you use ‘change scores’.
The natural thing to do with a set of before and after scores is to subtract the ‘before’ from the ‘after’ and call that a ‘change score’ or ‘health gain’:
change score = health gain = [post-treatment score] – [pre-treatment score]
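In code the computation is a single subtraction. A minimal sketch (the scores and the 0-100 scale are invented for illustration, not taken from any real PROM):

```python
import numpy as np

# Hypothetical pre/post scores for five patients (0-100 scale assumed)
pre = np.array([45.0, 60.0, 55.0, 70.0, 50.0])   # pre-treatment scores
post = np.array([65.0, 72.0, 58.0, 80.0, 49.0])  # post-treatment scores

change = post - pre  # change score = health gain
print(change)        # [20. 12.  3. 10. -1.]
```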
There are two related problems when you do this with PROMs data:
- The reliability of the change score is usually much lower than the reliability of the pre- and post- scores
- The correlation between pre-score and change score is usually strongly biased in the negative direction
What this means in practice is:
- Change scores tend to have greater measurement error than the scores they were derived from
- They tend also to have moderate (but spurious) negative correlations with pre-scores
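The first point can be made concrete with the standard classical test theory result for the reliability of a difference score: if the pre- and post-scores have reliabilities r_xx and r_yy, equal variances, and correlate r_xy, the difference has reliability (r_xx + r_yy − 2·r_xy) / (2 − 2·r_xy). A quick sketch, with example values that are purely illustrative:

```python
def difference_reliability(r_xx, r_yy, r_xy):
    """Reliability of (post - pre) under classical test theory,
    assuming the two scores have equal variances."""
    return (r_xx + r_yy - 2 * r_xy) / (2 - 2 * r_xy)

# Two respectably reliable scores (0.8) that correlate 0.6
# yield a much less reliable difference:
print(difference_reliability(0.8, 0.8, 0.6))  # ≈ 0.5
```

Note how quickly the reliability collapses: two instruments good enough for routine use produce a change score at the level usually considered unacceptable.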
The second point is interesting because it can be misinterpreted as an interaction effect: the treatment appears most effective for the patients with the lowest pre-treatment scores. But the effect is entirely spurious: a patient whose pre-score was dragged down by measurement error will, on average, show an error-driven ‘improvement’ at follow-up, with no real treatment response involved.
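This is easy to demonstrate by simulation. In the sketch below (all numbers invented), every patient has exactly zero true change, so any correlation between pre-score and change score is pure measurement artefact:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
reliability = 0.7  # proportion of score variance that is true-score variance
total_sd = 10.0

# True health status: identical before and after (no treatment effect at all)
true_score = rng.normal(50, total_sd * np.sqrt(reliability), n)
noise_sd = total_sd * np.sqrt(1 - reliability)

pre = true_score + rng.normal(0, noise_sd, n)
post = true_score + rng.normal(0, noise_sd, n)  # same true score, fresh error
change = post - pre

r = np.corrcoef(pre, change)[0, 1]
print(round(r, 2))  # clearly negative, around -0.39
```

Despite there being no treatment effect whatsoever, the data look as though patients with low pre-scores improved substantially.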
What’s also interesting is that the size of the (spurious) negative correlation between pre-treatment score and change score increases as the reliability of the PROM decreases: the greater the measurement error, the larger the (spurious) correlation, and the more likely it is to be statistically significant.
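Under the no-true-change model with equal error variance before and after, the expected correlation even has a closed form, −√((1 − r) / 2), where r is the PROM’s reliability. A sketch of how the artefact grows as reliability falls:

```python
import math

def spurious_corr(reliability):
    """Expected corr(pre, change) when there is no true change and the
    pre- and post-scores share one true score and equal error variance."""
    return -math.sqrt((1 - reliability) / 2)

for r in (0.9, 0.7, 0.5):
    print(r, round(spurious_corr(r), 2))
# 0.9 -0.22
# 0.7 -0.39
# 0.5 -0.5
```

Even a highly reliable instrument (r = 0.9) produces a spurious correlation of about −0.22, and a mediocre one (r = 0.5) produces −0.5, which is easily large enough to pass a significance test in a modest sample.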
So, to conclude: change scores are generally a bad idea for PROMs data, and the more unreliable the PROMs data, the worse the problem is.