
Change Is Hard, But Measuring It Doesn’t Have To Be

By Maegan Sady, PhD, ABPP-CN

Reliable change indices can provide you with vital information to understand when statistically significant change has occurred—and can help you plan important next steps.

As clinicians, whenever we complete an assessment and find emotional or behavioral problems or low abilities, we hope to recommend or implement an effective intervention so that the scores, and the client's functioning, improve the next time around. Although there is no perfect indicator of when someone's functioning has improved, certain metrics can help us make an informed judgment.

Reliable change indices (RCIs) are one of these metrics, and PAR has been increasingly incorporating them into new tests as they are published, as well as adding them to existing tests when possible. The PDD Behavior Inventory (PDDBI) is the latest product to benefit from this initiative, and we are happy to report that we are adding these metrics, at no additional charge, to PDDBI scoring on PARiConnect. If you use the PDDBI, we hope you will refer to the new manual supplement to guide your understanding and interpretation of the indices. You will not be charged for the individual reports, which can be run on any set of administrations for which a Score Report has been generated.

An RCI provides the clinician with a range of scores that would be expected for a client on retest if nothing changed about the client. The index is calculated from test–retest reliability and the standard deviations of test scores. Retest scores outside the calculated range are considered significantly different from baseline, indicating that change has occurred.
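To make the calculation concrete, here is a minimal sketch in Python using the widely cited Jacobson–Truax formulation of reliable change (the standard error of measurement from the test–retest coefficient, then the standard error of the difference). The specific scores and reliability value below are illustrative assumptions, not figures from the PDDBI, and a given test's published RCIs may be derived somewhat differently:

```python
import math

def rci_band(baseline, sd, r_xx, z=1.96):
    """Range of retest scores expected if no true change has occurred.

    Jacobson-Truax formulation:
      SEM    = sd * sqrt(1 - r_xx)   # standard error of measurement
      SEdiff = sqrt(2) * SEM         # standard error of the difference
    Retest scores inside baseline +/- z * SEdiff are consistent with no
    reliable change; scores outside the band indicate significant change.
    """
    sem = sd * math.sqrt(1 - r_xx)
    se_diff = math.sqrt(2) * sem
    margin = z * se_diff
    return baseline - margin, baseline + margin

# Illustrative example: baseline T score of 65 (SD = 10),
# test-retest reliability of .90, 95% confidence (z = 1.96)
low, high = rci_band(65, 10, 0.90)
print(f"No-change range: {low:.1f} to {high:.1f}")
```

With these example values, a retest T score of roughly 56 to 74 would be considered consistent with the baseline; a score outside that band would count as reliable change.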

The clinician then uses clinical information to interpret the reason for the change—improvement could be attributed to intervention or medication, for example, whereas decline might be explained by disease progression or removal of services—as well as whether the change is clinically meaningful and what to recommend next.

One key to RCIs is using the right test–retest coefficient. Every professionally published test includes a test–retest sample to provide evidence for the stability or consistency of scores over a short period (usually days to a couple of weeks). Although RCIs can be constructed from these samples, such short intervals can inflate practice effects and underestimate the variability in scores expected over a more clinically realistic retest interval of several months or more. Data across these longer, more clinically salient intervals are harder and more expensive to collect because participants must be retained throughout, and additional sources of measurement error and variability can make the metrics unstable.
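The practical consequence of the coefficient choice is easy to demonstrate. In the sketch below, the two reliability values are hypothetical (chosen only to contrast a short-interval retest with a longer, 6-month interval; they are not PDDBI figures): a lower long-interval coefficient widens the no-change band, so a larger score difference is required before change counts as reliable.

```python
import math

def no_change_margin(sd, r_xx, z=1.96):
    # Half-width of the no-change band: z * sqrt(2) * SEM,
    # where SEM = sd * sqrt(1 - r_xx)
    return z * math.sqrt(2) * sd * math.sqrt(1 - r_xx)

# Hypothetical coefficients for illustration only
short_interval_r = 0.95   # e.g., a 2-week retest
long_interval_r = 0.80    # e.g., a 6-month retest

print(f"{no_change_margin(10, short_interval_r):.1f}")  # ~6.2 T-score points
print(f"{no_change_margin(10, long_interval_r):.1f}")   # ~12.4 T-score points
```

Under these assumed values, a 7-point T-score drop would look like reliable change against the short-interval coefficient but fall within expected variability against the long-interval one, which is why RCIs built from clinically realistic intervals matter.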

Our data collection team works hard to procure the high-quality data we need to build metrics like RCIs. We often partner with clinicians to serve as expert consultants and beta reviewers and to collect or supply both normative and clinical data. For the PDDBI RCIs, we were able to partner with two large clinical research centers to build the new reports. One of the centers shared data from their well-designed longitudinal study of children with autism, allowing us to calculate RCIs using their 6-month follow-up data. The new PDDBI Progress Monitoring Report on PARiConnect compares the calculated RCIs to the client's own scores across two to four time periods, providing detailed, colorful charts and tables that highlight where change has and has not occurred. The results can be used to better understand whether a child is improving, maintaining (neither gaining nor losing ground), or declining over time.

Having RCI metrics further extends the clinical utility of the PDDBI, as significant changes can be identified quickly, saving the clinician time while also providing valuable information to guide future treatment. When a child improves in one area but not another, the therapist can devise new treatment goals to better target the area of functioning that needs improvement. Maintenance of previously strong skills (identified by no significant change between administrations) might reassure the clinician that additional intensive intervention is not needed in a certain area. A decline or lack of progress in another area might indicate where skills are not emerging as expected, and the clinician can recommend additional services.

Of note, RCIs were also computed between raters for the new PDDBI Multirater Reports so clinicians can statistically compare parent and teacher ratings to identify significant differences in behaviors and abilities across settings. See parinc.com/PDDBI for more detailed information on the new manual supplement and reports.

RCIs are increasingly becoming part of practitioners' lexicon, as progress monitoring is more frequently requested by parents and referral sources and, in some cases, required by insurers to justify ongoing treatment.

In offering RCIs, the PDDBI joins other top-notch tests like the Child and Adolescent Memory Profile (ChAMP), the Feifer Assessment of Reading (FAR), the Feifer Assessment of Mathematics (FAM), the Feifer Assessment of Writing (FAW), Identi-Fi, the Neuropsychological Assessment Battery (NAB), the Reynolds Intellectual Assessment Scales, Second Edition (RIAS-2), Trails-X, and the Trauma Symptom Inventory-2 (TSI-2).

We hope to continue to add clinically relevant RCIs to new tests as they are developed or revised. Please let us know if you find them useful, if there are other tests where you would like to see RCIs, or if you have a longitudinal data set and want to discuss a partnership. We love working with and for clinicians like you!
