Newsletter Articles

Relative Stay Index (RSI) version 2.0

Posted: 23/11/2018


Rohan Cattell,
Chief Data Scientist

The Health Roundtable


In this first of three articles, Chief Data Scientist Dr Rohan Cattell walks through his reflections on:

  • The RSI model and its history; the RSI has been a fantastic servant to the HRT for nearly twenty years
  • How the power of predictive analytic models gives us the opportunity for an RSI v2.0, and our context for reviewing the algorithm

 
In Part 2, Dr Cattell will dig into the technical detail some more, outlining how the HRT team will be developing a revised RSI model.

Part 1

 
When I started working with the Health Roundtable in 2010, one of the first things I had to get my head around was the Relative Stay Index (aka RSI). RSI is a risk-adjusted measure of ‘length of stay’ that holds a semi-mystical place in the HRT family. It was, however, off limits to me. The arcane rules that governed its calculation were not to be messed with, having been handed down through the mists of time.

I bided my time, and now finally I have the task of bringing our RSI methodology forward a little. Having been handed the keys to the family car, I sit, contemplating where we will go. Or that’s where I was a month ago. Since then we’ve been at work, iterating on a new model for the RSI. We’re not quite finished, but we want to bring you on our journey, so get in and buckle up, we’re going to take this thing for a spin.

This will be a multi-part blog post. Part 1 will cover some background; part 2 will be more technical, for those who want to know the details of the modelling; and part 3 will round up the results and look at how the changes will affect our members.

For those who are not familiar with RSI, here's a quick introduction. For each admitted episode of care at our member hospitals, we calculate an expected length of stay (ELOS). ELOS is based on the episode's DRG, care type, urgency status, transfer status and separation mode, along with the patient's age at admission. RSI is then calculated at either the whole-of-hospital level or within a subgroup such as a DRG:

\[ \mathrm{RSI} = \frac{\sum_i \mathrm{LOS}_i}{\sum_i \mathrm{ELOS}_i} \times 100\% \]

Equivalently, we can state this as:

\[ \mathrm{RSI} = \frac{\text{average actual LOS}}{\text{average expected LOS}} \times 100\% \]

where the sums and averages run over all episodes in the hospital or subgroup in question.
It is usually expressed as a percentage: an RSI over 100% indicates a longer aggregate length of stay than expected, and an RSI below 100% indicates a shorter one, always with reference to the particular variables we have adjusted for and the particular dataset used to train the model. The resulting RSI is then used to track performance and identify areas for improvement.
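As a concrete example, a cohort that accumulated 1,150 actual bed days against 1,000 expected bed days would have an RSI of 1,150 / 1,000 = 115%; its patients stayed 15% longer in aggregate than the model predicted.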

Nothing about the RSI formula itself is changing, only the way we calculate ELOS. That is, the predictive model is getting an update. This includes several things:

  • Which episodes are included in or excluded from the model
  • What variables we include in the model
  • What kind of statistical model we are using

 
 
So how does the current model work? It is very simple. All variables are treated as categorical (age is broken into groups), and what its originator called a stratification model is applied across some reference period of data (historically this has been a three-year period of all HRT admitted patient data). The stratification is one of the simplest possible models: it takes the average in each group from the reference period as the predicted LOS for any future observation falling in that group. By group here we mean the particular combination of DRG, age group, urgency status and so on.
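In R, the whole model fits in a few lines. Here is a minimal sketch, assuming hypothetical column names (drg, age_group, urgency, transfer, sep_mode, care_type, los) rather than the actual HRT schema:

```r
# Minimal sketch of the current stratification model.
# Column and data-frame names are hypothetical, not the actual HRT schema.
library(dplyr)

strata <- c("drg", "age_group", "urgency", "transfer", "sep_mode", "care_type")

# The predicted LOS for a group is simply its mean LOS over the reference period
elos_lookup <- reference %>%
  group_by(across(all_of(strata))) %>%
  summarise(elos = mean(los), .groups = "drop")

# Score new episodes by looking up their group's average
scored <- new_episodes %>%
  left_join(elos_lookup, by = strata)

# RSI at the whole-of-hospital level; group_by() a subgroup such as DRG
# first to get subgroup-level figures
rsi <- 100 * sum(scored$los) / sum(scored$elos)
```

The elos_lookup table is also what makes the model so easy to share: anyone holding the table can reproduce ELOS without refitting anything.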

I can feel the statisticians squirming, but it's not as bad as it might seem at first, and there are historical reasons for this approach. Here are some of its pros and cons.

Pros:

  • Very fast to train (in 2018). I wrote some R code to recreate the model and it runs in a couple of seconds across a reference period with 15 million rows of data.
  • The massive reference dataset makes it much more reasonable than it would be as a technique on a small dataset. Overfitting is still an issue, but not to the same extent.
  • Very easy to explain
  • Very easy to share results in a lookup table that can be used by others

Cons:

  • Overfitting, especially in the smaller DRGs. I'll discuss this more when we get into the technical details.
  • Breaking continuous variables into groups reduces predictive accuracy, and it is unnecessary once you move to a more flexible technique such as regression (see the sketch after this list).
  • The categorical-only nature of the algorithm influences decisions about which variables to include, e.g. each DRG rather than ADRG + ECCS. I'll say more about this in part 2.
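To make the regression point concrete, here is an illustrative sketch of a model that keeps age continuous (via a natural spline) rather than banding it. The variable names are hypothetical, and this is not the model we have settled on; Part 2 will cover that.

```r
# Illustrative only: a regression that uses age as a continuous predictor
# instead of breaking it into groups. Variable and data names are hypothetical.
library(splines)

fit <- lm(los ~ adrg + ns(age, df = 4) + urgency, data = reference)

# Expected LOS for new episodes comes straight from the fitted model
new_episodes$elos <- predict(fit, newdata = new_episodes)
```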

 
 
To be clear, the RSI has done an outstanding job for nearly twenty years and has been used operationally across Australia and NZ with great success. None of these observations diminishes that, but they do help us to prepare for the next twenty.

A bit of historical context will help us to understand why RSI was created the way it was.

Dr David Dean, the founding general manager of The Health Roundtable until his recent retirement, has been kind enough to share some details of the early development. I think it is fair to say that David is the father of RSI. The precursor to RSI appeared in a March 2000 report, and it was just a group average LOS within clinical service groups (CSGs).

 
The following year, changes in ALOS were tracked across the whole cohort and shown in a report highlighting both improving and lagging CSGs.

Sometime after this, the RSI was first calculated as an evolution of the ALOS measure. The original has been tweaked a bit over the years, but the underlying method has stayed the same. I wasn't doing statistical modelling in 2000, but I can appreciate that computing power was somewhat different then, and the simple approach taken was deemed sufficient given the purpose, which in David Dean's words was “to get managers and clinicians to discuss operational improvements in length of stay for large cohorts of patients”.

So why change it now? If it ain't broke, don't fix it. Well, the main reason is that the opportunity has presented itself. In modernising our systems, we have the opportunity to update the processes that calculate the RSI at the same time. Modern computing and software give us a relatively straightforward path, and there are obvious ways in which the modelling can be improved. I'm a firm believer that we have a duty to do the best we can with the predictive models we build for indicators like the RSI. If we have a clear path to an improvement, then we should take it. The oft-quoted remark of George Box comes to mind: “All models are wrong, but some are useful.”

The ultimate goal of any improvement will be to reduce the noise in the RSI, which in turn would, in theory, allow better targeting of interventions off the back of it. I'm perfectly prepared, though, for the possibility that once we implement the changes, we discover the choice of model is not a significant factor in the variation we see. In many of the large-volume DRGs, that is a likely outcome, and I'll be happy with that. That in itself would be an important outcome to observe.

To be continued. In part 2 we’ll look at how we are doing the new modelling.
