Will 2022 Be The Year For Machine Learning In Healthcare?

As we end 2021, we’re seeing once again many predictions that 2022 will be the Year of Machine Learning in Healthcare.  Having been in the field for several years now, we’re delighted by the optimism!  While we can’t say for sure whether 2022 will or won’t be “The Year of Machine Learning”, we can say that there are still some significant challenges to using Machine Learning in Healthcare.  Below we discuss one particularly thorny issue: the tradeoff between model predictive power and explainability.

Computer scientists build Machine Learning predictive models based on historical data and endpoints.  Historical data is all of the data healthcare workers collect and store in Electronic Health Record (EHR) systems: vital signs, diagnoses, medication orders, assessments, lab results, and progress notes.  Endpoints are events that have happened in the past and that clinicians would like to predict in the future: rehospitalizations, correct diagnoses, how many days a wound will take to heal, and whether or not a tumor is malignant.
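As a rough sketch of what this looks like in code (the column names, numbers, and model choice below are purely illustrative, not any real EHR schema or our production pipeline), the historical data becomes a table of features and the endpoints become a column of labels that a model learns to predict:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical feature table built from EHR data: one row per patient stay.
historical_data = pd.DataFrame({
    "max_temp_f":         [98.6, 101.2, 99.1, 102.4],
    "heart_rate":          [72, 95, 80, 110],
    "num_medications":     [4, 9, 6, 12],
    "abnormal_lab_count":  [0, 3, 1, 5],
})

# Endpoint: did the event we want to predict actually happen for that stay?
endpoints = pd.Series([0, 1, 0, 1], name="rehospitalized")

# Fit a predictive model on the historical data and endpoints.
model = GradientBoostingClassifier().fit(historical_data, endpoints)

# For a new patient, the model outputs the probability of the endpoint.
new_patient = historical_data.iloc[[0]]
print(model.predict_proba(new_patient)[0, 1])
```

A real system would have thousands of rows and hundreds of columns, but the shape of the problem is the same: features from the EHR on one side, endpoints to predict on the other.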

The good news from the perspective of data scientists like us is that the Healthcare Industry has done an amazing job entering an enormous amount of historical data into EHR systems over the last few decades.  Data scientists can now build highly accurate models to make predictions that meaningfully improve outcomes for patients.  Using these predictions, clinicians can take actions that change how likely it is that the outcomes we predict will actually come to pass.

One example of this kind of system is the one that our company, SAIVA, has developed for clinicians at nursing homes.  SAIVA uses Machine Learning to predict which patients within a nursing home are at greatest risk of rehospitalization over the next 3 days.  Nurses use our predictions to take actions to reduce rehospitalizations.  Our system has proven to be quite accurate: about 80% of Medicare patients who are rehospitalized appear on our list 1-3 days prior to rehospitalization.

While there’s no doubt that knowing who is at risk is useful in reducing rehospitalization, it’s really only a first step.  Clinicians want to know not only that someone is at risk but why they’re at risk: which body system, chronic disease, medication, lab result, or progress note caused our predictive model to calculate that a significant change in risk has occurred.  If we could specifically identify which body system, disease, lab report, etc., caused the change in risk, we could improve care enormously.

And now we come to the Hard Truth: humanity has not developed the technology to do this.  At least not yet.  More precisely, given the state of the art in Machine Learning today, we can’t fully explain how complex models work.  At this point, we can only say that they do work; that is, they predict with a quantifiable degree of confidence that a patient is at risk or, more generally, that a particular outcome will occur with some probability.

To most non-computer scientists, this seems very counterintuitive.  How can it be that we have a model that predicts an event with great accuracy but we can’t tell exactly why the model has made the prediction it has?

The answer has to do with a fundamental tradeoff: we can build simple models that are easily explainable but don’t do a very good job predicting complex phenomena, OR we can build complex models that do a good job predicting complex phenomena but are hard to explain.  We can’t have it both ways; that is, we can’t build a model that both makes accurate predictions of complex phenomena AND is easy to explain.

Here’s a concrete example.  Imagine we want to create a model to predict blood-borne infections.  We could start with the simple rule that a fever is highly correlated with the onset of such an infection.  This model is trivially simple to explain: it is predicting blood-borne infections based only on whether the patient has a fever.
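In code, that one-rule model is little more than a single comparison (the 100.4°F fever cutoff here is just a common clinical convention, used purely for illustration):

```python
FEVER_THRESHOLD_F = 100.4  # illustrative fever cutoff, in degrees Fahrenheit

def predict_bloodborne_infection(max_temp_f: float) -> bool:
    """Trivially explainable 'model': flag infection risk whenever there is a fever."""
    return max_temp_f >= FEVER_THRESHOLD_F
```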

Of course, the problem with this model is that it gives way too many false positives; many patients will have fevers but not blood-borne infections.  To get a model that accurately predicts blood-borne infections, we have to vastly increase the number of variables the model takes into account.  And as soon as we do that, the variables begin interacting with each other, and we can no longer easily explain why the model is making the predictions it’s making.
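To make that jump in complexity concrete, here is a sketch (with synthetic stand-in data rather than real EHR records) of what the more accurate but less explainable version looks like: instead of one readable rule, the decision logic is spread across hundreds of decision trees that all interact:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for real EHR data: 1,000 patients and 40 interacting
# variables (vitals, labs, medications, assessments, ...), plus an endpoint.
X, y = make_classification(n_samples=1000, n_features=40, n_informative=15,
                           random_state=0)

# A more accurate but far less explainable model: 500 decision trees voting.
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Any single prediction now reflects interactions among dozens of variables
# split across hundreds of trees; there is no one-line rule to read off.
print(model.predict_proba(X[:1]))
```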

Fortunately, the problem of explainable Artificial Intelligence is an extremely active topic of research within data science.  Many researchers are confident that the technology for explaining predictions from complex models will improve dramatically over the next few years.
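One widely used line of work here is post-hoc feature attribution.  As a hedged illustration (using the open-source shap library on the hypothetical forest model from the previous sketch), tools like SHAP can estimate how much each variable contributed to an individual prediction, even though they don’t turn the model itself into a simple, readable rule:

```python
import shap  # open-source explainability library: pip install shap

# 'model' and 'X' are the hypothetical random forest and synthetic data
# from the previous sketch.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

# shap_values holds an additive per-feature contribution for this patient's
# prediction: an approximation of "why", not a full explanation of the model.
```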

In the meantime, we need to do the best we can with the technologies we have today.  And fortunately, we think we can do quite a lot!  We may not be able to precisely explain why a particular patient is at risk, but being able to identify that a patient is at risk is itself a major advance.  Over the next few years, particularly as model explainability technology improves, we can look forward to fantastic improvements in healthcare based on Machine Learning technology.
