Tecton

Enabling Rapid Model Deployment in the Healthcare Setting

apply(conf) - May '22 - 10 minutes

Discover how Vital powers its predictive, customer-facing, emergency department wait-time product with request-time input signals and how it solves its “cold-start” problem by building machine-learning feedback loops using Tecton.

Hey folks. My name’s Felix Brann, and I’m Head of Data Science at Vital. We’re deploying online machine learning in the healthcare space, and I thought it’d be interesting to actually talk about our use case a little bit. I hope you find that interesting rather than just shameless company self-promotion; I’ll let you decide in the chat. Then we’re going to talk about an ML problem that we face and how we go about solving it.

We’re a company deploying modern UI and ML into hospitals. Hospitals tend to have legacy IT systems that just don’t have the kind of ML capabilities, and don’t have the kind of user experience, that you’d expect out of modern consumer software. It means that doctors and nurses are spending upwards of 60% of their time just typing notes into note-taking systems. I’ve seen somebody click through five different plus boxes trying to get to the right option, in a busy ER department. So we’re trying to solve some of the problems that they face.

I’ll quickly rattle through this. This is just our product suite: we’ve got a product for patients in the emergency department, a product for patients in the hospital, and then our clinician app, where we’re doing some clinical decision support work.

I want to talk about wait times. 50% of patients who come into the emergency department have never been to that ED before. They’re often in pain, they’re frightened, they don’t know what to expect, and the only way for them to get any of that information is to grab a nurse or a doctor on the floor and try to get an understanding of what’s going to happen to them from that provider. And right now, I don’t know if you know this, but in the healthcare setting there’s an enormous staffing problem as a result of all of the world events that I’m sure you’re well aware of. That means that patients coming in and asking how long it’s going to be until they see X means that providers aren’t able to provide critical care to people who need it.

This is one slice of our product suite. We’re trying to take the wait-time problem and automate it: take it out of providers’ hands and give patients an understanding of what their stay is going to look like. The key component that I want to talk to you folks about is our ED Wait Time product, where we’re trying to produce wait-time predictions for the different stages of an ED visit.

Hopefully you’ve never been to the ED. If you have, I’m sure you’re aware that you come in, you register, and then you wait to be triaged. Once a nurse has had the time to see you and take some basic information about you, you stick around in the waiting room until someone has a bed for you. And this is the part that people find the most stressful: not knowing how long they’re going to be waiting in the emergency room, particularly in the waiting room, until they can see a doctor. They’re probably pretty concerned about whatever reason brought them to the ED in the first place. We’re trying to provide personalized wait times, obviously in a streaming fashion, updating live based on the conditions we’re seeing in the ED.

Okay, let’s actually talk machine learning. One of our key problems is the cold-start problem. We’re going live at new facilities every month. Integrating with these facilities is pretty painful. One of the innovations we’re trying to bring is that rather than integrating a new software system over multiple years, which is often the reality with an EHR, we’re trying to get in there in a matter of weeks or even days. But that means that when we go live, we actually have very little data, and our hospital partners understandably expect us to be providing accurate machine learning from the day we go live. We don’t want to take just the little slice of data that we have from that hospital and try to do everything with it. We want to learn dynamics from all of the hospitals where we’re live and bring those dynamics into whichever new facility this is.

But there are a couple of key problems there. The first is that wait times can range from 30 minutes to over 10 hours; I’ve seen people waiting 10 hours in the emergency room just to see a doctor. And also, in the healthcare setting, data regime changes are really common, and they’re a real problem.

Here’s an example of the sort of regime change that I’m sure you’ll be aware of. On the x-axis is the year 2020, and on the y-axis is the number of people admitted to four different emergency rooms on the east coast. With the pandemic coming in, you might be surprised to know that there was a precipitous drop in ED admissions.

And actually, for a lot of hospitals, suddenly the concern was less around “are we going to be able to handle the surge of COVID patients?” and more around “are we going to be able to financially survive given that everybody else is too afraid to come into the emergency room?”. During this time we had a situation where you wanted as little contact between patients and doctors as possible. Every time a doctor had to speak to a patient, they had to put on full PPE. Patients had to be kept separated. We had tents outside holding patients, sometimes in places that were pretty cold.

And what we were trying to do was deploy this solution and enable patients to have an understanding of what was going to happen, in a particularly scary time. These are the resulting wait times from that period. You can see they follow the regime-change pattern, but there are some key differences, especially in this red facility here. That means we need to be pretty careful about aggregating together information from facilities and using it at a facility where we don’t yet have a lot of information about the dynamics.

Our solution, as you might have anticipated, is to normalize. What we do is gather features that you might expect to be useful when predicting the number of minutes someone’s going to wait before they get a bed, for example how many patients have been admitted in the last hour.

But we also gather summary statistics on those features. For that feature, we’re taking the mean and standard deviation and feeding them into the model. And then on the output side, we’re not predicting in minutes; we’re actually predicting in percentile space. Just to be clear, the feature summaries we’re taking are aggregated at a per-facility level over a rolling window. They allow the model to keep an understanding of, say, whether 20 patients arriving at this facility today is two standard deviations away from what’s normal here, or perfectly standard. And then it can make the prediction I described, in percentile space.
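To make that concrete, here is a minimal sketch of per-facility rolling normalization, assuming the feature values live in a pandas DataFrame. The column names, the two-week window, and the z-score at the end are illustrative assumptions rather than Vital’s actual pipeline; the talk only says the mean and standard deviation are fed into the model alongside the feature.

```python
import pandas as pd

def add_facility_normalization(df: pd.DataFrame) -> pd.DataFrame:
    """Attach rolling per-facility summary statistics to a raw feature.

    Assumed (illustrative) columns: facility_id, timestamp, admitted_last_hour.
    """
    df = df.sort_values("timestamp").copy()
    normalized = []
    for facility_id, group in df.groupby("facility_id"):
        # Rolling two-week window, computed separately for each facility.
        rolled = group.rolling("14D", on="timestamp")["admitted_last_hour"]
        normalized.append(
            group.assign(
                admitted_mean_14d=rolled.mean(),
                admitted_std_14d=rolled.std(),
            )
        )
    df = pd.concat(normalized)

    # Optional: express the raw value in "standard deviations from this
    # facility's normal", so 20 arrivals can be unremarkable at one facility
    # and a two-sigma event at another.
    df["admitted_zscore"] = (
        df["admitted_last_hour"] - df["admitted_mean_14d"]
    ) / df["admitted_std_14d"]
    return df
```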

So when we are making these model predictions, instead of predicting in minutes (the number of minutes you’re going to be waiting for a bed or to see a doctor), we predict a percentile of the historical wait cumulative distribution function. What does that actually mean? Say we’ve got two facilities here, the orange one and the blue one. What we do is take all of the waits at that step from the last two weeks for those facilities. So let’s say we’re sitting in the waiting room after triage, waiting for a bed: we take all of the resulting waits and their real wait times, and we order them. If our model outputs 0.8, we take the 80th out of 100 waits.

This allows our model to predict into this latent space, and then we can project back using the recent rolling history of the facility. If our model outputs 0.8 for the orange facility, you can see with the red arrows that this results in a real prediction for the user of 60 minutes. But if the model predicts 0.8 for the blue facility, that results in 280 minutes.
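A small sketch of that projection, assuming the model output is a number in [0, 1] and we hold the facility’s waits for this step from the last two weeks. The function name and the toy data below are illustrative, not taken from the talk’s chart.

```python
import numpy as np

def percentile_to_minutes(model_output: float, recent_waits_minutes: np.ndarray) -> float:
    """Project a prediction in percentile space back into minutes using the
    facility's own empirical wait distribution (e.g. the last two weeks of
    waits for this step of the visit)."""
    # np.quantile effectively orders the historical waits and picks the value
    # at the requested percentile, so 0.8 means roughly "the 80th of 100 waits".
    return float(np.quantile(recent_waits_minutes, model_output))

# Toy usage: the same 0.8 model output maps to very different real wait times
# at a quick facility versus a slower one (numbers below are made up).
quick = np.array([20, 25, 30, 40, 45, 55, 60, 70, 80, 95], dtype=float)
slow = np.array([90, 120, 150, 180, 210, 240, 280, 320, 400, 520], dtype=float)
print(percentile_to_minutes(0.8, quick))  # much smaller than...
print(percentile_to_minutes(0.8, slow))   # ...the slower facility's value
```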

So, it’s a Tecton conference, and I thought we’d talk a little bit about how we use Tecton to solve this problem. We want to calculate this admitted-count concept, and we also want to calculate the mean and standard deviation of that feature. What we’ve done is build a feature service for all of the features that we’d like to normalize per facility, keyed by facility ID. It’s a little bit more involved than this picture, but simplified: we have a Lambda which periodically extracts those features and puts them into Redshift for us. We connect Redshift as a data source in Tecton, and our little [inaudible 00:08:34] produces the normalization statistics on top of those features, because it’s treating the feature values as raw data-source values.
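For flavor, here’s roughly what that shape can look like in a Tecton feature repository: a Redshift-backed batch source of pre-computed feature values, an entity keyed by facility ID, and a batch feature view computing the normalization statistics on top. Every name, the table, the schedule, the SQL, and the exact decorator and config arguments are assumptions based on Tecton’s public documentation from around this time, not Vital’s actual repo, and may differ across Tecton versions.

```python
from datetime import datetime, timedelta
from tecton import (
    BatchSource, Entity, FeatureService, RedshiftConfig, batch_feature_view
)

# Everything is keyed by facility.
facility = Entity(name="facility", join_keys=["facility_id"])

# A Lambda periodically writes computed feature values (e.g. patients admitted
# in the last hour) into Redshift; Tecton treats that table as raw source data.
feature_values = BatchSource(
    name="ed_feature_values",
    batch_config=RedshiftConfig(
        endpoint="example-cluster.redshift.amazonaws.com:5439/warehouse",  # placeholder
        table="ed_feature_values",
        timestamp_field="event_timestamp",
    ),
)

# Rolling normalization statistics computed over the stored feature values.
@batch_feature_view(
    sources=[feature_values],
    entities=[facility],
    mode="spark_sql",
    online=True,
    offline=True,
    batch_schedule=timedelta(days=1),
    ttl=timedelta(days=30),
    feature_start_time=datetime(2021, 1, 1),
)
def facility_normalization_stats(feature_values):
    # 1209600 seconds = 14 days: a two-week rolling window per facility.
    return f"""
        SELECT
            facility_id,
            event_timestamp,
            AVG(admitted_last_hour) OVER (
                PARTITION BY facility_id
                ORDER BY unix_timestamp(event_timestamp)
                RANGE BETWEEN 1209600 PRECEDING AND CURRENT ROW
            ) AS admitted_mean_14d,
            STDDEV(admitted_last_hour) OVER (
                PARTITION BY facility_id
                ORDER BY unix_timestamp(event_timestamp)
                RANGE BETWEEN 1209600 PRECEDING AND CURRENT ROW
            ) AS admitted_std_14d
        FROM {feature_values}
    """

facility_normalization_service = FeatureService(
    name="facility_normalization_service",
    features=[facility_normalization_stats],
)
```

The appeal of this kind of setup is consistency between training and serving: Tecton can materialize the same per-facility statistics offline for training and online for the request-time wait prediction.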

Nine minutes, and that’s everything I had. Shameless promotion: we are hiring. If you’d like to come and work on a real streaming ML use case with tangible impact on people’s lives, come and find us at vitaler.com. Thanks, folks.

Felix Brann

Head of Data Science

Vital

Felix is Head of Data Science at Vital, a startup deploying machine learning in the Emergency Room. Previously a VP within Quantitative Research at JP Morgan, Felix joined Vital's mission to inform patients and empower clinicians 2 years ago. He brings 12 years of industry experience to the task of researching, developing and deploying mission-critical machine learning models. When not worrying about data drift, Felix loves to climb, cook, and play overcomplicated boardgames.
