This is a story of how a typical exercise test can accurately predict the future for someone with heart failure. And how, even in the realms of deep learning and artificial intelligence, a little luck can go a long way.
But, before we get to a remarkable new study, let’s first shine a light on the profoundly untapped pools of data that are sitting in most health clinics around the world: your average treadmill or stationary bike set up to monitor someone’s exercise capacity, for example.
The key to tapping such reservoirs of information is not to limit the data just so the human brain can understand it. Because what is lost in that shuffle can save lives.
“We must resist the urge to oversimplify data and discard information so that it’s easier for us to understand,” says Cedric Manlhiot, who leads the Ted Rogers Centre efforts in using and analyzing data at Peter Munk Cardiac Centre. “Most machines are programmed to reduce complex data to easy numbers for human consumption and when this is used by medical staff it can be dangerous. Because in doing so you throw away so much valuable knowledge.”
What we are missing with exercise tests
In a cardiopulmonary exercise test, sensors placed on the body measure variables such as oxygen consumption (VO2), generating new readings every second. Manlhiot says these data streams have traditionally been reduced to a handful of summary metrics, which are what physicians actually use.
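To make that reduction concrete, here is a rough sketch, using entirely synthetic numbers rather than anything from the study, of how hundreds of breath-by-breath VO2 samples are typically collapsed into a single summary metric such as peak VO2 (often computed as a rolling-average maximum):

```python
import numpy as np

# Hypothetical breath-by-breath VO2 samples (mL/kg/min), one per second,
# from a synthetic 10-minute ramp protocol. Not real patient data.
rng = np.random.default_rng(0)
t = np.arange(600)
vo2 = 10 + 0.03 * t + rng.normal(0, 0.8, t.size)

# The usual summary metric: peak VO2, taken as the highest 30-second
# rolling average of the raw trace.
window = 30
rolling = np.convolve(vo2, np.ones(window) / window, mode="valid")
peak_vo2 = rolling.max()

print(f"{t.size} raw samples reduced to one number: peak VO2 = {peak_vo2:.1f}")
```

Everything except that final number is typically discarded, which is exactly the information loss the article describes.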
“But these summary indicators peak in the mid-70% accuracy,” he says. “Meanwhile, raw data is left unused because it can’t be easily visualized. From the thousands of data points that a machine generates, a busy doctor will only look at about six. Even worse, most machines are programmed to delete this unused data every time a test is completed.”
But, in a stroke of luck, his team discovered an exercise machine used in clinic whose data was not being purged after each test – and thus contained years’ worth of information. From this, they trained machine learning software (a “neural network”) to use not a fraction of the data, but all of it. A limitless sea.
“To move the needle, we must relearn what data derived from sensors actually is,” Manlhiot says. “And how to analyze it.”
Their newly published study reveals the stunning possibilities.
In a world first, the researchers were able to home in on breath-by-breath data, discovering that it delivered a far clearer picture of someone’s health status than the simplified metrics used today. The result is a clinically relevant machine learning model that can accurately predict a heart failure patient’s status one year from now.
“Our machine learning model uses the data generated in a test and calculates the probability that a given patient will be in end-stage heart failure in one year,” Manlhiot says.
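The study’s actual architecture and inputs are not detailed here, so the following is only a loose illustration of the idea, using a toy one-hidden-layer network in plain NumPy and fully synthetic data: mapping an entire breath-by-breath trace, rather than a single summary number, to a one-year risk probability.

```python
import numpy as np

rng = np.random.default_rng(42)

# Entirely synthetic stand-in for the study's data: 200 "patients", each a
# 120-sample breath-by-breath VO2 trace; the label marks a poor outcome.
n, T = 200, 120
slopes = rng.uniform(-0.05, 0.05, n)              # per-patient VO2 trend
X = 15 + slopes[:, None] * np.arange(T) + rng.normal(0, 0.5, (n, T))
y = (slopes < -0.01).astype(float)                # declining VO2 -> worse outcome
Xs = (X - X.mean()) / X.std()                     # simple normalization

# Toy one-hidden-layer neural network over the *entire* sequence
H = 8
W1 = rng.normal(0, 0.1, (T, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, H);      b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = np.tanh(x @ W1 + b1)                      # hidden layer
    return h, sigmoid(h @ W2 + b2)                # predicted one-year risk

def xent(p, y):
    eps = 1e-9                                    # numerical safety margin
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

_, p = forward(Xs)
loss_before = xent(p, y)

lr = 0.01
for _ in range(500):                              # full-batch gradient descent
    h, p = forward(Xs)
    grad_out = (p - y) / n                        # cross-entropy gradient
    grad_h = np.outer(grad_out, W2) * (1 - h**2)  # backprop through tanh
    W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum()
    W1 -= lr * Xs.T @ grad_h;  b1 -= lr * grad_h.sum(axis=0)

_, p = forward(Xs)
loss_after = xent(p, y)
print(f"training loss: {loss_before:.3f} -> {loss_after:.3f}")
```

The point of the sketch is the interface, not the model: the network sees every sample of the trace and outputs a probability between 0 and 1, the same kind of one-year risk estimate Manlhiot describes.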
This deep learning framework achieved 85% accuracy at predicting outcomes, higher than any previously published validated model – and that is based solely on data from the exercise machine. Once the researchers add basic clinical information about each patient into the framework, Manlhiot expects the accuracy to surpass 90%. At that point, it will be a very valuable clinical tool.
For heart failure, knowing a patient’s long-term prognosis is especially important. Patients are often stable for long periods, and by the time warning signs emerge – a dropping ejection fraction or VO2, for instance – it is often too late for most treatments to be effective. But by looking at the micro-level data from those exercise machine sensors, doctors can see further ahead and judge whether certain treatments, like mechanical or biological therapies, should be started sooner.
In opening the door to previously inaccessible data, machine learning can support physicians and save the lives of people with heart failure.