Wearable health trackers have become ubiquitous on wrists and gift guides. But how can health professionals transform the firehose of data produced by mobile sensors embedded in patients’ devices into actionable insights that inform treatment plans? University of Michigan Statistics doctoral student and Olympic runner Mason Ferlic is developing new adaptive intervention methods and experimental designs that will one day enable clinicians to assign meaning to metrics as they personalize clinic-delivered treatments. He envisions a new generation of technology-assisted health interventions capable of interpreting patient data to recommend timely, evidence-based treatment decisions that drive better health outcomes for everyone.
Ferlic earned his Master’s in Aerospace Engineering from the University of Michigan in 2016 before returning in 2020, this time to the Department of Statistics, to begin his PhD. He was drawn to statistics by research in the area of intervention optimization, where investigators were applying principles from engineering to systematically improve the quality and efficiency of treatments available for a wide variety of disorders. “I saw this huge connection between the engineering world and breaking down subsystems and components of an intervention to optimize for a particular health outcome,” he said. “I was really focused on learning more about how to leverage data to provide not just associations and correlations but actual recommendations.”
Ferlic’s interest in the science of adaptive interventions was deepened by his athletic endeavors, where he learned to listen to signals within his own body and respond to them with small changes to his diet and training that had a big impact on race-day performance. When intuition wasn’t enough, he had an expert coach at his disposal, who could translate these signals into recommendations. He wondered whether mobile health technology could one day allow others, not just elite athletes, to access this experience.
Around that time, he saw a number of wearable devices entering the market, each capable of measuring and reporting some combination of metrics like heart rate, heart rate variability, physical activity, and sleep quality. When he was offered the opportunity to test a wrist-worn device for athletes, he was excited, but quickly realized its limitations. “It mainly told you numbers, and those numbers were not in the context of what you were doing or what you should do in response,” Ferlic said. While his coach was able to interpret these numbers and tweak his training regimen accordingly, he knew the device wasn’t providing the kind of information that would be useful to the average person. Over the years, he tried other devices, but they all had similar shortcomings. “I became pessimistic about that whole side of technology,” he said. “At the same time, I knew I wanted to be a part of making it work for people.”
When Ferlic connected with d3c co-director Danny Almirall during the first year of his PhD, the affinity between his interests and the d3center’s work became obvious. “I was like, this is it. […] At the time, I didn’t have the language to formulate what the problem was, but once I learned what d3c is doing, I knew it was exactly what I hoped to do.” The problem was that the algorithms running mobile health devices and their applications were too simple to make the data they collected useful to individuals. But there was a major opportunity to increase the utility of this data by integrating it with the healthcare people already receive. What if wearables and other mobile devices could give clinicians access to actionable information about their patients’ day-to-day health, allowing them to make targeted adjustments to treatment in the role of the expert coach? Unfortunately, the scientists behind these algorithms weren’t yet able to answer key questions about how to make the data useful in practice settings. The methods they would need to investigate these questions simply didn’t exist.
Laying a New Foundation
Today, Ferlic is working to develop the methods researchers need to build intelligent technology-assisted clinical interventions equal in sophistication to the platforms they run on. In the traditional clinical setting, Ferlic explains, the process of adaptation in an adaptive intervention occurs relatively slowly: “In adaptive interventions where you only interact with a patient through clinical visits, you’re not really able to adapt as soon as possible. There’s a lag between when you can evaluate response and when you can update the treatment. This is because the clinician only has access to the data at distinct clinic visits, which are typically anywhere between two and eight weeks apart. But with the addition of a digital health tool, we can augment in-person monitoring at clinic visits based on more continuous check-ins. This means that we can monitor and sense when a particular event occurs within the life of the patient, which tells the clinician whether they need to intensify, augment, or switch to an alternative treatment.” To learn which real-time measures indicate a need to adapt treatment, and how best to respond based on these measures, researchers need a new class of randomized trial designs and associated data-analytic methods.
Consider a weight loss intervention for the treatment of obesity as an example. A typical clinic-delivered adaptive intervention for weight loss may check in on patient progress only during clinic visits marking the end of weeks two, four, and eight of treatment. If at any of these points the patient’s rate of weight loss sits below a certain threshold, the patient will be offered a new intervention. What such an adaptive intervention misses is information from the time between clinic visits—variability in weight between clinic visits, or how quickly someone met the criteria for non-response to the intervention, for example—that could help a clinician determine the best way to alter treatment at the two, four, and eight week time points.
“By incorporating a variable like time to non-response in the decision rule, we can more deeply tailor the adaptive intervention,” Ferlic explains. In a weight loss intervention enhanced by event-triggered tailoring, someone who meets the criteria for non-response during week two may need vigorous augmentation to treatment in order to meet their weight loss goal, whereas someone who continues to meet the criteria for response until the middle of week seven may only need minor augmentation to meet their weight loss goal.
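The kind of event-triggered decision rule described above can be sketched in code. The sketch below is purely illustrative: the threshold values, the function name, and the cutoff week are invented for this example and are not part of any actual d3center intervention.

```python
from typing import Optional

# Hypothetical values chosen only for illustration; an optimized
# intervention would learn these from a SMART.
RESPONSE_THRESHOLD_LBS_PER_WEEK = 1.0   # minimum rate of weight loss to count as responding
EARLY_NONRESPONSE_CUTOFF_WEEK = 4.0     # non-response before this week suggests stronger augmentation

def recommend(rate_of_loss: float, week_of_nonresponse: Optional[float]) -> str:
    """Sketch of an event-triggered decision rule at a clinic visit.

    rate_of_loss: average weight loss (lbs/week) since the last check-in
    week_of_nonresponse: week at which the patient first met the
        non-response criterion, or None if they are still responding
    """
    # Responders stay on the current treatment.
    if rate_of_loss >= RESPONSE_THRESHOLD_LBS_PER_WEEK:
        return "continue current treatment"
    # Early non-response (e.g., during week two) triggers a stronger change.
    if week_of_nonresponse is not None and week_of_nonresponse < EARLY_NONRESPONSE_CUTOFF_WEEK:
        return "vigorous augmentation"
    # Late non-response (e.g., mid-week seven) may need only a minor change.
    return "minor augmentation"
```

Here, the time-to-non-response variable collected between clinic visits is what deepens the tailoring: two patients with the same rate of weight loss at week eight receive different recommendations depending on when the non-response event occurred.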
Sequential Multiple-Assignment Randomized Trials (SMARTs) can be designed to develop optimized event-triggered adaptive interventions. In concert with new methods, these trials will allow researchers to develop adaptive interventions capable of offering timely recommendations that account for each patient’s context, especially their trajectory of response. Once Ferlic conceptualizes the design principles and analytical methods necessary to process data from this variety of SMART, he aims to build capacity for more complex applications of the event-triggered tailoring concept. Someday, these specialized SMARTs may accommodate multiple tailoring variables, collected in real time, enabling researchers to optimize adaptive interventions that leverage a combination of digital sensing and user-reported data to supercharge the effectiveness of clinical interventions.
Wherever the science takes him, Ferlic feels certain about a future in which a combination of low-cost wearable tech and evidence-based intervention design makes it easier for everyone to attain their health and fitness goals. “I am really excited to see the real-world impact of the new methods we are developing at d3c. In the long run, my hope is that invaluable decision-making expertise can be packaged, at least partially, into adaptive algorithms for digital devices, reducing the barrier to quality health coaching for those most in need.”