Advancing mHealth with artificial intelligence

As more mHealth interventions are deployed, the question of how to inform their design and increase their effectiveness becomes increasingly important. Advances in artificial intelligence have created the opportunity to develop just-in-time adaptive interventions (JITAIs) with decision rules that change based on individual user data. We are developing the methods researchers need to leverage the potential of AI as they design these advanced interventions.

Personalizing Just-in-Time Adaptive Interventions (pJITAIs)

What are JITAIs?

Just-in-Time Adaptive Interventions (JITAIs) are a type of intervention designed to provide support exactly when an individual needs it. Typically delivered via mobile devices, these interventions are tailored based on real-time data about the user’s context, such as their current activity level, location, and time of day. Standard JITAIs operate on fixed, deterministic decision rules that determine when and what type of intervention to deliver.
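
To make the contrast concrete, here is a minimal sketch of what such a fixed decision rule might look like in code. The context variables, thresholds, and intervention option name are illustrative assumptions, not rules from any particular trial.

```python
# A minimal sketch of a fixed, deterministic JITAI decision rule.
# The context fields, thresholds, and option name are illustrative assumptions.

from dataclasses import dataclass
from datetime import time
from typing import Optional

@dataclass
class Context:
    step_count_last_hour: int   # steps recorded over the past hour
    local_time: time            # the user's local clock time
    is_driving: bool            # safety check: never prompt while driving

def fixed_decision_rule(ctx: Context) -> Optional[str]:
    """Return an intervention option to deliver, or None to withhold support."""
    if ctx.is_driving:
        return None
    if ctx.step_count_last_hour < 250 and time(9, 0) <= ctx.local_time <= time(20, 0):
        return "walking_suggestion"   # sedentary during waking hours -> prompt a walk
    return None

# Example: a sedentary user at 3 p.m. receives a walking suggestion.
print(fixed_decision_rule(Context(step_count_last_hour=120,
                                  local_time=time(15, 0),
                                  is_driving=False)))
```

The rule never changes: every user with the same context receives the same decision, which is exactly what pJITAIs relax.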

The Evolution to pJITAIs

Unlike standard JITAIs, personalized JITAIs (pJITAIs) use artificial intelligence to continuously learn from participant data and update the decision rules accordingly. This allows a pJITAI to dynamically adjust when and which intervention options are delivered, optimizing engagement and response. By learning from the user’s behavior and context over time, pJITAIs can provide more effective and personalized support.

Reinforcement Learning (RL) Algorithms in pJITAIs

Reinforcement Learning (RL)

Reinforcement Learning (RL) is a branch of artificial intelligence in which an algorithm learns to make decisions from the feedback its actions produce. In the context of pJITAIs, RL algorithms use real-time data to continuously improve the JITAI decision rules. The RL algorithm acts by delivering an intervention option to the user; based on how the user responds to that option, it then refines the decision rules to improve engagement and health outcomes.
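
As a rough illustration of this feedback loop, the sketch below uses Thompson sampling, one common RL approach for selecting among intervention options. The option names, the engagement reward, and the priors are illustrative assumptions, not the algorithm used in any specific trial.

```python
# A minimal Thompson-sampling sketch of the RL loop described above.
# Option names, the engagement reward, and the Beta(1, 1) priors are assumptions.

import random

options = ["no_message", "reminder", "motivational_message"]
# Posterior beliefs about the probability that each option leads to engagement.
posterior = {opt: {"success": 1, "failure": 1} for opt in options}

def choose_option() -> str:
    """Sample a plausible engagement rate for each option and pick the best draw."""
    draws = {opt: random.betavariate(p["success"], p["failure"])
             for opt, p in posterior.items()}
    return max(draws, key=draws.get)

def update(option: str, engaged: bool) -> None:
    """Refine the decision rule using the user's observed response."""
    posterior[option]["success" if engaged else "failure"] += 1

# One decision point: deliver an option, observe the response, update beliefs.
chosen = choose_option()
user_engaged = random.random() < 0.5   # placeholder for the real observed outcome
update(chosen, user_engaged)
print(chosen, posterior[chosen])
```

Repeating this choose–observe–update cycle at each decision point is what lets the decision rules drift toward the options a particular user actually responds to.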

Benefits of pJITAIs with RL Algorithms

Enhanced Engagement

By providing timely and relevant interventions, pJITAIs keep users engaged and motivated.

Improved Outcomes

Personalized interventions are more likely to result in positive behavior changes and health improvements.

Dynamic Adaptation

RL algorithms enable interventions to evolve with the user’s changing needs and contexts.

Our Pioneering Work in RL for pJITAIs

Our team has been at the forefront of applying RL to pJITAIs. Supported by grants from the NIH/NIDA and the U-M Rogel Cancer Center, we are developing an RL-powered pJITAI to enhance medication adherence among young cancer patients post-hematopoietic stem cell transplantation. Features of this pJITAI include:

  • Automated Learning Algorithm: The RL system continuously learns from data on the young patient, the patient’s care partner, and their relationship to decide when to intervene and which intervention options best fit the patient’s needs (see the sketch following this list).
  • Real-Time Decision-Making: Decisions about when to intervene and which intervention option is best are tailored using real-time data, ensuring that support is delivered at the most effective moments.
  • Continuous Personalization: The algorithm refines the pJITAI decision rules over time, improving the personalization and effectiveness of the interventions.
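
To give a rough sense of how such real-time, context-aware decisions can incorporate data from both the patient and the care partner, the sketch below implements contextual Thompson sampling over simple Bayesian linear reward models. The features, priors, reward, and noise variance are illustrative assumptions, not the algorithm deployed in the trial.

```python
# Illustrative contextual Thompson sampling for one decision point in a pJITAI.
# The features, priors, reward, and noise variance are assumptions for this
# sketch, not the algorithm deployed in the trial described above.

import numpy as np

rng = np.random.default_rng(0)

def featurize(patient_adherent_yesterday: bool,
              partner_available: bool,
              recent_conflict: bool) -> np.ndarray:
    """Context drawn from the patient, the care partner, and their relationship."""
    return np.array([1.0,                                # intercept
                     float(patient_adherent_yesterday),
                     float(partner_available),
                     float(recent_conflict)])

class BayesLinearArm:
    """Bayesian linear model of the reward (e.g., next-day adherence) for one option."""

    def __init__(self, dim: int, noise_var: float = 1.0):
        self.mean = np.zeros(dim)     # posterior mean of the model weights
        self.cov = np.eye(dim)        # posterior covariance of the model weights
        self.noise_var = noise_var

    def sample_prediction(self, x: np.ndarray) -> float:
        """Draw one plausible set of weights and predict the reward for context x."""
        theta = rng.multivariate_normal(self.mean, self.cov)
        return float(theta @ x)

    def update(self, x: np.ndarray, reward: float) -> None:
        """Standard Bayesian linear-regression update after observing the reward."""
        old_precision = np.linalg.inv(self.cov)
        new_precision = old_precision + np.outer(x, x) / self.noise_var
        self.cov = np.linalg.inv(new_precision)
        self.mean = self.cov @ (old_precision @ self.mean + x * reward / self.noise_var)

arms = {"no_prompt": BayesLinearArm(dim=4),
        "adherence_prompt": BayesLinearArm(dim=4)}

# One decision point: featurize the current context, choose the option whose
# sampled prediction is highest, then update that option's model with the outcome.
x = featurize(patient_adherent_yesterday=False,
              partner_available=True,
              recent_conflict=False)
chosen = max(arms, key=lambda name: arms[name].sample_prediction(x))
observed_reward = 1.0   # placeholder for the adherence outcome observed later
arms[chosen].update(x, observed_reward)
```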

Featured Resources

Ghosh, S., Guo, Y., Hung, P., Coughlin, L. N., Bonar, E. E., Nahum-Shani, I., Walton, M., & Murphy, S. (2024). reBandit: Random Effects based Online RL algorithm for Reducing Cannabis Use. arXiv preprint arXiv:2402.17739.

Nahum-Shani, I., Greer, Z. M., Trella, A. L., Zhang, K. W., Carpenter, S. M., Rünger, D., Elashoff, D., Murphy, S. A., & Shetty, V. (2024). Optimizing an adaptive digital oral health intervention for promoting oral self-care behaviors: Micro-randomized trial protocol. Contemporary clinical trials, 139, 107464. https://doi.org/10.1016/j.cct.2024.107464

Rabbi, M., Philyaw Kotov, M., Cunningham, R.M., Bonar, E.E., Nahum-Shani, I., Klasnja, P.V., Walton, M.A., & Murphy, S.A. (2018). Toward Increasing Engagement in Substance Use Data Collection: Development of the Substance Abuse Research Assistant App and Protocol for a Microrandomized Trial Using Adolescents and Emerging Adults. JMIR Research Protocols, 7.

Trella, A.L., Zhang, K.W., Nahum-Shani, I., Shetty, V., Doshi-Velez, F., & Murphy, S.A. (2022). Designing Reinforcement Learning Algorithms for Digital Interventions: Pre-Implementation Guidelines. Algorithms, 15.

Join Our Network

Be the first to hear about new resources, opportunities, and advancements related to adaptive interventions and intervention optimization.