TY - JOUR
T1 - Evaluating Digital Health Interventions
T2 - Key Questions and Approaches
AU - Murray, Elizabeth
AU - Hekler, Eric B.
AU - Andersson, Gerhard
AU - Collins, Linda M.
AU - Doherty, Aiden
AU - Hollis, Chris
AU - Rivera, Daniel E.
AU - West, Robert
AU - Wyatt, Jeremy C.
N1 - Funding Information:
This 2016 theme issue of the American Journal of Preventive Medicine was supported by funding from the NIH Office of Behavioral and Social Sciences Research (OBSSR) to disseminate research on digital health interventions, methods, and implications for preventive medicine.
Funding Information:
This paper is one of the outputs of two workshops, one supported by the Medical Research Council (MRC)/National Institute for Health Research (NIHR) Methodology Research Program (PI Susan Michie), the OBSSR (William Riley, Director), and the Robert Wood Johnson Foundation (PI Kevin Patrick); and the other by the National Science Foundation (PI Donna Spruijt-Metz, proposal # 1539846).
Funding Information:
Author Aiden Doherty is supported by the British Heart Foundation Centre of Research Excellence at Oxford [grant number RE/13/1/30181].
Publisher Copyright:
© 2016 American Journal of Preventive Medicine
PY - 2016/11/1
Y1 - 2016/11/1
N2 - Digital health interventions have enormous potential as scalable tools to improve health and healthcare delivery by improving effectiveness, efficiency, accessibility, safety, and personalization. Achieving these improvements requires a cumulative knowledge base to inform development and deployment of digital health interventions. However, evaluations of digital health interventions present special challenges. This paper aims to examine these challenges and outline an evaluation strategy in terms of the research questions needed to appraise such interventions. As they are at the intersection of biomedical, behavioral, computing, and engineering research, methods drawn from all of these disciplines are required. Relevant research questions include defining the problem and the likely benefit of the digital health intervention, which in turn requires establishing the likely reach and uptake of the intervention, the causal model describing how the intervention will achieve its intended benefit, key components, and how they interact with one another, and estimating overall benefit in terms of effectiveness, cost effectiveness, and harms. Although RCTs are important for evaluation of effectiveness and cost effectiveness, they are best undertaken only when: (1) the intervention and its delivery package are stable; (2) these can be implemented with high fidelity; and (3) there is a reasonable likelihood that the overall benefits will be clinically meaningful (improved outcomes or equivalent outcomes at lower cost). Broadening the portfolio of research questions and evaluation methods will help with developing the necessary knowledge base to inform decisions on policy, practice, and research.
UR - http://www.scopus.com/inward/record.url?scp=84994138772&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84994138772&partnerID=8YFLogxK
U2 - 10.1016/j.amepre.2016.06.008
DO - 10.1016/j.amepre.2016.06.008
M3 - Editorial
C2 - 27745684
AN - SCOPUS:84994138772
SN - 0749-3797
VL - 51
SP - 843
EP - 851
JO - American Journal of Preventive Medicine
JF - American Journal of Preventive Medicine
IS - 5
ER -