TY - CONF
T1 - Reproducibility in machine learning for health
AU - McDermott, Matthew B.A.
AU - Wang, Shirly
AU - Marinsek, Nikki
AU - Ranganath, Rajesh
AU - Ghassemi, Marzyeh
AU - Foschini, Luca
N1 - Funding Information:
This paper benefited substantially from the help of many people. Most notably, Bret Nestor, Amy Lu, Denny Wu, Elena Sergeeva, and Di Jin all helped annotate papers for our analysis. Additionally, this work was funded in part by the National Institutes of Health: National Institute of Mental Health grant P50-MH106933, as well as by University of Toronto CIFAR Chair support.
Publisher Copyright:
© RML@ICLR 2019 Workshop - Reproducibility in Machine Learning. All Rights Reserved.
PY - 2019/1/1
Y1 - 2019/1/1
N2 - Machine learning algorithms designed to characterize, monitor, and intervene on human health (ML4H) are expected to perform safely and reliably when operating at scale, potentially outside strict human supervision. This requirement warrants stricter attention to issues of reproducibility than in other fields of machine learning. In this work, we conduct a systematic evaluation of over 100 recently published ML4H research papers along several dimensions related to reproducibility. We find that the field of ML4H compares poorly to more established machine learning fields, particularly concerning data and code accessibility. Finally, drawing from successes in other fields of science, we propose recommendations to data providers, academic publishers, and the ML4H research community to promote reproducible research moving forward.
AB - Machine learning algorithms designed to characterize, monitor, and intervene on human health (ML4H) are expected to perform safely and reliably when operating at scale, potentially outside strict human supervision. This requirement warrants stricter attention to issues of reproducibility than in other fields of machine learning. In this work, we conduct a systematic evaluation of over 100 recently published ML4H research papers along several dimensions related to reproducibility. We find that the field of ML4H compares poorly to more established machine learning fields, particularly concerning data and code accessibility. Finally, drawing from successes in other fields of science, we propose recommendations to data providers, academic publishers, and the ML4H research community to promote reproducible research moving forward.
UR - http://www.scopus.com/inward/record.url?scp=85071324959&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85071324959&partnerID=8YFLogxK
M3 - Paper
T2 - 2019 Reproducibility in Machine Learning, RML@ICLR 2019 Workshop
Y2 - 6 May 2019
ER -