TY - JOUR
T1 - A deep learning approach to re-create raw full-field digital mammograms for breast density and texture analysis
AU - Shu, Hai
AU - Chiang, Tingyu
AU - Wei, Peng
AU - Do, Kim-Anh
AU - Lesslie, Michele D.
AU - Cohen, Ethan O.
AU - Srinivasan, Ashmitha
AU - Moseley, Tanya W.
AU - Chang Sen, Lauren Q.
AU - Leung, Jessica W.T.
AU - Dennison, Jennifer B.
AU - Hanash, Sam M.
AU - Weaver, Olena O.
N1 - Funding Information:
Supported in part by grants from National Institutes of Health/National Cancer Institute Cancer Center Support Grant (P30 CA016672), Little Green Book Foundation, Center for Global Early Detection at MD Anderson, and McCombs Institute at MD Anderson. We are grateful to Scientific Publications, Research Medical Library, The University of Texas MD Anderson Cancer Center, for editing assistance.
Funding Information:
Activities related to the present article: institution received National Institutes of Health (NIH)/National Cancer Institute (NCI) Cancer Center Support Grant (P30 CA016672); institution supported by Little Green Book Foundation, Center for Global Early Detection at MD Anderson, and McCombs Institute at MD Anderson. Activities not related to the present article: disclosed no relevant relationships. Other relationships: disclosed no relevant relationships. P.W. Activities related to the present article: institution received NIH/NCI Cancer Center Support Grant (P30 CA016672). Activities not related to the present article: disclosed no relevant relationships. K.A.D. disclosed no relevant relationships. M.D.L. disclosed no relevant relationships. E.O.C. disclosed no relevant relationships. A.S. disclosed no relevant relationships. T.W.M. Activities related to the present article: institution received NIH/NCI Cancer Center Support Grant (P30 CA016672). Activities not related to the present article: author is paid medical consultant for Hologic and Merit Medical. Other relationships: disclosed no relevant relationships. L.C.S. Activities related to the present article: institution received NIH/NCI Cancer Center Support Grant (P30 CA016672); institution received support from Little Green Book Foundation, Center for Global Early Detection at MD Anderson, and McCombs Institute at MD Anderson. Activities not related to the present article: disclosed no relevant relationships. Other relationships: disclosed no relevant relationships. J.W.T.L. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: author paid by Fujifilm and GE Healthcare for lectures; author has stock/stock options in Subtle Medical (start-up company, stock/stock options have no monetary value at this time). Other relationships: disclosed no relevant relationships. J.B.D. Activities related to the present article: institution supported by Little Green Book Foundation (has supported breast cancer mammography clinical research for the early detection, the MERIT program). Activities not related to the present article: employed by MD Anderson. Other relationships: disclosed no relevant relationships. S.M.H. disclosed no relevant relationships. O.O.W. Activities related to the present article: institution received NIH/NCI Cancer Center Support Grant (P30 CA016672); institution received support from Little Green Book Foundation (sponsor of the patient cohort retrospectively used in the study). Activities not related to the present article: disclosed no relevant relationships. Other relationships: disclosed no relevant relationships.
Publisher Copyright:
© RSNA, 2021.
PY - 2021
Y1 - 2021
N2 - Purpose: To develop a computational approach to re-create rarely stored for-processing (raw) digital mammograms from routinely stored for-presentation (processed) mammograms. Materials and Methods: In this retrospective study, pairs of raw and processed mammograms collected in 884 women (mean age, 57 years ± 10 [standard deviation]; 3713 mammograms) from October 5, 2017, to August 1, 2018, were examined. Mammograms were split 3088 for training and 625 for testing. A deep learning approach based on a U-Net convolutional network and kernel regression was developed to estimate the raw images. The estimated raw images were compared with the originals by four image error and similarity metrics, breast density calculations, and 29 widely used texture features. Results: In the testing dataset, the estimated raw images had small normalized mean absolute error (0.022 ± 0.015), scaled mean absolute error (0.134 ± 0.078), and mean absolute percentage error (0.115 ± 0.059), and a high structural similarity index (0.986 ± 0.007) for the breast portion compared with the original raw images. The estimated and original raw images had a strong correlation in breast density percentage (Pearson r = 0.946) and a strong agreement in breast density grade (Cohen κ = 0.875). The estimated images had satisfactory correlations with the originals in 23 texture features (Pearson r ≥ 0.503 or Spearman r ≥ 0.705) and were well complemented by processed images for the other six features. Conclusion: This deep learning approach performed well in re-creating raw mammograms with strong agreement in four image evaluation metrics, breast density, and the majority of 29 widely used texture features.
AB - Purpose: To develop a computational approach to re-create rarely stored for-processing (raw) digital mammograms from routinely stored for-presentation (processed) mammograms. Materials and Methods: In this retrospective study, pairs of raw and processed mammograms collected in 884 women (mean age, 57 years ± 10 [standard deviation]; 3713 mammograms) from October 5, 2017, to August 1, 2018, were examined. Mammograms were split 3088 for training and 625 for testing. A deep learning approach based on a U-Net convolutional network and kernel regression was developed to estimate the raw images. The estimated raw images were compared with the originals by four image error and similarity metrics, breast density calculations, and 29 widely used texture features. Results: In the testing dataset, the estimated raw images had small normalized mean absolute error (0.022 ± 0.015), scaled mean absolute error (0.134 ± 0.078), and mean absolute percentage error (0.115 ± 0.059), and a high structural similarity index (0.986 ± 0.007) for the breast portion compared with the original raw images. The estimated and original raw images had a strong correlation in breast density percentage (Pearson r = 0.946) and a strong agreement in breast density grade (Cohen κ = 0.875). The estimated images had satisfactory correlations with the originals in 23 texture features (Pearson r ≥ 0.503 or Spearman r ≥ 0.705) and were well complemented by processed images for the other six features. Conclusion: This deep learning approach performed well in re-creating raw mammograms with strong agreement in four image evaluation metrics, breast density, and the majority of 29 widely used texture features.
UR - http://www.scopus.com/inward/record.url?scp=85113815159&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85113815159&partnerID=8YFLogxK
U2 - 10.1148/ryai.2021200097
DO - 10.1148/ryai.2021200097
M3 - Article
AN - SCOPUS:85113815159
SN - 2638-6100
VL - 3
JO - Radiology: Artificial Intelligence
JF - Radiology: Artificial Intelligence
IS - 4
M1 - e200097
ER -