Improving the Ability of Deep Neural Networks to Use Information from Multiple Views in Breast Cancer Screening

Nan Wu, Stanisław Jastrzębski, Jungkyu Park, Linda Moy, Kyunghyun Cho, Krzysztof J. Geras

Research output: Contribution to journal · Conference article · peer-review

Abstract

In breast cancer screening, radiologists make the diagnosis based on images that are taken from two angles. Inspired by this, we seek to improve the performance of deep neural networks applied to this task by encouraging the model to use information from both views of the breast. First, we took a closer look at the training process and observed an imbalance between learning from the two views. In particular, we observed that layers processing one of the views have parameters with larger gradients in magnitude, and contribute more to the overall loss reduction. Next, we tested several methods targeted at utilizing both views more equally in training. We found that using the same weights to process both views, or using modality dropout, leads to a boost in performance. Looking forward, our results indicate improving learning dynamics as a promising avenue for improving utilization of multiple views in deep neural networks for medical diagnosis.
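The abstract names two techniques that improved performance: processing both views with the same weights, and modality dropout. The paper's actual models are deep neural networks; the toy sketch below only illustrates the two ideas in miniature, and all function names here are our own, not from the paper. Weight sharing means one set of parameters scores each view; modality dropout randomly zeroes out one view's features during training so the model cannot rely on a single view.

```python
import random


def shared_linear(weights, x):
    # One shared set of weights applied to a feature vector.
    return sum(w * xi for w, xi in zip(weights, x))


def two_view_score(weights, view_cc, view_mlo):
    # Weight sharing: the SAME weights process both views;
    # the per-view scores are then averaged.
    return 0.5 * (shared_linear(weights, view_cc)
                  + shared_linear(weights, view_mlo))


def modality_dropout(view_a, view_b, p=0.2, rng=None):
    # With probability p, zero out one randomly chosen view's
    # features so training cannot over-rely on either view.
    rng = rng or random.Random()
    if rng.random() < p:
        if rng.random() < 0.5:
            view_a = [0.0] * len(view_a)
        else:
            view_b = [0.0] * len(view_b)
    return view_a, view_b
```

In a real screening model the two inputs would be the craniocaudal (CC) and mediolateral oblique (MLO) mammographic views, each encoded by a convolutional network rather than a linear map; the sharing and dropout logic is structurally the same.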

Original language: English (US)
Pages (from-to): 827-842
Number of pages: 16
Journal: Proceedings of Machine Learning Research
Volume: 121
State: Published - 2020
Event: 3rd Conference on Medical Imaging with Deep Learning, MIDL 2020 - Virtual, Online, Canada
Duration: Jul 6, 2020 to Jul 8, 2020

Keywords

  • Breast cancer screening
  • deep neural networks
  • multimodal learning
  • multiview learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
