Cross-Task Feedback Fusion GAN for Joint MR-CT Synthesis and Segmentation of Target and Organs-At-Risk

Yiwen Zhang, Liming Zhong, Hai Shu, Zhenhui Dai, Kaiyi Zheng, Zefeiyun Chen, Qianjin Feng, Xuetao Wang, Wei Yang

Research output: Contribution to journal › Article › peer-review

Abstract

The synthesis of computed tomography (CT) images from magnetic resonance (MR) images and the segmentation of the target and organs-at-risk (OARs) are two important tasks in MR-only radiotherapy treatment planning (RTP). Several methods have been proposed to exploit paired MR and CT images for MR-CT synthesis or for target and OARs segmentation. However, these methods usually handle synthesis and segmentation as two separate tasks and ignore the inevitable registration errors that remain in paired images after standard registration. In this paper, we propose a cross-task feedback fusion generative adversarial network (CTFF-GAN) for joint MR-CT synthesis and segmentation of the target and OARs, in which each task enhances the other. Specifically, we propose a cross-task feedback fusion (CTFF) module that feeds semantic information back from the segmentation task to the synthesis task to correct anatomical structures in the synthetic CT images. In addition, we use the CT images synthesized from MR images for multi-modal segmentation, thereby eliminating the registration errors. Moreover, we develop a multi-task discriminator that urges the generator to devote more attention to organ boundaries. Experiments on our nasopharyngeal carcinoma dataset show that CTFF-GAN achieves an MAE of 70.69 ± 10.50 HU, an SSIM of 0.755 ± 0.03, and a PSNR of 27.44 ± 1.20 dB for synthetic CT, and a mean Dice of 0.783 ± 0.075 for target and OARs segmentation. CTFF-GAN outperforms state-of-the-art methods in both the synthesis and segmentation tasks.

Impact Statement: Radiation therapy is a crucial part of cancer treatment; nearly half of all cancer patients receive it at some point during their illness. It usually takes a radiation oncologist several hours to delineate the target and organs-at-risk (OARs) for radiotherapy treatment planning (RTP). Worse, the inevitable registration errors between computed tomography (CT) and magnetic resonance (MR) images increase the uncertainty of the delineation. Although deep-learning-based segmentation and synthesis methods have been proposed to address these difficulties separately, they ignore the potential relationship between the two tasks. The method proposed in this paper exploits the synergy between synthesis and segmentation and achieves superior performance in both tasks. It can automatically perform MR-CT synthesis and segmentation of the target and OARs from MR images alone in half a minute, which will simplify the RTP workflow and improve the efficiency of radiation oncologists.
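The cross-task feedback idea described in the abstract can be illustrated with a minimal sketch. The PyTorch code below is a hypothetical illustration, not the authors' implementation: the names CTFFBlock and JointGenerator are invented here, the feedback is shown at a single scale, and the multi-task discriminator and all loss terms are omitted. It shows only the core mechanism: a shared encoder produces features, a segmentation head predicts class logits, and those logits are fused back into the synthesis path before the synthetic-CT head.

```python
# Hypothetical sketch of cross-task feedback fusion (not the authors' code).
import torch
import torch.nn as nn

class CTFFBlock(nn.Module):
    """Fuses segmentation features back into the synthesis path (assumed design)."""
    def __init__(self, syn_ch: int, seg_ch: int):
        super().__init__()
        # 1x1 conv projects the concatenated features back to the synthesis width.
        self.fuse = nn.Sequential(
            nn.Conv2d(syn_ch + seg_ch, syn_ch, kernel_size=1),
            nn.InstanceNorm2d(syn_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, syn_feat: torch.Tensor, seg_feat: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([syn_feat, seg_feat], dim=1))

class JointGenerator(nn.Module):
    """Toy two-branch generator: MR -> (synthetic CT, segmentation logits)."""
    def __init__(self, n_classes: int = 4, ch: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(ch, n_classes, 1)        # segmentation branch
        self.ctff = CTFFBlock(syn_ch=ch, seg_ch=n_classes)  # feedback fusion
        self.syn_head = nn.Conv2d(ch, 1, 1)                 # synthesis branch

    def forward(self, mr: torch.Tensor):
        feat = self.encoder(mr)
        seg_logits = self.seg_head(feat)
        # Feed segmentation semantics back to correct anatomy in the synthetic CT.
        fused = self.ctff(feat, seg_logits.detach())
        sct = torch.tanh(self.syn_head(fused))              # synthetic CT in [-1, 1]
        return sct, seg_logits

mr = torch.randn(2, 1, 128, 128)                            # batch of 2D MR slices
sct, seg = JointGenerator()(mr)
print(sct.shape, seg.shape)  # torch.Size([2, 1, 128, 128]) torch.Size([2, 4, 128, 128])
```

Detaching the segmentation logits before fusion is one plausible design choice, keeping the synthesis loss from back-propagating into the segmentation head; the paper's actual fusion strategy, multi-scale structure, and training objectives may differ.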

Original language: English (US)
Pages (from-to): 1-12
Number of pages: 12
Journal: IEEE Transactions on Artificial Intelligence
State: Accepted/In press - 2022

Keywords

  • Computed tomography
  • Feedback fusion mechanism
  • Generative adversarial networks
  • Image edge detection
  • Image segmentation
  • joint synthesis and segmentation
  • MR-only radiotherapy treatment planning
  • Standards
  • Task analysis
  • Training

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
