TY - JOUR
T1 - Cross-Task Feedback Fusion GAN for Joint MR-CT Synthesis and Segmentation of Target and Organs-at-Risk
AU - Zhang, Yiwen
AU - Zhong, Liming
AU - Shu, Hai
AU - Dai, Zhenhui
AU - Zheng, Kaiyi
AU - Chen, Zefeiyun
AU - Feng, Qianjin
AU - Wang, Xuetao
AU - Yang, Wei
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2023/10/1
Y1 - 2023/10/1
N2 - The synthesis of computed tomography (CT) images from magnetic resonance (MR) images and the segmentation of the target and organs-at-risk (OARs) are two important tasks in MR-only radiotherapy treatment planning (RTP). Several methods have been proposed to utilize paired MR and CT images for MR-CT synthesis or for target and OARs segmentation. However, these methods usually handle synthesis and segmentation as two separate tasks and ignore the registration errors that inevitably remain in paired images after standard registration. In this article, we propose a cross-task feedback fusion generative adversarial network (CTFF-GAN) for joint MR-CT synthesis and segmentation of the target and OARs to enhance the performance of each task. Specifically, we propose a cross-task feedback fusion (CTFF) module that feeds semantic information back from the segmentation task to the synthesis task to correct anatomical structures in synthetic CT images. In addition, we use CT images synthesized from MR images for multimodal segmentation to eliminate the registration errors. Moreover, we develop a multitask discriminator that urges the generator to devote more attention to organ boundaries. Experiments on our nasopharyngeal carcinoma dataset show that CTFF-GAN achieves an MAE of 70.69 ± 10.50 HU, an SSIM of 0.755 ± 0.03, and a PSNR of 27.44 ± 1.20 dB for synthetic CT, and a mean Dice score of 0.783 ± 0.075 for target and OARs segmentation. Our CTFF-GAN outperforms state-of-the-art methods in both the synthesis and segmentation tasks.
AB - The synthesis of computed tomography (CT) images from magnetic resonance (MR) images and the segmentation of the target and organs-at-risk (OARs) are two important tasks in MR-only radiotherapy treatment planning (RTP). Several methods have been proposed to utilize paired MR and CT images for MR-CT synthesis or for target and OARs segmentation. However, these methods usually handle synthesis and segmentation as two separate tasks and ignore the registration errors that inevitably remain in paired images after standard registration. In this article, we propose a cross-task feedback fusion generative adversarial network (CTFF-GAN) for joint MR-CT synthesis and segmentation of the target and OARs to enhance the performance of each task. Specifically, we propose a cross-task feedback fusion (CTFF) module that feeds semantic information back from the segmentation task to the synthesis task to correct anatomical structures in synthetic CT images. In addition, we use CT images synthesized from MR images for multimodal segmentation to eliminate the registration errors. Moreover, we develop a multitask discriminator that urges the generator to devote more attention to organ boundaries. Experiments on our nasopharyngeal carcinoma dataset show that CTFF-GAN achieves an MAE of 70.69 ± 10.50 HU, an SSIM of 0.755 ± 0.03, and a PSNR of 27.44 ± 1.20 dB for synthetic CT, and a mean Dice score of 0.783 ± 0.075 for target and OARs segmentation. Our CTFF-GAN outperforms state-of-the-art methods in both the synthesis and segmentation tasks.
KW - Feedback fusion mechanism
KW - MR-only radiotherapy treatment planning (RTP)
KW - generative adversarial network
KW - joint synthesis and segmentation
UR - http://www.scopus.com/inward/record.url?scp=85133733196&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85133733196&partnerID=8YFLogxK
U2 - 10.1109/TAI.2022.3187388
DO - 10.1109/TAI.2022.3187388
M3 - Article
AN - SCOPUS:85133733196
SN - 2691-4581
VL - 4
SP - 1246
EP - 1257
JO - IEEE Transactions on Artificial Intelligence
JF - IEEE Transactions on Artificial Intelligence
IS - 5
ER -