TY - GEN
T1 - Microscopy Image Segmentation via Point and Shape Regularized Data Synthesis
AU - Li, Shijie
AU - Ren, Mengwei
AU - Ach, Thomas
AU - Gerig, Guido
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
N2 - Current deep learning-based approaches for the segmentation of microscopy images heavily rely on large amounts of training data with dense annotations, which are highly costly and laborious to obtain in practice. Compared to full annotation, where the complete contour of each object is depicted, point annotations, specifically object centroids, are much easier to acquire and still provide crucial information about the objects for subsequent segmentation. In this paper, we assume access to point annotations only during training and develop a unified pipeline for microscopy image segmentation using synthetically generated training data. Our framework includes three stages: (1) it takes point annotations and samples a pseudo dense segmentation mask constrained by shape priors; (2) with an image generative model trained in an unpaired manner, it translates the mask into a realistic microscopy image regularized by object-level consistency; (3) the pseudo masks along with the synthetic images then constitute a pairwise dataset for training an ad-hoc segmentation model. On the public MoNuSeg dataset, our synthesis pipeline produces more diverse and realistic images than baseline models while maintaining high coherence between input masks and generated images. When using identical segmentation backbones, models trained on our synthetic dataset significantly outperform those trained with pseudo-labels or baseline-generated images. Moreover, our framework achieves results comparable to models trained on authentic microscopy images with dense labels, demonstrating its potential as a reliable and highly efficient alternative to labor-intensive manual pixel-wise annotation in microscopy image segmentation. The code can be accessed at https://github.com/CJLee94/Points2Image.
UR - http://www.scopus.com/inward/record.url?scp=85192878899&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85192878899&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-58171-7_3
DO - 10.1007/978-3-031-58171-7_3
M3 - Conference contribution
AN - SCOPUS:85192878899
SN - 9783031581700
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 23
EP - 32
BT - Data Augmentation, Labelling, and Imperfections - 3rd MICCAI Workshop, DALI 2023 Held in Conjunction with MICCAI 2023, Proceedings
A2 - Xue, Yuan
A2 - Chen, Chen
A2 - Chen, Chao
A2 - Zuo, Lianrui
A2 - Liu, Yihao
PB - Springer Science and Business Media Deutschland GmbH
T2 - 3rd International Workshop on Data Augmentation, Labeling, and Imperfections, DALI 2023 in conjunction with the 26th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2023
Y2 - 12 October 2023 through 12 October 2023
ER -