Abstract
Medical image segmentation is a key initial step in many therapeutic applications. Most automatic segmentation models are supervised and require a well-annotated paired dataset; in contrast, we introduce a novel annotation-free pipeline for segmenting COVID-19 CT images. Our pipeline consists of three main subtasks: automatically generating a 3D pseudo-mask in a self-supervised manner using a generative adversarial network (GAN), improving the quality of the pseudo-mask, and building a multi-objective segmentation model to predict lesions. Our proposed 3D GAN architecture removes infected regions from COVID-19 images and generates synthesized healthy images while preserving the 3D structure of the lung. A 3D pseudo-mask is then generated by subtracting the synthesized healthy image from the original COVID-19 CT image. We enhance the pseudo-masks using a contrastive learning approach to build a region-aware segmentation model that focuses on the infected area. The final segmentation model can predict lesions in COVID-19 CT images without any pixel-level manual annotation. We show that our approach outperforms existing state-of-the-art unsupervised and weakly-supervised segmentation techniques on three datasets by a clear margin. In particular, our method improves segmentation of CT images with low infection, increasing sensitivity by 20% and the Dice score by up to 4%. The proposed pipeline overcomes several major limitations of existing unsupervised segmentation approaches and opens up a new horizon for medical image segmentation applications.
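As a rough illustration of the pseudo-mask step described above (a minimal sketch, not the authors' released code; the array layout, the `generator` interface, and the threshold value are all assumptions), the subtraction could look like:

```python
import numpy as np

def pseudo_mask(covid_ct, synth_healthy, threshold=0.1):
    """Derive a binary 3D pseudo-mask by subtracting a GAN-synthesized
    healthy volume from the original COVID-19 CT volume.

    covid_ct, synth_healthy: 3D arrays of shape (D, H, W), intensities in [0, 1].
    threshold: residual intensity above which a voxel is marked infected
               (hypothetical value; the abstract does not specify one).
    """
    residual = covid_ct - synth_healthy          # infected regions leave a positive residual
    return (residual > threshold).astype(np.uint8)

# Usage sketch: `generator` stands in for the trained 3D GAN generator
# and `load_ct` for a hypothetical volume loader.
# volume = load_ct("case_001.nii")
# healthy = generator(volume)                    # synthesized healthy counterpart
# mask = pseudo_mask(volume, healthy)
```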
Keywords: COVID-19, generative adversarial network, infection segmentation, contrastive learning, self-supervised segmentation