• A journal of the China Computer Federation
  • China Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science ›› 2024, Vol. 46 ›› Issue (03): 479-487.

• Graphics and Images •

Self-supervised few-shot medical image segmentation with multi-attention mechanism

YAO Yuan-yuan1, LIU Yu-hang1, CHENG Yu-jing1, PENG Meng-xiao1, ZHENG Wen1,2

  (1. College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China;
    2. Shanxi Engineering Research Centre for Intelligent Data Assisted Treatment,
    Changzhi Medical College, Changzhi 046000, China)
  • Received: 2023-09-01  Revised: 2023-10-19  Accepted: 2024-03-25  Online: 2024-03-25  Published: 2024-03-15

Abstract: Mainstream fully supervised deep learning segmentation models achieve good results when trained on abundant labeled data, but medical image segmentation faces high annotation costs and diverse segmentation targets, so sufficient labeled data are often unavailable. The model proposed in this paper incorporates the self-supervised idea of extracting labels from the data itself, using superpixels to represent image characteristics so that segmentation can be learned from only a few annotated samples. Multiple attention mechanisms are introduced so that the model attends more closely to the spatial features of the image: the position attention module and the channel attention module fuse multi-scale features within a single image, while the external attention module captures the connections between different samples. Experiments were conducted on the CHAOS healthy abdominal organ dataset. In the extreme 1-shot case, the DSC reaches 0.76, about 3% higher than the baseline result. In addition, this paper explores the significance of few-shot learning by varying the N-way-K-shot task settings. Under the 7-shot setting, the DSC improves significantly and comes acceptably close to the performance of fully supervised deep learning segmentation.
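The DSC figures reported above refer to the standard Dice similarity coefficient, DSC = 2|A∩B| / (|A| + |B|), computed between the predicted and ground-truth masks. A minimal NumPy sketch (the function name and example masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred AND target| / (|pred| + |target|), in [0, 1].
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Example: two 2x2 masks that overlap in exactly one pixel
a = [[1, 1], [0, 0]]
b = [[1, 0], [1, 0]]
print(dice_coefficient(a, b))  # 0.5
```

By this measure, a 1-shot DSC of 0.76 means roughly three quarters overlap (weighted by region size) between predicted and reference organ masks.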

Key words: few-shot, attention mechanism, self-supervision, prototype network