
Computer Engineering & Science (计算机工程与科学) ›› 2021, Vol. 43 ›› Issue (07): 1273-1282.

• Graphics and Images •

SAG-Net: A new skip attention guided network for joint disc and cup segmentation

JIANG Yun, GAO Jing, WANG Fa-lin

  1. (College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China)
  • Received: 2020-03-10  Revised: 2020-06-30  Accepted: 2021-07-25  Online: 2021-07-25  Published: 2021-08-17
  • Supported by:
    National Natural Science Foundation of China (61962054, 61163036); Natural Science Foundation of the 2016 Gansu Provincial Science and Technology Plan (1606RJZA047); 2012 Special Fund for Basic Scientific Research of Gansu Provincial Universities; Gansu Provincial University Postgraduate Supervisor Project (1201-16); Third-Phase Knowledge and Innovation Engineering Research Backbone Project of Northwest Normal University (nwnu-kjcxgc-03-67)

Abstract: Learning the semantic and location information of feature maps is essential for producing ideal results in retinal image segmentation. Recently, convolutional neural networks have shown a strong ability to extract useful information from feature maps; however, convolution and pooling operations filter out some of this information. This paper proposes a new skip attention guided network (SAG-Net) that preserves the semantic and location information of feature maps and guides the extension path. In SAG-Net, the skip attention gate (SAtt) module is first introduced; used as a sensitive extension path, it passes on the semantic and location information of earlier feature maps, which not only helps eliminate noise but also further reduces the negative effect of the background. Second, SAG-Net is further optimized by merging image pyramids to preserve contextual features. The joint optic disc and cup segmentation task on the Drishti-GS1 dataset demonstrates the effectiveness of SAG-Net. Comprehensive results show that SAG-Net outperforms the original U-Net as well as other recent methods for optic disc and cup segmentation.
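
To make the skip-attention idea concrete, the following is a minimal PyTorch-style sketch of an attention gate placed on a U-Net skip connection. The additive-gate design, the class name SkipAttentionGate, and the channel sizes are illustrative assumptions (borrowed from the Attention U-Net family), not the paper's actual SAtt implementation.

import torch
import torch.nn as nn


class SkipAttentionGate(nn.Module):
    """Re-weights encoder (skip) features using the decoder (gating) signal."""

    def __init__(self, skip_channels: int, gate_channels: int, inter_channels: int):
        super().__init__()
        # 1x1 convolutions project both inputs into a common intermediate space.
        self.theta = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)  # per-pixel attention coefficient
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # Assumes skip and gate share the same spatial size; upsample the gate first otherwise.
        att = self.sigmoid(self.psi(self.relu(self.theta(skip) + self.phi(gate))))
        return skip * att  # background regions are suppressed, informative locations kept


# Usage: gate a 64-channel skip map with a 128-channel decoder map of the same size.
sat = SkipAttentionGate(skip_channels=64, gate_channels=128, inter_channels=32)
out = sat(torch.randn(1, 64, 128, 128), torch.randn(1, 128, 128, 128))  # -> (1, 64, 128, 128)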


Key words: convolutional neural network, image segmentation, skip attention gate, extension path
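
The image-pyramid optimization mentioned in the abstract can be sketched in a similarly hedged way: rescaled copies of the input image are concatenated with encoder feature maps at matching scales so that raw contextual information is preserved at every resolution. The function name merge_image_pyramid, the bilinear rescaling, and the fusion point are assumptions for illustration, not the paper's exact construction.

import torch
import torch.nn.functional as F


def merge_image_pyramid(image: torch.Tensor, encoder_features: list) -> list:
    """Concatenate a rescaled copy of the input image onto each encoder feature map."""
    fused = []
    for feat in encoder_features:
        # Rescale the image to this stage's spatial size, then append it as extra channels.
        scaled = F.interpolate(image, size=feat.shape[-2:], mode="bilinear", align_corners=False)
        fused.append(torch.cat([feat, scaled], dim=1))
    return fused


# Usage: a 3-channel fundus image with encoder stages at 1/2 and 1/4 resolution.
img = torch.randn(1, 3, 256, 256)
feats = [torch.randn(1, 64, 128, 128), torch.randn(1, 128, 64, 64)]
fused = merge_image_pyramid(img, feats)  # channel counts become 67 and 131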