
MFFNet: Multi-Scale Feature Effective Fusion Network for Polyp Segmentation

DOI: 10.12677/pm.2025.151023, PP. 198-210

Keywords: Polyp Segmentation, Multi-Scale Feature, Feature Fusion, Attention Mechanism


Abstract:

Accurate segmentation of polyps is important in the management of colorectal cancer. Although existing methods have achieved good segmentation results, some challenges remain. To this end, we propose a new Multi-Scale Feature Effective Fusion Network (MFFNet) for accurate polyp segmentation. Specifically, considering the large variation in polyp size, we use an improved Pvt-v2 as the encoder (TC encoder) to extract rich multi-scale features. A Channel-Spatial Module (CSM) is then applied to suppress background information and prevent information redundancy. To fuse the multi-scale features effectively, we propose a Fusion Attention Block (FAB), which fully learns the contextual correlations between multi-level features to further pinpoint the polyp region. Experiments on five public datasets show that MFFNet has better learning and generalization capabilities than competing methods.
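The abstract describes the pipeline only at a high level: a TC encoder built on Pvt-v2 extracts multi-scale features, a CSM suppresses background responses at each level, and an FAB fuses multi-level features before prediction. The PyTorch sketch below illustrates one plausible reading of that design under stated assumptions: a toy four-stage convolutional backbone stands in for the TC encoder, CSM is interpreted as CBAM-style channel-plus-spatial attention, and FAB as attention-weighted fusion of an upsampled deep feature with a shallower one. All class names, layer choices, and hyperparameters here are illustrative and are not taken from the paper.

# Hedged sketch of a multi-scale feature-fusion polyp segmenter in the spirit of
# the MFFNet description (encoder -> CSM -> FAB -> prediction head).
# Module internals are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelSpatialModule(nn.Module):
    """CBAM-style stand-in for the paper's CSM: channel attention followed by
    spatial attention, used here to suppress background responses."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)           # channel gate
        s = torch.cat([x.mean(1, keepdim=True), x.max(1, keepdim=True)[0]], dim=1)
        return x * torch.sigmoid(self.spatial(s))                   # spatial gate


class FusionAttentionBlock(nn.Module):
    """Illustrative FAB: upsample the deeper feature, concatenate it with the
    shallower one, and re-weight the fused tensor with a learned attention map."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.attn = nn.Conv2d(channels, channels, 1)

    def forward(self, shallow, deep):
        deep = F.interpolate(deep, size=shallow.shape[2:], mode='bilinear',
                             align_corners=False)
        fused = self.fuse(torch.cat([shallow, deep], dim=1))
        return fused * torch.sigmoid(self.attn(fused))


class MFFNetSketch(nn.Module):
    """Toy four-stage conv backbone standing in for the Pvt-v2-based TC encoder,
    followed by CSM on each level and top-down fusion with FABs."""
    def __init__(self, width=32, num_classes=1):
        super().__init__()
        chs = [width, width * 2, width * 4, width * 8]
        self.stages = nn.ModuleList()
        in_c = 3
        for c in chs:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_c, c, 3, stride=2, padding=1),
                nn.BatchNorm2d(c), nn.ReLU(inplace=True)))
            in_c = c
        self.csm = nn.ModuleList([ChannelSpatialModule(c) for c in chs])
        self.reduce = nn.ModuleList([nn.Conv2d(c, width, 1) for c in chs])
        self.fab = nn.ModuleList([FusionAttentionBlock(width) for _ in range(3)])
        self.head = nn.Conv2d(width, num_classes, 1)

    def forward(self, x):
        feats = []
        for stage, csm, red in zip(self.stages, self.csm, self.reduce):
            x = stage(x)
            feats.append(red(csm(x)))           # multi-scale features, denoised
        out = feats[-1]
        for i in range(2, -1, -1):              # top-down fusion: deep -> shallow
            out = self.fab[i](feats[i], out)
        return self.head(out)                   # logits at 1/2 input resolution


if __name__ == "__main__":
    model = MFFNetSketch()
    mask_logits = model(torch.randn(1, 3, 352, 352))
    print(mask_logits.shape)  # torch.Size([1, 1, 176, 176])

In practice the toy backbone would be replaced by a pretrained Pvt-v2 (for example, the pvt_v2 variants distributed with timm), and deep supervision on intermediate fused maps could be added, as is common in the cited polyp-segmentation literature.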

