Objective: Renal dynamic imaging is an important tool for assessing renal function and is commonly used to evaluate renal perfusion and excretion. In clinical diagnosis, accurate segmentation of the kidney regions is crucial for subsequent quantitative analysis and functional assessment, yet delineation of the kidneys in renal dynamic images still relies on manual work. The purpose of this study is to construct an automated, accurate segmentation model for the bilateral kidney regions in renal dynamic imaging. Methods: This paper proposes an automated, accurate segmentation algorithm based on a non-local triple attention UNet architecture. The algorithm combines a deep convolutional neural network with a non-local triple attention mechanism to perform feature extraction and multi-scale fusion on renal dynamic images, achieving accurate segmentation of the bilateral kidney regions. Results: In comparative experiments on a renal dynamic imaging dataset, the proposed model outperforms the standard UNet and Attention UNet segmentation algorithms on metrics including mean Intersection over Union (mIoU) and mean Pixel Accuracy (mPA). Conclusion: The proposed model automatically and accurately segments the bilateral kidney regions in renal dynamic images, demonstrating its effectiveness and robustness for automated segmentation of renal dynamic imaging.
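The core of the non-local mechanism mentioned above is that each spatial position aggregates features from all other positions, capturing long-range dependencies that standard convolutions miss. The following is a minimal NumPy sketch of the embedded-Gaussian non-local operation (in the style of Wang et al., 2018); the function and weight names are illustrative and do not reflect the paper's actual implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, w_theta, w_phi, w_g, w_out):
    """Embedded-Gaussian non-local operation on a flattened feature map.

    x: array of shape (N, C), where N = H*W spatial positions, C channels.
    w_theta, w_phi, w_g: (C, C') bottleneck projections; w_out: (C', C).
    Every position attends to every other position, so the output at one
    pixel can depend on features anywhere in the image.
    """
    theta = x @ w_theta                      # queries, (N, C')
    phi = x @ w_phi                          # keys,    (N, C')
    g = x @ w_g                              # values,  (N, C')
    attn = softmax(theta @ phi.T, axis=-1)   # (N, N) pairwise affinities
    y = attn @ g                             # aggregate over all positions
    return x + y @ w_out                     # residual connection

# toy example: 16 spatial positions, 8 channels, bottleneck of 4
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))
w = [rng.standard_normal(s) * 0.1 for s in [(8, 4), (8, 4), (8, 4), (4, 8)]]
out = non_local_block(x, *w)
print(out.shape)  # (16, 8): same shape as the input feature map
```

Because of the residual connection, such a block can be inserted between encoder or decoder stages of a UNet without changing feature-map shapes.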