%0 Journal Article
%T Medical Image Segmentation Model Based on Improved Mamba
%A 高博艺
%A 丁学明
%A 胡鸿翔
%A 丁雪峰
%J Modeling and Simulation
%P 515-523
%@ 2324-870X
%D 2025
%I Hans Publishing
%R 10.12677/mos.2025.143242
%X In medical image segmentation tasks, traditional U-shaped networks often exhibit poor accuracy when handling the complex structures and fine details of bladder tumor and retinal fundus MRI images. To address this, this study proposes Akmamba-Net, an improved U-Net model. The model integrates the AKConv and Mamba-out modules, effectively enhancing its feature extraction capability. The AKConv module improves the network's adaptability and flexibility by introducing convolution operations with a spatial resampling mechanism, particularly when handling irregular tumor boundaries. The Mamba-out module further improves segmentation accuracy by optimizing feature fusion and enriching detail information. Experimental results show that Akmamba-Net achieves Precision, Dice coefficient, and IoU scores of 97.2%, 82.5%, and 71.9% on retinal fundus segmentation and 89.64%, 89.98%, and 81.75% on bladder tumor segmentation, respectively. Compared with U-Net and other mainstream models, it significantly improves segmentation accuracy on both tasks, meeting the needs of medical image segmentation.
%K U-Net
%K Mamba-Out
%K AKConv
%K Residual Network
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=110205