SAM2-Based Few-Shot Segmentation of Skeletal Muscle in CT Images
Abstract:
Sarcopenia has numerous adverse effects on patients' health, and muscle mass measurement is one of the key methods for diagnosing it. However, current manual measurement methods are inefficient, which limits their widespread clinical use. This study proposes an automatic segmentation method suited to both clinical diagnosis and medical research on sarcopenia, achieving high-precision skeletal muscle segmentation with only a small amount of training data. The method first fine-tunes the YOLOv10 and SAM2 models on a small dataset; YOLOv10 then produces the bounding-box coordinates for each sample, which are used as prompts for SAM2 to segment the skeletal muscle. Experimental results show that the method achieves a DSC of 0.9025 on the segmentation task, outperforming both a conventional U-Net architecture (0.8763) and a fixed rectangular region used as the prompt (0.8887). In addition, validation on the test dataset indicates that 85% of the segmentation results fall within the clinically acceptable range. In conclusion, the proposed method demonstrates high accuracy and reliability in few-shot automatic segmentation of skeletal muscle and has potential clinical value.
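For readers who want a concrete picture of the pipeline, the following is a minimal sketch of the two-stage, box-prompted approach described above: a fine-tuned YOLOv10 detector proposes a bounding box around the skeletal muscle in a CT slice, and that box is passed as the prompt to a SAM2 image predictor; the Dice similarity coefficient (DSC) used for evaluation is also shown. The Ultralytics YOLO interface and the official sam2 SAM2ImagePredictor API are assumed here, and the checkpoint names (yolov10_muscle.pt, and the public facebook/sam2-hiera-large placeholder standing in for the fine-tuned SAM2 weights) are hypothetical; this is an illustrative sketch, not the authors' released code.

```python
# Minimal sketch (assumed APIs, hypothetical checkpoint names; see note above):
# stage 1 detects a box around the skeletal muscle with a fine-tuned YOLOv10,
# stage 2 feeds that box to SAM2 as the prompt and returns a binary mask.
import numpy as np
import torch
from PIL import Image
from ultralytics import YOLO                       # Ultralytics interface; also loads YOLOv10 weights
from sam2.sam2_image_predictor import SAM2ImagePredictor


def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0


# Hypothetical fine-tuned detector weights; the SAM2 predictor is loaded from the
# public pretrained checkpoint here, where fine-tuned weights would be used in practice.
detector = YOLO("yolov10_muscle.pt")
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")


def segment_slice(image_path: str) -> np.ndarray:
    """Return a binary skeletal-muscle mask for one exported CT slice."""
    image = np.array(Image.open(image_path).convert("RGB"))

    # Stage 1: detect the muscle region; keep the highest-confidence box (x1, y1, x2, y2).
    det = detector(image, verbose=False)[0]
    if len(det.boxes) == 0:
        return np.zeros(image.shape[:2], dtype=bool)
    best = int(det.boxes.conf.argmax())
    box = det.boxes.xyxy[best].cpu().numpy()

    # Stage 2: use the detected box as the SAM2 prompt.
    with torch.inference_mode():
        predictor.set_image(image)
        masks, _, _ = predictor.predict(box=box, multimask_output=False)
    return masks[0].astype(bool)
```

Using a per-sample detected box as the prompt, rather than a fixed rectangle, lets the prompt follow anatomical variation across patients, which is consistent with the DSC improvement reported in the abstract (0.9025 vs. 0.8887).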