A No-Reference CT Image Quality Assessment Model Based on Deep Learning

DOI: 10.12677/mos.2025.143214, PP. 186-198

Keywords: NR-IQA, Clinical Chest CT Images, Vision Transformer, Knowledge Distillation


Abstract:

No-reference CT image quality assessment (NR-IQA) aims to establish an objective image quality evaluation system that is highly consistent with radiologists' subjective assessments. Because many clinical CT image datasets lack actual IQA scores, this paper proposed and validated an NR-IQA model based on deep learning. The proposed model integrated convolutional neural network (CNN) modules with Vision Transformer (ViT) modules and trained an ensemble of four CNN-ViT networks as teacher models to simulate radiologists' repeated subjective IQA process. A knowledge distillation framework was then employed to transfer knowledge from the teacher models to a student model (a single CNN-ViT network). Two objective metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), were combined to evaluate CT image quality and to annotate a clinical chest CT image dataset used to validate the proposed model. The proposed NR-IQA model achieved an overall performance score of 2.8070, with a Pearson linear correlation coefficient (PLCC) of 0.9916, a Spearman rank-order correlation coefficient (SROCC) of 0.9683, a Kendall rank correlation coefficient (KRCC) of 0.8471, a mean absolute error (MAE) of 0.0259, and a mean squared error (MSE) of 0.0010, demonstrating its accuracy in predicting CT image quality scores.
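The abstract does not spell out how PSNR and SSIM are fused into a single annotation score. As a point of reference, the sketch below shows one common way to build such a pseudo-label, assuming paired reference and degraded slices and an equal-weight fusion of a capped, normalized PSNR with SSIM. The function name, the PSNR cap, and the 50/50 weighting are illustrative assumptions, not the authors' published procedure.

```python
# Hypothetical sketch: pseudo-labeling CT slices with a fused PSNR/SSIM score.
# Assumes paired (reference, degraded) slices; the fusion rule is an assumption.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_label(reference: np.ndarray, degraded: np.ndarray,
                  data_range: float = 1.0, psnr_cap: float = 50.0) -> float:
    """Fuse PSNR and SSIM into a single score in [0, 1] (illustrative only)."""
    psnr = peak_signal_noise_ratio(reference, degraded, data_range=data_range)
    ssim = structural_similarity(reference, degraded, data_range=data_range)
    psnr_norm = np.clip(psnr / psnr_cap, 0.0, 1.0)   # normalize PSNR to [0, 1]
    return 0.5 * psnr_norm + 0.5 * ssim              # equal weighting (assumed)
```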
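The teacher-ensemble-to-student distillation described above can be illustrated with a minimal PyTorch-style training step, assuming the four teachers and the student are regression networks mapping a CT slice to a scalar quality score. The MSE loss terms and the alpha weighting are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical distillation step: ensemble of frozen teacher regressors -> one student.
# Network definitions, alpha weighting, and the MSE losses are illustrative assumptions.
import torch
import torch.nn as nn

def distillation_step(student: nn.Module, teachers: list, images: torch.Tensor,
                      labels: torch.Tensor, optimizer: torch.optim.Optimizer,
                      alpha: float = 0.5) -> float:
    """One training step: the student matches both the PSNR/SSIM label and the
    mean prediction of the frozen teacher ensemble."""
    with torch.no_grad():  # teachers are frozen during distillation
        teacher_pred = torch.stack([t(images) for t in teachers]).mean(dim=0)
    student_pred = student(images)
    loss_label = nn.functional.mse_loss(student_pred, labels)          # ground-truth term
    loss_distill = nn.functional.mse_loss(student_pred, teacher_pred)  # distillation term
    loss = alpha * loss_label + (1.0 - alpha) * loss_distill
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```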
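The five reported evaluation metrics (PLCC, SROCC, KRCC, MAE, MSE) are standard and can be computed directly from predicted and reference quality scores; a small sketch using SciPy and NumPy is shown below.

```python
# Standard IQA evaluation metrics; scipy.stats provides the three correlations.
import numpy as np
from scipy import stats

def iqa_metrics(pred: np.ndarray, target: np.ndarray) -> dict:
    """Return PLCC, SROCC, KRCC, MAE, and MSE between predicted and reference scores."""
    plcc, _ = stats.pearsonr(pred, target)    # Pearson linear correlation
    srocc, _ = stats.spearmanr(pred, target)  # Spearman rank-order correlation
    krcc, _ = stats.kendalltau(pred, target)  # Kendall rank correlation
    mae = float(np.mean(np.abs(pred - target)))
    mse = float(np.mean((pred - target) ** 2))
    return {"PLCC": plcc, "SROCC": srocc, "KRCC": krcc, "MAE": mae, "MSE": mse}
```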

