Research on an Intelligent Orthokeratology Lens Fitting Method Based on an Improved ResNet50
Abstract:
Orthokeratology lens fitting demands extensive clinical experience from optometrists, which limits access for many patients. This study proposes an intelligent fitting algorithm for Lucid orthokeratology lenses, based on an improved ResNet50 network, that works directly from color corneal topography images. Building on ResNet50, a Global Attention Mechanism (GAM) captures features along three dimensions (channel, spatial width, and spatial height), strengthening the network's ability to classify topography images with respect to the three lens parameters; Class Activation Map (CAM) heat maps are used to visualize the corneal topography features the model attends to. Through this approach, the study aims to provide users with a more accurate and personalized orthokeratology fitting scheme. The proposed ResNet50-GAM model achieved classification accuracies of 89.2%, 86.6%, and 79.1% on the three main lens parameters: lens diameter (D), toricity (CP), and flat lens measurement reading (lens curvature radius), respectively. The heat maps show that the regions the ResNet50-GAM model attends to during classification largely coincide with those examined by optometrists. The proposed ResNet50-GAM model can be deployed in remote areas, in low- and middle-income countries, and in settings with limited laboratory equipment, helping to overcome the shortage of optometrists and to increase the uptake of orthokeratology lenses.
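The GAM block described above can be sketched as follows. This is a minimal illustration following the commonly published formulation of GAM (channel attention via an MLP over the channel dimension, followed by spatial attention via two 7×7 convolutions); the reduction rate, the insertion point after ResNet50's last stage, and the class counts shown are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class GAM(nn.Module):
    """Global Attention Mechanism: sequential channel attention
    followed by spatial attention, preserving the input shape."""

    def __init__(self, channels: int, rate: int = 4):
        super().__init__()
        hidden = channels // rate
        # Channel attention: an MLP that mixes information across channels.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
        )
        # Spatial attention: 7x7 convolutions that squeeze and then
        # restore the channel dimension, attending over width and height.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=7, padding=3),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=7, padding=3),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Permute to (N, H*W, C) so the MLP operates on the channel axis.
        attn = x.permute(0, 2, 3, 1).reshape(n, h * w, c)
        attn = self.channel_mlp(attn).reshape(n, h, w, c).permute(0, 3, 1, 2)
        x = x * torch.sigmoid(attn)          # channel-attended features
        return x * torch.sigmoid(self.spatial(x))  # spatially attended


if __name__ == "__main__":
    # Shape check: GAM leaves the feature-map dimensions unchanged.
    feats = torch.randn(2, 64, 16, 16)
    out = GAM(64)(feats)
    print(out.shape)  # torch.Size([2, 64, 16, 16])
```

In the setting the abstract describes, one would insert `GAM(2048)` after the final residual stage of a `torchvision` ResNet50 and replace its fully connected head with a classifier per lens parameter (D, CP, and the flat measurement reading); the CAM heat maps would then be computed from the attended feature maps and the classifier weights in the standard way.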