Research on Rice Seed Recognition Method Based on Image Depth Feature

DOI: 10.12677/SEA.2022.116130, PP. 1272-1281

Keywords: Rice Seed Identification, Deep Learning, Convolutional Neural Networks, Support Vector Machines


Abstract:

With the increasing number of rice varieties, identifying rice seeds has become a difficult problem in the industry. Machine vision offers better accuracy and speed than manual identification of rice seeds and has therefore become a trend. To address the limited capability, low recognition accuracy, and slow speed of traditional image-feature recognition methods, this paper proposes a convolutional neural network (CNN) with L2 regularization and dropout that transforms the originally linearly non-separable image depth features into linearly separable data, which are then classified by a support vector machine (SVM) in place of the traditional CNN softmax classifier. Finally, ten commonly used CNN models combined with the SVM are evaluated in terms of accuracy, sensitivity, specificity, false positive rate, F1 score, and testing time to obtain the rice seed recognition results. Experiments show that the method achieves 99.8% accuracy in rice seed recognition with a recognition time as low as 0.07 s per image, and it can be used for the recognition and classification of mixed rice seeds.
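
As a rough illustration of the pipeline described in the abstract, the sketch below builds a CNN feature extractor with L2 regularization and dropout, swaps its softmax head for an SVM, and reports the evaluation metrics listed above. This is a minimal example assuming TensorFlow/Keras and scikit-learn; the backbone architecture, layer sizes, and SVM hyperparameters are illustrative placeholders, not the paper's actual configuration (the paper evaluates ten commonly used CNN backbones combined with the SVM).

import time
import numpy as np
from tensorflow.keras import Model, layers, regularizers
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from sklearn.svm import SVC

NUM_CLASSES = 5              # assumed number of rice seed varieties
INPUT_SHAPE = (224, 224, 3)  # assumed input image size

def build_cnn(num_classes=NUM_CLASSES):
    # Small CNN with L2 regularization and dropout; the paper instead reuses
    # ten commonly used backbones, so treat this architecture as a placeholder.
    inputs = layers.Input(shape=INPUT_SHAPE)
    x = inputs
    for filters in (32, 64, 128):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu",
                          kernel_regularizer=regularizers.l2(1e-4))(x)
        x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.5)(x)
    features = layers.Dense(256, activation="relu",
                            kernel_regularizer=regularizers.l2(1e-4),
                            name="feature")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(features)
    return Model(inputs, outputs)

# 1) Train the CNN end-to-end with its softmax head (data loading omitted).
cnn = build_cnn()
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
# cnn.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=30)

# 2) Drop the softmax head; keep the penultimate layer as a deep-feature extractor.
extractor = Model(cnn.input, cnn.get_layer("feature").output)

def deep_features(images):
    return extractor.predict(images, verbose=0)

# 3) Fit an SVM on the extracted (now more nearly linearly separable) features,
#    replacing the softmax classifier. Kernel and C are assumed values.
# svm = SVC(kernel="rbf", C=10.0)
# svm.fit(deep_features(x_train), y_train)

# 4) Evaluate accuracy, sensitivity, specificity, false positive rate,
#    F1 score, and per-image test time, as in the paper's comparison.
def evaluate(svm, x_test, y_test):
    start = time.time()
    y_pred = svm.predict(deep_features(x_test))
    per_image_s = (time.time() - start) / len(x_test)

    cm = confusion_matrix(y_test, y_pred)
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - (tp + fn + fp)
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "sensitivity": np.mean(tp / (tp + fn)),        # macro-averaged recall
        "specificity": np.mean(tn / (tn + fp)),
        "false_positive_rate": np.mean(fp / (fp + tn)),
        "f1": f1_score(y_test, y_pred, average="macro"),
        "test_time_per_image_s": per_image_s,
    }

Training with the softmax head first lets the regularized deep features settle before the head is swapped for the SVM; using a pretrained backbone or a different SVM kernel would follow the same pattern.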
