基于改进YOLOX的茶叶嫩芽目标检测研究
Research on Tea Sprouts Object Detection Based on Improved YOLOX

DOI: 10.12677/SEA.2022.116144, PP. 1404-1414

Keywords: Tea Sprouts, YOLOX, Object Detection


Abstract:

The tea industry is an important part of China's import and export trade, and tea, with its long cultural heritage, is closely tied to everyday life in China. To meet the demand for intelligent picking of premium tea sprouts, this paper first builds a dataset of tea sprouts photographed in natural environments, and then proposes YOLOX-ST, an improved YOLOX detection model based on Swin-Transformer. The model replaces the original YOLOX backbone with Swin-Transformer, which raises overall detection accuracy, and introduces the CBAM attention mechanism to reduce missed and false detections against complex backgrounds. Experimental results show that the model reaches an mAP of 79.12%, 5.2% higher than the original YOLOX, and a precision of 90.45%, 4.62% higher than the original model. Compared with YOLOv3, YOLOv4, and YOLOv5, YOLOX-ST improves mAP and precision by up to 7.09% and 6.43%, respectively, showing good detection accuracy and generalization ability. The model thus lays a solid foundation for the intelligent picking of premium tea sprouts.
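
The CBAM block referred to in the abstract applies channel attention followed by spatial attention to a convolutional feature map. Below is a minimal PyTorch sketch of that mechanism, for illustration only: the reduction ratio of 16, the 7x7 spatial kernel, and the example feature-map shape are assumptions, and this is not the authors' YOLOX-ST code.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to both pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Squeeze spatial dimensions with average- and max-pooling,
        # pass both through the shared MLP, and fuse with a sigmoid gate.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool along the channel axis, then learn a 2-D attention map.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Channel attention followed by spatial attention (Woo et al., 2018)."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.spatial(self.channel(x))

# Example: refine a hypothetical 256-channel feature map before a detection head.
feat = torch.randn(1, 256, 40, 40)
refined = CBAM(256)(feat)

In a detector such as YOLOX, a block like this would typically be applied to backbone or neck feature maps before the detection head; the exact placement in YOLOX-ST is described in the full paper.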

