
Research on Attention Improvement Based on YOLOv5s

DOI: 10.12677/CSA.2022.122037, PP. 366-375

Keywords: Deep Learning, Object Detection, YOLOv5s, Attention


Abstract:

With the steady advance of hardware, the computing power of computers has improved greatly, and deep learning, which relies on that computing power, has developed rapidly in turn. As a branch of deep learning, object detection has become an increasingly prominent research area. To meet the requirements of practical deployment and real-time detection, an attention-based improvement to YOLOv5s is proposed. In the same experimental environment and under different improvement conditions, the same dataset is used to train and test YOLOv5s. TensorBoard visualizations of the results show that the proposed improvement clearly raises the accuracy, recall, and mAP of YOLOv5s, bringing the model a step closer to meeting practical requirements.
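The abstract does not specify which attention mechanism is added to YOLOv5s, so the following is only an illustrative sketch: a generic squeeze-and-excitation (SE) style channel-attention block in PyTorch, of the kind commonly inserted into YOLOv5s backbone or neck modules. The class name `SEAttention`, the `reduction` ratio, and the example tensor shape are all assumptions for demonstration, not the paper's implementation.

```python
# Illustrative sketch only: a generic SE-style channel-attention block,
# not the specific attention improvement described in the paper.
import torch
import torch.nn as nn


class SEAttention(nn.Module):
    """Channel attention: re-weights feature-map channels by learned importance."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global spatial average per channel
        self.fc = nn.Sequential(              # excitation: bottleneck MLP producing channel weights
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                           # scale each channel by its learned weight


if __name__ == "__main__":
    # Example: apply attention to a feature map shaped like a YOLOv5s intermediate output
    # (the 1x128x40x40 shape is hypothetical, chosen only for the demo).
    feat = torch.randn(1, 128, 40, 40)
    print(SEAttention(128)(feat).shape)        # torch.Size([1, 128, 40, 40])
```

In YOLOv5s-style ablations, a block like this is typically dropped in after a C3 or Bottleneck module and the model is retrained on the same dataset, so that any change in accuracy, recall, or mAP can be attributed to the attention module alone.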

