Application Research of Automatic Grasping System of Industrial Robot Arm Based on Machine Vision in E-Commerce Logistics

DOI: 10.12677/ecl.2024.1341345, pp. 1887-1893

Keywords: Industrial Robot Arm, Automatic Grasping System, E-Commerce Logistics, Deep Learning


Abstract:

With the rapid growth of e-commerce, logistics automation, and in particular efficient and accurate item sorting and grasping, has become crucial. To address the poor adaptability of traditional robot arms in dynamic environments, this paper proposes a machine-vision-based automatic grasping system for industrial robot arms. The system uses deep learning to process image data in real time, accurately extract item features, and compute 6D pose information that guides the arm to perform precise grasps. At its core is a deep neural network that combines depthwise separable convolutions with a U-Net architecture, significantly improving the efficiency and accuracy of feature extraction while reducing the model's parameter count and improving real-time performance. Experiments on the Cornell Grasping Dataset show that the system reaches 98.79% accuracy, demonstrating its potential for e-commerce logistics automation. The system also applies Gaussian filtering and data augmentation, which further improve the model's stability and generalization. Comparative experiments show that, relative to existing methods, the system maintains high accuracy with fewer model parameters, making it better suited to real-time applications.
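
The abstract describes the network only at a high level, so the following is a minimal PyTorch sketch of the stated combination: depthwise separable convolutions (a per-channel 3x3 depthwise convolution followed by a 1x1 pointwise convolution, which is what cuts the parameter count) arranged in a U-shaped encoder-decoder with skip connections. All class names and layer sizes here are hypothetical, and the three pixel-wise output heads (grasp quality, angle encoded as sin/cos, gripper width) follow the common grasp-map convention from work such as GG-CNN, not necessarily the authors' exact design.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a 1x1 pointwise convolution,
    as in Xception; far fewer parameters than a standard convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class GraspUNet(nn.Module):
    """U-shaped encoder-decoder built from depthwise separable blocks,
    with pixel-wise grasp-map heads (quality, angle, gripper width)."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.enc1 = DepthwiseSeparableConv(in_ch, 32)
        self.enc2 = DepthwiseSeparableConv(32, 64, stride=2)
        self.enc3 = DepthwiseSeparableConv(64, 128, stride=2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = DepthwiseSeparableConv(128, 64)   # 64 upsampled + 64 skip
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = DepthwiseSeparableConv(64, 32)    # 32 upsampled + 32 skip
        self.quality = nn.Conv2d(32, 1, kernel_size=1)  # grasp quality map
        self.angle = nn.Conv2d(32, 2, kernel_size=1)    # (sin 2θ, cos 2θ)
        self.width = nn.Conv2d(32, 1, kernel_size=1)    # gripper width map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.quality(d1), self.angle(d1), self.width(d1)

# Usage sketch: for a 300x300 single-channel depth crop (side divisible by 4),
# the model returns three full-resolution grasp maps.
# q, ang, w = GraspUNet()(torch.randn(1, 1, 300, 300))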
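
The abstract credits Gaussian filtering with improving stability. In pixel-wise grasping pipelines, a common place for it is smoothing the predicted quality map before taking the argmax, so that the selected grasp sits on a stable high-quality region rather than an isolated noisy peak. A sketch under that assumption (function name hypothetical; the angle decoding matches the sin/cos head above):

import numpy as np
from scipy.ndimage import gaussian_filter

def select_grasp(quality, angle_sin, angle_cos, width, sigma=2.0):
    """Pick the best grasp pixel from the network's output maps.

    Smoothing the quality map with a Gaussian kernel suppresses isolated
    spurious peaks, so the argmax lands on a stable high-quality region."""
    q = gaussian_filter(quality, sigma=sigma)
    y, x = np.unravel_index(np.argmax(q), q.shape)
    # Recover the grasp angle from its (sin 2θ, cos 2θ) encoding; the factor
    # of 1/2 reflects the 180° symmetry of a parallel-jaw grasp.
    theta = 0.5 * np.arctan2(angle_sin[y, x], angle_cos[y, x])
    return x, y, theta, float(width[y, x])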
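
The abstract also says the system computes 6D pose information to guide the arm. One standard way to lift a 2D grasp detection into a 6D pose, assumed here purely for illustration, is pinhole deprojection of the selected pixel at the measured depth, followed by attaching the predicted in-plane rotation as a top-down grasp in the camera frame. The function and parameter names are hypothetical; fx, fy, cx0, cy0 are camera intrinsics from calibration:

import numpy as np

def grasp_pixel_to_pose(x, y, theta, depth_m, fx, fy, cx0, cy0):
    """Deproject a grasp pixel into a 6D camera-frame pose.

    Returns a 4x4 homogeneous transform for a top-down grasp whose approach
    axis is the camera z-axis and whose in-plane rotation is theta."""
    # Pinhole deprojection of the pixel at the measured depth (meters).
    X = (x - cx0) * depth_m / fx
    Y = (y - cy0) * depth_m / fy
    Z = depth_m
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])  # rotation about the approach axis
    T[:3, 3] = [X, Y, Z]
    return T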
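
The 98.79% figure is reported on the Cornell Grasping Dataset, where accuracy is conventionally measured with the rectangle metric: a prediction counts as correct if its orientation is within 30 degrees of some ground-truth rectangle and their Jaccard index (intersection over union) exceeds 0.25. A sketch of that criterion, assuming grasps are given as (cx, cy, w, h, theta) tuples in pixels and radians:

import numpy as np
from shapely.geometry import Polygon

def rect_corners(cx, cy, w, h, theta):
    """Corner coordinates of a rotated grasp rectangle (center, size, angle)."""
    dx, dy = w / 2.0, h / 2.0
    local = np.array([[-dx, -dy], [dx, -dy], [dx, dy], [-dx, dy]])
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return local @ rot.T + np.array([cx, cy])

def grasp_correct(pred, truth, iou_thresh=0.25, angle_thresh=np.deg2rad(30)):
    """Cornell rectangle criterion: IoU > 0.25 and angle within 30 degrees."""
    # Angle difference modulo pi, since a grasp rectangle is symmetric
    # under a 180-degree rotation.
    d_theta = abs(pred[4] - truth[4]) % np.pi
    d_theta = min(d_theta, np.pi - d_theta)
    if d_theta > angle_thresh:
        return False
    p = Polygon(rect_corners(*pred))
    t = Polygon(rect_corners(*truth))
    union = p.union(t).area
    return union > 0 and p.intersection(t).area / union > iou_thresh

# Usage sketch: a slightly shifted prediction still counts as correct.
# grasp_correct((50, 60, 40, 20, 0.1), (52, 58, 38, 22, 0.0))  -> True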

