OALib Journal
ISSN: 2333-9721

Multi-AGV Path Planning in Warehousing Based on Multi-Agent Deep Reinforcement Learning

DOI: 10.12677/MOS.2023.126481, PP. 5294-5302

Keywords: Path Planning, MADRL, AGV, Warehousing


Abstract:

With the rapid advancement of industrial automation and the logistics industry, the path planning of Automated Guided Vehicles (AGVs) in logistics warehouses has become a critical component in ensuring transportation efficiency and accuracy. Although numerous strategies have been proposed in recent years, multi-AGV systems still frequently encounter collisions, path conflicts, and control latency in complex logistics environments. In light of this, our study introduces a path planning approach based on Multi-Agent Deep Reinforcement Learning (MADRL), aiming to address the coordination issues among multiple AGVs and to enhance their path planning efficiency. To validate the effectiveness of the proposed method, we conducted comparative experiments against a Genetic Algorithm (GA). The results show that the MADRL-based strategy achieved a 28% improvement in overall transportation efficiency and a marked reduction in collision incidents.
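The abstract describes the MADRL approach only at a high level; the environment layout, state representation, and reward design used in the paper are not reproduced here. As a purely illustrative sketch (not the authors' implementation), the snippet below sets up a toy multi-AGV grid-warehouse environment with a per-step time cost, a conflict penalty, and a goal reward: the kind of interface on which independently acting MADRL agents could be trained. The grid size, reward values, and conflict rule are assumptions made for illustration.

```python
import numpy as np

class MultiAGVGridEnv:
    """Toy warehouse grid: each AGV moves one cell per step toward its own goal.

    Illustrative only -- the layout, observations, and rewards are assumptions,
    not the environment used in the paper.
    """

    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # right, left, down, up, stay

    def __init__(self, size=10, n_agvs=3, seed=0):
        self.size = size
        self.n_agvs = n_agvs
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Draw distinct cells for starting positions and goals.
        cells = self.rng.choice(self.size * self.size, 2 * self.n_agvs, replace=False)
        coords = [divmod(int(c), self.size) for c in cells]
        self.pos = coords[: self.n_agvs]       # current AGV positions
        self.goal = coords[self.n_agvs:]       # pick/drop targets
        return self._obs()

    def _obs(self):
        # Each agent sees its own position and goal plus every other AGV's position.
        return [np.array(self.pos[i] + self.goal[i] + tuple(x for p in self.pos for x in p))
                for i in range(self.n_agvs)]

    def step(self, actions):
        # Proposed next cells, clipped to the grid boundary.
        proposed = []
        for (r, c), a in zip(self.pos, actions):
            dr, dc = self.ACTIONS[a]
            proposed.append((min(max(r + dr, 0), self.size - 1),
                             min(max(c + dc, 0), self.size - 1)))

        # A cell claimed by more than one AGV is a conflict: everyone involved
        # stays put and is penalised (swap/chain conflicts ignored for brevity).
        conflicts = {p for p in proposed if proposed.count(p) > 1}
        rewards, new_pos = [], []
        for i, p in enumerate(proposed):
            collided = p in conflicts
            if collided:
                p = self.pos[i]
            rew = -1.0                              # per-step time cost
            rew += -5.0 if collided else 0.0        # discourage path conflicts
            rew += 20.0 if p == self.goal[i] else 0.0
            new_pos.append(p)
            rewards.append(rew)

        self.pos = new_pos
        done = all(p == g for p, g in zip(self.pos, self.goal))
        return self._obs(), rewards, done


if __name__ == "__main__":
    env = MultiAGVGridEnv()
    obs = env.reset()
    obs, rewards, done = env.step([0, 1, 4])   # one joint action: right, left, stay
    print(rewards, done)
```

A per-agent policy network (for example, a small MLP trained with a Q-learning or actor-critic update) would map each observation vector to one of the five actions, whereas a genetic-algorithm baseline of the kind mentioned in the abstract would typically search over complete routes offline rather than learn a reactive policy.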

