Deep reinforcement learning of event-triggered communication and consensus-based control for distributed cooperative transport
【Author】 Shibata, Kazuki; Jimbo, Tomohiko; Matsubara, Takamitsu
【Source】ROBOTICS AND AUTONOMOUS SYSTEMS
【Impact Factor】3.700
【Abstract】In this paper, we present a solution to the design problem of control strategies for multi-agent cooperative transport. Existing learning-based methods assume that the number of agents is the same as that in the training environment; in reality, however, the number may differ, since robots' batteries may fully discharge or additional robots may be introduced to shorten task completion time. It is therefore crucial that the learned strategy remain applicable when the number of agents differs from that in the training environment. In this paper, we propose a novel multi-agent reinforcement learning framework of event-triggered communication and consensus-based control for distributed cooperative transport. The proposed policy model estimates the resultant force and torque in a consensus manner, using the estimates of the resultant force and torque shared by neighboring agents. Moreover, it computes the control and communication inputs, determining when to communicate with neighboring agents, from local observations and the estimates of the resultant force and torque. The proposed framework can thus balance control performance against communication savings in scenarios where the number of agents differs from that in the training environment. We confirm the effectiveness of our approach using up to eight robots in simulation and up to six robots in real-world experiments. (c) 2022 Elsevier B.V. All rights reserved.
【Keywords】Cooperative transport; Multi-agent reinforcement learning; Event-triggered control; Consensus algorithm
【Publication Date】2023 JAN
【Indexed Date】2023-06-28
【Document Type】
【Subject Category】
--
Comments
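The abstract describes two coupled mechanisms: each agent fuses its local estimate of the resultant force and torque with its neighbors' estimates in a consensus manner, and an event trigger decides when to communicate with those neighbors. The minimal Python sketch below illustrates that general pattern only; in the paper both the trigger and the control input are produced by a learned policy, so the fixed consensus weight, the threshold-based trigger, and all variable names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def consensus_step(own_estimate, neighbor_estimates, weight=0.5):
    """One averaging step: blend this agent's force/torque estimate with the
    latest estimates received from neighbors. Each estimate is [Fx, Fy, tau]."""
    if not neighbor_estimates:
        return own_estimate
    neighbor_mean = np.mean(neighbor_estimates, axis=0)
    return (1.0 - weight) * own_estimate + weight * neighbor_mean

def should_communicate(current_estimate, last_sent_estimate, threshold=0.1):
    """Event trigger (hand-set threshold standing in for the learned policy):
    broadcast only when the estimate has drifted from the last shared value."""
    return np.linalg.norm(current_estimate - last_sent_estimate) > threshold

# Illustrative single-agent control cycle.
estimate = np.array([1.0, 0.2, 0.05])      # local estimate of [Fx, Fy, tau]
received = [np.array([0.8, 0.3, 0.04])]    # estimates heard from neighbors
last_sent = np.array([0.9, 0.1, 0.05])     # value this agent last broadcast

estimate = consensus_step(estimate, received)
if should_communicate(estimate, last_sent):
    last_sent = estimate                    # broadcast and record the new value
```

Because each agent only averages with whichever neighbors it currently hears from, this pattern does not depend on a fixed team size, which is consistent with the paper's claim that the learned strategy transfers to agent counts different from training.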