Reinforcement learning for facilitating human-robot-interaction in manufacturing

Cited by: 75
Authors
Oliff, Harley [1 ]
Liu, Ying [1 ]
Kumar, Maneesh [2 ]
Williams, Michael [3 ]
Ryan, Michael [1 ]
Affiliations
[1] Cardiff Univ, Sch Engn, Inst Mech & Mfg Engn, Cardiff CF24 3AA, Wales
[2] Cardiff Univ, Cardiff Business Sch, Cardiff CF10 3EU, Wales
[3] Olympus Surg Technol Europe, Cardiff, Wales
Keywords
Intelligent manufacturing; Reinforcement learning; Human-robot interaction; Human factors; Adaptability; CYBER-PHYSICAL SYSTEMS; INDUSTRY 4.0; ARCHITECTURE; FUTURE; COLLABORATION; INTELLIGENCE; FATIGUE
DOI
10.1016/j.jmsy.2020.06.018
Chinese Library Classification (CLC)
T [Industrial Technology]
Subject Classification Code
08
Abstract
Autonomous robotic operators have become ubiquitous in many contemporary manufacturing processes. Nevertheless, the number of human operators within these processes remains high, and the number of interactions between humans and robots has consequently increased. This is problematic, because humans introduce disturbance and unpredictability into these processes in the form of performance variation. Despite the natural human aptitude for flexibility, this variation makes modelling and optimisation of such systems considerably more challenging, and in many cases impossible. Improving the ability of robotic operators to adapt their behaviour to variations in human task performance is therefore a significant challenge that must be overcome before many ideas in the wider intelligent manufacturing paradigm can be realised. This work presents a methodology for effectively modelling these systems, together with a reinforcement learning agent capable of autonomous decision-making. This decision-making gives robotic operators greater adaptability by enabling their behaviour to change in response to observed information about both the environment and human colleagues. The work extends theoretical knowledge of how learning methods can be implemented for robotic control, and of how the capabilities they enable may be leveraged to improve the interaction between robots and their human counterparts. It further presents a novel methodology for implementing a reinforcement learning-based intelligent agent that changes the behavioural policy of robotic operators in response to performance variation in their human colleagues. The development and evaluation are supported by a generalised simulation model, parameterised to produce appropriate variation in human performance. The evaluation demonstrates that the reinforcement learning agent can learn to adjust its behaviour based on knowledge extracted from observed information, and can balance the task demands to optimise these adjustments.
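To illustrate the kind of mechanism the abstract describes, the following is a minimal, self-contained sketch of a tabular Q-learning agent that adjusts a robot's work pace in response to an observed, discretised human-performance state. It is an assumption-laden toy example, not the authors' implementation: the state discretisation, the random-drift stand-in for the paper's parameterised simulation model, and the mismatch-penalty reward are all hypothetical choices made purely for illustration.

    # Hypothetical sketch only: tabular Q-learning for pace adaptation.
    import random

    STATES = range(5)       # discretised human performance level (0 = slow ... 4 = fast)
    ACTIONS = (-1, 0, +1)   # robot pace adjustment: slow down, hold, speed up

    def simulate_human(state):
        """Toy stand-in for a parameterised human-performance model:
        performance drifts randomly between adjacent levels."""
        return min(max(state + random.choice((-1, 0, 1)), 0), len(STATES) - 1)

    def reward(state, action):
        """Hypothetical reward: penalise mismatch between the robot's pace
        change and the human's current performance level."""
        return -abs(action - (state - 2))

    def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
        q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
        state = 2
        for _ in range(episodes):
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            r = reward(state, action)
            next_state = simulate_human(state)
            # one-step Q-learning update
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
            state = next_state
        return q

    if __name__ == "__main__":
        q_table = train()
        for s in STATES:
            best = max(ACTIONS, key=lambda a: q_table[(s, a)])
            print(f"observed human state {s}: preferred pace adjustment {best:+d}")

Running the sketch prints a learned mapping from observed human-performance level to a preferred pace adjustment, which is the basic shape of the behavioural-policy change the paper evaluates, albeit realised there with a far richer simulation model and agent.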
Pages: 326-340
Number of pages: 15