Reinforcement Learning for Pass Detection and Generation of Possession Statistics in Soccer

Cited by: 1
Authors
Sarkar, Saikat [1]
Mukherjee, Dipti Prasad [2]
Chakrabarti, Amlan [1]
Affiliations
[1] Univ Calcutta, AK Choudhury Sch Informat Technol, Kolkata 700073, India
[2] Indian Stat Inst, Elect & Commun Sci Unit, Kolkata 700108, India
Keywords
Sports; Task analysis; Games; Reinforcement learning; Decision making; Training; Location awareness; Ball possession statistics; deep recurrent Q-network (DRQN); pass detection; reinforcement learning (RL); soccer;
DOI
10.1109/TCDS.2022.3194103
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose a reinforcement learning (RL)-based technique to detect passes from the video of a soccer match. The detected passes determine the ball possession statistics of the match. A sequence of video frames is mapped to a sequence of states: ball with team A, ball with team B, or ball not possessed by either team. The RL agent learns the frame-to-state mapping and the optimal policy for carrying out this mapping task. We propose a novel reward function that exploits contextual information of the soccer game to help the agent learn the optimal policy. In this setting, the advantage of RL lies in integrating a reward signal into the choice of the action that maps each video frame to one of the three possible states. Unlike competing methods, we design the RL model so that explicit identification of players' team labels is not required. We introduce a deep recurrent Q-network (DRQN) to learn the optimal policy. For efficient training of the DRQN, we propose decorrelated experience replay (DER), a strategy that selects important experiences based on the correlations among the experiences stored in the replay memory. Experimental results show at least 5.75% and 2.1% better accuracy in calculating pass and possession statistics, respectively, compared with similar approaches.
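The abstract names two technical components: a recurrent Q-network that maps per-frame observations to one of the three possession states, and a decorrelated experience replay scheme that samples training experiences according to how correlated they are. The sketch below illustrates both ideas in PyTorch; it is a minimal illustration under our own assumptions, not the authors' implementation, and the feature dimension, hidden size, correlation threshold, and names such as DRQN and select_decorrelated are all hypothetical.

```python
# Hypothetical sketch of (i) a recurrent Q-network over per-frame features and
# (ii) a DER-style decorrelated sampling of replay experiences. Shapes,
# thresholds, and names are assumptions, not the paper's specification.
import numpy as np
import torch
import torch.nn as nn

FEATURE_DIM = 128   # assumed size of the per-frame feature vector
N_STATES = 3        # ball with team A, ball with team B, ball not possessed

class DRQN(nn.Module):
    """LSTM-based Q-network: frame features -> Q-values over the three states."""
    def __init__(self, feature_dim=FEATURE_DIM, hidden_dim=64, n_actions=N_STATES):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, frame_features, hidden=None):
        # frame_features: (batch, seq_len, feature_dim)
        out, hidden = self.lstm(frame_features, hidden)
        return self.q_head(out), hidden  # Q-values: (batch, seq_len, n_actions)

def select_decorrelated(replay_features, batch_size, threshold=0.9):
    """Greedy DER-style sampling: keep an experience only if its feature vector
    is not strongly correlated with the experiences already selected."""
    order = np.random.permutation(len(replay_features))
    selected = []
    for idx in order:
        candidate = replay_features[idx]
        too_correlated = any(
            abs(np.corrcoef(candidate, replay_features[j])[0, 1]) > threshold
            for j in selected
        )
        if not too_correlated:
            selected.append(idx)
        if len(selected) == batch_size:
            break
    return selected

if __name__ == "__main__":
    net = DRQN()
    clip = torch.randn(1, 16, FEATURE_DIM)      # one 16-frame clip of features
    q_values, _ = net(clip)
    greedy_states = q_values.argmax(dim=-1)     # per-frame possession decision
    print(greedy_states.shape)                  # torch.Size([1, 16])

    replay = np.random.randn(200, FEATURE_DIM)  # toy replay memory of features
    batch_idx = select_decorrelated(replay, batch_size=32)
    print(len(batch_idx))
```

In this toy version, the per-frame argmax over Q-values plays the role of the frame-to-state decision, and consecutive state changes between the two team labels would mark candidate passes.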
Pages: 914-924
Number of pages: 11