A methodology for semantic action recognition based on pose and human-object interaction in avocado harvesting processes

Cited: 12
Authors
Vasconez, J. P. [1]
Admoni, H. [2 ]
Auat Cheein, F. [3 ]
Affiliations
[1] Escuela Politec Nacl, Artificial Intelligence & Comp Vis Res Lab, Quito 170517, Ecuador
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Univ Tecn Federico Santa Maria, Dept Elect Engn, Valparaiso, Chile
Keywords
Semantic human action recognition; Human-object interaction; Avocado harvesting process; Human-machine collaboration; AGRICULTURE; PRODUCTS;
DOI
10.1016/j.compag.2021.106057
Chinese Library Classification
S [Agricultural Sciences]
Subject classification code
09
Abstract
The agricultural industry could greatly benefit from an intelligent system capable of supporting field workers to increase production. Such a system would need to monitor human workers, their current actions, their intentions, and possible future actions, which are the focus of this work. Herein, we propose and validate a methodology to recognize human actions during the avocado harvesting process on a Chilean farm, based on combined object-pose semantic information from RGB still images. We use Faster R-CNN (Region-based Convolutional Neural Network) with an Inception V2 backbone for object detection to recognize 17 categories, including field workers, tools, crops, and vehicles. Then, we use a convolutional 2D pose estimation method, OpenPose, to detect 18 human skeleton joints. Both the object and the pose features are processed, normalized, and combined into a single feature vector. We test four classifiers (Support Vector Machine, Decision Trees, K-Nearest-Neighbour, and Bagged Trees) on the combined object-pose feature vectors to evaluate action classification performance. We also evaluate the four classifiers after applying principal component analysis to reduce dimensionality. Accuracy and inference time are analyzed for all classifiers on 10 action categories related to the avocado harvesting process. The results show that it is possible to detect human actions during harvesting, with average accuracy (across all action categories) ranging from 57% to 99%, depending on the classifier. Such recognition can support intelligent systems, e.g. robots interacting with field workers, aimed at increasing productivity.
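The classification stage described in the abstract (combined object-pose feature vector, optional PCA, then a classifier) can be sketched as below. This is a minimal illustration with scikit-learn, not the authors' implementation: the feature extractors (Faster R-CNN, OpenPose) are replaced with random stand-in vectors, and the feature dimensions and classifier settings are assumptions.

```python
# Hedged sketch of the combined object-pose action classification stage.
# Real object/pose features would come from Faster R-CNN and OpenPose;
# here they are mocked with random vectors of plausible dimensionality.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200
obj_features = rng.random((n_samples, 17))    # assumed: per-category detection scores (17 object classes)
pose_features = rng.random((n_samples, 36))   # assumed: 18 skeleton joints x (x, y) coordinates
X = np.hstack([obj_features, pose_features])  # combined object-pose feature vector
y = rng.integers(0, 10, n_samples)            # 10 action categories (random placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Normalize, reduce dimensionality with PCA, then classify with an SVM
# (one of the four classifiers evaluated in the paper).
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC())
clf.fit(X_tr, y_tr)
preds = clf.predict(X_te)  # one predicted action label per test image
```

Swapping `SVC()` for `DecisionTreeClassifier()`, `KNeighborsClassifier()`, or `BaggingClassifier()` would cover the other three classifiers compared in the paper; removing the `PCA` step gives the non-reduced variant.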
Pages: 12