Drone-Based Public Surveillance Using 3D Point Clouds and Neuro-Fuzzy Classifier

Cited by: 0
Authors
Abbas, Yawar [1 ]
Alarfaj, Aisha Ahmed [2 ]
Alabdulqader, Ebtisam Abdullah [3 ]
Algarni, Asaad [4 ]
Jalal, Ahmad [1 ,5 ]
Liu, Hui [6 ]
Affiliations
[1] Air Univ, Fac Comp & AI, Islamabad 44000, Pakistan
[2] Princess Nourah Bint Abdulrahman Univ, Coll Comp & Informat Sci, Dept Informat Syst, Riyadh 11671, Saudi Arabia
[3] King Saud Univ, Coll Comp & Informat Sci, Dept Informat Technol, Riyadh 12372, Saudi Arabia
[4] Northern Border Univ, Fac Comp & Informat Technol, Dept Comp Sci, Rafha 91911, Saudi Arabia
[5] Korea Univ, Coll Informat, Dept Comp Sci & Engn, Seoul 02841, South Korea
[6] Univ Bremen, Cognit Syst Lab, D-28359 Bremen, Germany
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2025, Vol. 82, No. 3
Keywords
Activity recognition; geodesic distance; pattern recognition; neuro-fuzzy classifier; action recognition
DOI
10.32604/cmc.2025.05922
CLC Number
TP [Automation & Computer Technology]
Subject Classification Code
0812
Abstract
Human Activity Recognition (HAR) in drone-captured videos has attracted growing interest across fields such as video surveillance, sports analysis, and human-robot interaction. However, recognizing actions in such videos poses several challenges: variations in human motion, complex backdrops, motion blur, occlusions, and restricted camera angles. This research presents a human activity recognition system that addresses these challenges using drones' red-green-blue (RGB) videos. The proposed system first partitions videos into frames, applies bilateral filtering to enhance object foregrounds while suppressing background interference, and then converts the frames from RGB to grayscale. The YOLO (You Only Look Once) algorithm detects and extracts humans in each frame, and their skeletons are obtained for further processing. Extracted features include joint angles, displacement and velocity, histogram of oriented gradients (HOG), 3D points, and geodesic distance. These features are optimized using Quadratic Discriminant Analysis (QDA) and fed to a Neuro-Fuzzy Classifier (NFC) for activity classification. Evaluations on the Drone-Action, Unmanned Aerial Vehicle (UAV)-Gesture, and Okutama-Action datasets substantiate the proposed system's accuracy advantage over existing methods. In particular, the system achieves recognition rates of 93% on Drone-Action, 97% on UAV-Gesture, and 81% on Okutama-Action, demonstrating its reliability in learning human activity from drone videos.
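The preprocessing stage described in the abstract (bilateral filtering followed by RGB-to-grayscale conversion) can be sketched in plain NumPy. This is a minimal illustrative sketch, not the authors' implementation: the kernel radius and sigma values below are assumptions, and a real pipeline would typically use an optimized routine such as OpenCV's `cv2.bilateralFilter` instead of this naive loop.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Naive edge-preserving bilateral filter on a 2-D grayscale array.

    Each output pixel is a weighted mean of its neighborhood, where the
    weight combines spatial closeness (sigma_s) with intensity similarity
    (sigma_r), so edges are preserved while flat regions are smoothed.
    """
    h, w = img.shape
    pad = np.pad(img.astype(np.float64), radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_s**2))  # fixed spatial kernel
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            center = pad[i + radius, j + radius]
            # Range kernel: down-weight neighbors with dissimilar intensity.
            rng = np.exp(-((patch - center) ** 2) / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

def rgb_to_gray(frame):
    """ITU-R BT.601 luma conversion for an (H, W, 3) RGB frame."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# Per-frame preprocessing: filter each channel's luma before detection.
frame = (np.random.rand(64, 64, 3) * 255.0)  # stand-in for a decoded video frame
gray = bilateral_filter(rgb_to_gray(frame))
```

The filtered grayscale frame would then be passed to the person-detection stage (YOLO in the paper's pipeline).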
Pages: 4759-4776
Page count: 18