Surveillance video analysis for student action recognition and localization inside computer laboratories of a smart campus

Cited by: 26
Authors
Rashmi, M. [1 ]
Ashwin, T. S. [1 ]
Guddeti, Ram Mohana Reddy [1 ]
Affiliations
[1] Natl Inst Technol Karnataka Surathkal, Dept Informat Technol, Mangalore 575025, India
Keywords
Human action recognition; Smart campus; Object detection; Object localization; Neural networks; Computer enabled laboratories; ENGAGEMENT; AGREEMENT; RETRIEVAL;
DOI
10.1007/s11042-020-09741-5
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In the era of the smart campus, unobtrusive monitoring of students is a challenging task. A monitoring system must be able to recognize and localize the actions performed by students. Many deep neural network based approaches have recently been proposed to automate Human Action Recognition (HAR) in various domains, but they have not been explored in learning environments. HAR can be used in classrooms, laboratories, and libraries to make the teaching-learning process more effective. To that end, this study proposes a system for recognizing and localizing student actions in still images extracted from Closed-Circuit Television (CCTV) videos recorded in computer laboratories. The proposed method uses YOLOv3 (You Only Look Once), a state-of-the-art real-time object detector, to localize and recognize students' actions. In addition, image template matching is used to discard near-duplicate frames, reducing the number of frames to be analyzed and thus speeding up video processing. Because human actions are domain specific and no standard dataset exists for student action recognition in computer laboratories, we created the STUDENT ACTION dataset from image frames captured by CCTV cameras installed in a university computer laboratory. The proposed method recognizes various actions performed by students at different locations within an image frame, and it identifies actions with many training samples more accurately than actions with few samples.
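To make the described two-stage pipeline concrete, the sketch below illustrates the idea in Python with OpenCV: template matching between consecutive CCTV frames to skip near-duplicates, followed by YOLOv3 inference on the frames that remain. This is a minimal illustration under stated assumptions, not the authors' implementation; the model files, video path, input size, and the similarity and confidence thresholds are hypothetical placeholders, and OpenCV's DNN module is used here only as one common way to run a trained YOLOv3 network.

```python
import cv2
import numpy as np

# All file names, sizes, and thresholds below are hypothetical placeholders.
YOLO_CFG = "yolov3_student_actions.cfg"          # assumed network definition
YOLO_WEIGHTS = "yolov3_student_actions.weights"  # assumed trained weights
VIDEO_PATH = "cctv_lab_feed.mp4"                 # assumed CCTV recording
SIMILARITY_THRESHOLD = 0.9   # frames at least this similar are skipped
CONF_THRESHOLD = 0.5         # minimum detection confidence to keep

net = cv2.dnn.readNetFromDarknet(YOLO_CFG, YOLO_WEIGHTS)
output_layers = net.getUnconnectedOutLayersNames()


def is_near_duplicate(prev_gray, curr_gray):
    """Normalized cross-correlation between two same-sized grayscale frames;
    a score close to 1.0 means the scene has barely changed."""
    score = cv2.matchTemplate(curr_gray, prev_gray, cv2.TM_CCOEFF_NORMED)[0, 0]
    return score >= SIMILARITY_THRESHOLD


def detect_actions(frame):
    """Run YOLOv3 on one frame; return (class_id, confidence, box) tuples,
    where box is (x, y, w, h) in pixel coordinates."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    detections = []
    for output in net.forward(output_layers):
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf >= CONF_THRESHOLD:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                detections.append((class_id, conf,
                                   (int(cx - bw / 2), int(cy - bh / 2),
                                    int(bw), int(bh))))
    return detections


cap = cv2.VideoCapture(VIDEO_PATH)
prev_gray = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Stage 1: drop frames that are near-duplicates of the last kept frame.
    if prev_gray is not None and is_near_duplicate(prev_gray, gray):
        continue
    prev_gray = gray
    # Stage 2: localize and classify student actions in the kept frame.
    for class_id, conf, (x, y, bw, bh) in detect_actions(frame):
        print(f"action class {class_id} at ({x}, {y}, {bw}, {bh}), conf {conf:.2f}")
cap.release()
```

Non-maximum suppression (for example via cv2.dnn.NMSBoxes) is omitted for brevity; a real deployment would apply it to merge overlapping boxes before reporting detections.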
Pages: 2907-2929
Page count: 23