Recognizing Personal Locations From Egocentric Videos

Cited: 28
Authors
Furnari, Antonino [1 ]
Farinella, Giovanni Maria [1 ]
Battiato, Sebastiano [1 ]
Affiliations
[1] Univ Catania, Dept Math & Comp Sci, I-95124 Catania, Italy
Keywords
Context-aware computing; egocentric dataset; egocentric vision; first person vision; personal location recognition; CONTEXT; CLASSIFICATION; RECOGNITION; SCENE; SHAPE
DOI
10.1109/THMS.2016.2612002
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Contextual awareness in wearable computing enables the construction of intelligent systems that can interact with the user in a more natural way. In this paper, we study how personal locations arising from the user's daily activities can be recognized from egocentric videos. We assume that only a few training samples are available for learning. Considering the diversity of the devices available on the market, we introduce a benchmark dataset containing egocentric videos of eight personal locations acquired by a user with four different wearable cameras. To make our analysis useful in real-world scenarios, we propose a method to reject negative locations, i.e., those not belonging to any of the categories of interest to the end user. We assess the performance of the main state-of-the-art representations for scene and object classification on the considered task, as well as the influence of device-specific factors such as the field of view and the wearing modality. Concerning the device-specific factors, our experiments reveal that the best results are obtained with a head-mounted wide-angle device. Our analysis shows the effectiveness of representations based on convolutional neural networks, combined with basic transfer learning techniques and an entropy-based rejection algorithm.
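The entropy-based rejection mentioned in the abstract lends itself to a short illustration. The sketch below assumes a softmax classifier over the eight known personal locations; the function names and the `threshold` value are hypothetical choices for exposition, not the authors' implementation, which selects its rejection criterion empirically.

```python
import numpy as np

def shannon_entropy(probs, eps=1e-12):
    """Shannon entropy (in bits) of a discrete probability vector."""
    p = np.clip(probs, eps, 1.0)
    return float(-np.sum(p * np.log2(p)))

def classify_with_rejection(probs, threshold):
    """Predict a location index, or return None to reject the frame
    as a 'negative' location (none of the known categories).

    probs: softmax output over the known personal locations.
    threshold: entropy cutoff in bits (illustrative value; the paper
    tunes its rejection criterion on validation data).
    """
    if shannon_entropy(probs) > threshold:
        return None  # too uncertain: likely not a location of interest
    return int(np.argmax(probs))

# A peaked posterior is accepted; a near-uniform one is rejected.
confident = np.array([0.90, 0.04, 0.02, 0.01, 0.01, 0.01, 0.005, 0.005])
uncertain = np.full(8, 1.0 / 8)  # eight locations, as in the dataset
print(classify_with_rejection(confident, threshold=1.5))  # -> 0
print(classify_with_rejection(uncertain, threshold=1.5))  # -> None
```

The intuition is that frames from locations outside the training categories tend to spread probability mass across classes, so a high-entropy posterior flags a candidate for rejection.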
Pages: 6-18
Number of Pages: 13
Related Papers
50 records in total
  • [1] Recognizing Personal Contexts from Egocentric Images
    Furnari, Antonino
    Farinella, Giovanni M.
    Battiato, Sebastiano
    2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOP (ICCVW), 2015, : 393 - 401
  • [2] Market basket analysis from egocentric videos
    Santarcangelo, Vito
    Farinella, Giovanni Maria
    Furnari, Antonino
    Battiato, Sebastiano
    PATTERN RECOGNITION LETTERS, 2018, 112 : 83 - 90
  • [3] Personal-location-based temporal segmentation of egocentric videos for lifelogging applications
    Furnari, Antonino
    Battiato, Sebastiano
    Farinella, Giovanni Maria
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 52 : 1 - 12
  • [4] Recognizing Activities of Daily Living from Egocentric Images
    Cartas, Alejandro
    Marin, Juan
    Radeva, Petia
    Dimiccoli, Mariella
    PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2017), 2017, 10255 : 87 - 95
  • [5] Organizing egocentric videos of daily living activities
    Ortis, Alessandro
    Farinella, Giovanni M.
    D'Amico, Valeria
    Addesso, Luca
    Torrisi, Giovanni
    Battiato, Sebastiano
    PATTERN RECOGNITION, 2017, 72 : 207 - 218
  • [6] Left/right hand segmentation in egocentric videos
    Betancourt, Alejandro
    Morerio, Pietro
    Barakova, Emilia
    Marcenaro, Lucio
    Rauterberg, Matthias
    Regazzoni, Carlo
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2017, 154 : 73 - 81
  • [7] AMEGO: Active Memory from Long EGOcentric Videos
    Goletto, Gabriele
    Nagarajan, Tushar
    Averta, Giuseppe
    Damen, Dima
    COMPUTER VISION - ECCV 2024, PT XIII, 2025, 15071 : 92 - 110
  • [8] Detecting Hands in Egocentric Videos: Towards Action Recognition
    Cartas, Alejandro
    Dimiccoli, Mariella
    Radeva, Petia
    COMPUTER AIDED SYSTEMS THEORY - EUROCAST 2017, PT II, 2018, 10672 : 330 - 338
  • [9] Next-active-object prediction from egocentric videos
    Furnari, Antonino
    Battiato, Sebastiano
    Grauman, Kristen
    Farinella, Giovanni Maria
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2017, 49 : 401 - 411
  • [10] Summarization of Egocentric Videos: A Comprehensive Survey
    del Molino, Ana Garcia
    Tan, Cheston
    Lim, Joo-Hwee
    Tan, Ah-Hwee
    IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 2017, 47 (01) : 65 - 76