Development of training image database using web crawling for vision-based site monitoring

Cited: 24

Authors
Hwang, Jeongbin [1 ]
Kim, Jinwoo [2 ]
Chi, Seokho [1 ,3 ]
Seo, JoonOh [4 ]
Affiliations
[1] Seoul Natl Univ, Dept Civil & Environm Engn, 1 Gwanak Ro, Seoul 08826, South Korea
[2] Univ Michigan, Dept Civil & Environm Engn, Ann Arbor, MI 48109 USA
[3] Seoul Natl Univ, Inst Construct & Environm Engn, 1 Gwanak Ro, Seoul 08826, South Korea
[4] Hong Kong Polytech Univ, Dept Bldg & Real Estate, Hung Hom, Kowloon, Room ZN737, Hong Kong, Peoples R China
Funding
National Research Foundation of Singapore;
Keywords
Web crawling; Training image database; Construction site; Vision-based monitoring; Automated labeling; ACTION RECOGNITION; EARTHMOVING EXCAVATORS; CONSTRUCTION WORKERS; VISUAL RECOGNITION; NEURAL-NETWORKS; IDENTIFICATION; PRODUCTIVITY; EQUIPMENT; TRACKING; FEATURES;
DOI
10.1016/j.autcon.2022.104141
CLC number
TU [Building Science];
Discipline code
0813;
Abstract
Because most state-of-the-art technologies for vision-based monitoring originate from machine learning or deep learning algorithms, it is crucial to build a large and rich training image database (DB). To this end, this paper proposes an automated framework that builds a large, high-quality training DB for construction site monitoring. The framework consists of three main processes: (1) automated construction image collection using web crawling, (2) automated image labeling using an image segmentation model, and (3) fully randomized foreground-background cross-oversampling. Using the developed framework, a training DB of 5864 images for the detection of construction objects was built automatically in 53.5 min. The deep learning model trained on this DB detected construction resources with an average precision of 92.71% and a recall of 88.14%. These findings can reduce the time and effort required to develop vision-based site monitoring technologies.
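The third process, foreground-background cross-oversampling, can be pictured as randomly compositing segmented object crops onto varied backgrounds to multiply the training set. The sketch below illustrates the general idea under stated assumptions; the function and parameter names (`cross_oversample`, `foregrounds`, `backgrounds`) are illustrative and not taken from the paper.

```python
import random
import numpy as np

def cross_oversample(foregrounds, backgrounds, n_samples, seed=0):
    """Illustrative sketch of foreground-background cross-oversampling:
    paste randomly chosen segmented object crops onto randomly chosen
    backgrounds at random positions, yielding synthetic labeled images.
    Assumes each foreground is a (crop, binary_mask) pair of numpy arrays."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        fg, mask = rng.choice(foregrounds)   # (fh, fw, 3) crop, (fh, fw) mask
        bg = rng.choice(backgrounds).copy()  # (bh, bw, 3) background
        bh, bw = bg.shape[:2]
        fh, fw = fg.shape[:2]
        # pick a random top-left corner where the crop fits entirely
        y = rng.randrange(bh - fh + 1)
        x = rng.randrange(bw - fw + 1)
        region = bg[y:y + fh, x:x + fw]
        # paste only the masked (object) pixels onto the background
        region[mask.astype(bool)] = fg[mask.astype(bool)]
        # record the composite image and a synthetic bounding-box label
        samples.append((bg, (x, y, fw, fh)))
    return samples
```

Because both the foreground-background pairing and the paste location are randomized, even a small pool of crops and backgrounds yields many distinct labeled composites.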
Pages: 11
Related papers
50 results
  • [1] Site-optimized training image database development using web-crawled and synthetic images
    Hwang, Jeongbin
    Kim, Junghoon
    Chi, Seokho
    AUTOMATION IN CONSTRUCTION, 2023, 151
  • [2] Towards database-free vision-based monitoring on construction sites: A deep active learning approach
    Kim, Jinwoo
    Hwang, Jeongbin
    Chi, Seokho
    Seo, JoonOh
    AUTOMATION IN CONSTRUCTION, 2020, 120 (120)
  • [3] Multi-camera vision-based productivity monitoring of earthmoving operations
    Kim, Jinwoo
    Chi, Seokho
    AUTOMATION IN CONSTRUCTION, 2020, 112
  • [4] A few-shot learning approach for database-free vision-based monitoring on construction sites
    Kim, Jinwoo
    Chi, Seokho
    AUTOMATION IN CONSTRUCTION, 2021, 124
  • [5] A critical review of vision-based occupational health and safety monitoring of construction site workers
    Zhang, Mingyuan
    Shi, Rui
    Yang, Zhen
    SAFETY SCIENCE, 2020, 126
  • [6] Vision-Based Framework for Intelligent Monitoring of Hardhat Wearing on Construction Sites
    Mneymneh, Bahaa Eddine
    Abbas, Mohamad
    Khoury, Hiam
    JOURNAL OF COMPUTING IN CIVIL ENGINEERING, 2019, 33 (02)
  • [7] Vision-based method for semantic information extraction in construction by integrating deep learning object detection and image captioning
    Wang, Yiheng
    Xiao, Bo
    Bouferguene, Ahmed
    Al-Hussein, Mohamed
    Li, Heng
    ADVANCED ENGINEERING INFORMATICS, 2022, 53
  • [8] Vision-based monitoring of intersections
    Veeraraghavan, H
    Masoud, O
    Papanikolopoulos, N
    IEEE 5TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, PROCEEDINGS, 2002, : 7 - 12
  • [9] Vision-Based Productivity Monitoring of Tower Crane Operations during Curtain Wall Installation Using a Database-Free Approach
    Jeong, Insoo
    Hwang, Jeongbin
    Kim, Junghoon
    Chi, Seokho
    Hwang, Bon-Gang
    Kim, Jinwoo
    JOURNAL OF COMPUTING IN CIVIL ENGINEERING, 2023, 37 (04)
  • [10] Vision-based surveillance system for monitoring traffic conditions
    Park, Man-Woo
    Kim, Jung In
    Lee, Young-Joo
    Park, Jinwoo
    Suh, Wonho
    MULTIMEDIA TOOLS AND APPLICATIONS, 2017, 76 (23) : 25343 - 25367