Efficient Deep CNN-Based Fire Detection and Localization in Video Surveillance Applications

Cited by: 321
Authors
Muhammad, Khan [1 ]
Ahmad, Jamil [1 ]
Lv, Zhihan [2 ]
Bellavista, Paolo [3 ]
Yang, Po [4 ]
Baik, Sung Wook [1 ]
Affiliations
[1] Sejong Univ, Digital Contents Res Inst, Intelligent Media Lab, Seoul 143747, South Korea
[2] Qingdao Univ, Sch Data Sci & Software Engn, Qingdao 266071, Shandong, Peoples R China
[3] Univ Bologna, Dept Comp Sci & Engn, I-40126 Bologna, Italy
[4] Liverpool John Moores Univ, Sch Comp Sci, Liverpool L3 5UA, Merseyside, England
Source
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS | 2019, Vol. 49, No. 7
Funding
National Research Foundation of Singapore
Keywords
Convolutional neural networks (CNNs); deep learning; fire detection; fire disaster; fire localization; image classification; surveillance networks; CONVOLUTIONAL NEURAL-NETWORKS; REAL-TIME FIRE; FLAME DETECTION; COLOR; FRAMEWORK; COMBINATION;
DOI
10.1109/TSMC.2018.2830099
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Convolutional neural networks (CNNs) have yielded state-of-the-art performance in image classification and other computer vision tasks. Their application in fire detection systems will substantially improve detection accuracy, which will eventually minimize fire disasters and reduce the ecological and social ramifications. However, the major concern with CNN-based fire detection systems is their implementation in real-world surveillance networks, due to their high memory and computational requirements for inference. In this paper, we propose an original, energy-friendly, and computationally efficient CNN architecture, inspired by the SqueezeNet architecture for fire detection, localization, and semantic understanding of the scene of the fire. It uses smaller convolutional kernels and contains no dense, fully connected layers, which helps keep the computational requirements to a minimum. Despite its low computational needs, the experimental results demonstrate that our proposed solution achieves accuracies that are comparable to other, more complex models, mainly due to its increased depth. Moreover, this paper shows how a tradeoff can be reached between fire detection accuracy and efficiency, by considering the specific characteristics of the problem of interest and the variety of fire data.
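The abstract's efficiency claim rests on SqueezeNet-style design: 1x1 "squeeze" convolutions feeding small parallel "expand" branches, with no dense layers. A back-of-the-envelope parameter count illustrates why this cuts memory so sharply; the channel sizes below are illustrative, not taken from the paper.

```python
# Parameter-count sketch: a SqueezeNet-style "fire module" versus a plain
# 3x3 convolution producing the same number of output channels.

def conv_params(in_ch, out_ch, k):
    """Weight count of a k x k convolution (bias terms omitted)."""
    return in_ch * out_ch * k * k

def fire_module_params(in_ch, squeeze_ch, expand_ch):
    """Fire module: 1x1 squeeze, then parallel 1x1 and 3x3 expand branches
    whose outputs are concatenated (2 * expand_ch channels total)."""
    squeeze = conv_params(in_ch, squeeze_ch, 1)   # 1x1 squeeze
    expand1 = conv_params(squeeze_ch, expand_ch, 1)  # 1x1 expand branch
    expand3 = conv_params(squeeze_ch, expand_ch, 3)  # 3x3 expand branch
    return squeeze + expand1 + expand3

# A plain 3x3 conv mapping 128 -> 128 channels:
plain = conv_params(128, 128, 3)        # 147,456 weights
# A fire module with a 16-channel squeeze and two 64-channel expand branches
# also yields 128 output channels, with far fewer weights:
fire = fire_module_params(128, 16, 64)  # 2,048 + 1,024 + 9,216 = 12,288
print(plain, fire, plain / fire)        # 147456 12288 12.0
```

Replacing the final fully connected classifier with a 1x1 convolution plus global average pooling, as SqueezeNet does, removes the largest single block of parameters in classical CNNs, which is why the record's abstract stresses the absence of dense layers.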
Pages: 1419-1434
Page count: 16
Related papers
45 entries in total
[1] Ahmad, Jamil; Muhammad, Khan; Bakshi, Sambit; Baik, Sung Wook. Object-oriented convolutional features for fine-grained image retrieval in large surveillance datasets [J]. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2018, 81: 314-330.
[2] [Anonymous]. arXiv:1602.07360, 2016.
[3] Borges, Paulo Vinicius Koerich; Izquierdo, Ebroul. A Probabilistic Approach for Vision-Based Fire Detection in Videos [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2010, 20(05): 721-731.
[4] Celik, Turgay; Demirel, Hasan; Ozkaramanli, Huseyin; Uyguroglu, Mustafa. Fire detection using statistical color model in video sequences [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2007, 18(02): 176-185.
[5] Celik, Turgay; Demirel, Hasan. Fire detection in video sequences using a generic color model [J]. FIRE SAFETY JOURNAL, 2009, 44(02): 147-158.
[6] Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi. PCANet: A Simple Deep Learning Baseline for Image Classification? [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24(12): 5017-5032.
[7] Chen, T. H. IEEE IMAGE PROC, 2004: 1707.
[8] Chi, Rui; Lu, Zhe-Ming; Ji, Qing-Ge. Real-time multi-feature based fire flame detection in video [J]. IET IMAGE PROCESSING, 2017, 11(01): 31-37.
[9] Chino, Daniel Y. T.; Avalhais, Letricia P. S.; Rodrigues, Jose F., Jr.; Traina, Agma J. M. BoWFire: Detection of Fire in Still Images by Integrating Pixel Color and Texture Analysis [J]. 2015 28TH SIBGRAPI CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES, 2015: 95-102.
[10] Di Lascio, Rosario; Greco, Antonio; Saggese, Alessia; Vento, Mario. Improving Fire Detection Reliability by a Combination of Videoanalytics [J]. IMAGE ANALYSIS AND RECOGNITION, ICIAR 2014, PT I, 2014, 8814: 477-484.