Fully automated snow depth measurements from time-lapse images applying a convolutional neural network

Cited by: 21
Authors
Kopp, Matthias [1]
Tuo, Ye [1]
Disse, Markus [1]
Affiliation
[1] Tech Univ Munich, Chair Hydrol & River Basin Management, Arcisstr 21, D-80333 Munich, Germany
Keywords
Mask R-CNN; Instance segmentation; Snow depth; Image processing; Time-lapse camera; MOUNTAINOUS CATCHMENT; ALPINE TERRAIN; CLIMATE-CHANGE; CHANGE IMPACTS; PHOTOGRAPHY; SYSTEM; COVER
DOI
10.1016/j.scitotenv.2019.134213
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Subject Classification Code
08; 0830
Abstract
Time-lapse cameras combined with simple measuring rods can form a highly reliable, low-cost sensor network for monitoring snow depth at high spatial and temporal resolution. Depending on the number of cameras and the recording interval, such a network produces large sets of image time series. To extract snow depth time series from these image collections within an acceptable time, automated processing methods have to be applied. Besides classic image processing based on edge detection, there are now ready-to-use convolutional neural network frameworks such as Mask R-CNN that perform instance segmentation and thus allow fully automated snow depth measurements from images of a detectable measuring rod. This study investigates the applicability of Mask R-CNN embedded in a newly developed workflow for snow depth measurements. The new method is compared to an automated image processing method implemented with functionalities provided by the OpenCV library. The quality of both methods was assessed against manual evaluations of the image series. The newly introduced workflow outperforms the classic image processing method in terms of stability, accuracy and portability. By applying the Mask R-CNN framework, the overall RMSE of the two considered time series is reduced to approximately 20% of the value produced by the classic image processing approach. Moreover, the share of values within five centimeters of the reference value increased from 75% to 88% on average. Since no parameters have to be adjusted, the Mask R-CNN framework detects known shapes reliably in almost any environment, making the presented method highly flexible. (C) 2019 Elsevier B.V. All rights reserved.
Pages: 10
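
The following is a minimal, illustrative sketch of the rod-based measurement principle described in the abstract: a Mask R-CNN model segments the measuring rod in a frame, and snow depth is derived as the difference between the rod's known physical length and its visible length above the snow surface. This is not the authors' implementation; it assumes a torchvision Mask R-CNN fine-tuned on a single "rod" class, a hypothetical weights file ("rod_maskrcnn.pth"), and a pixel-to-centimeter scale calibrated from a snow-free reference frame.

# Minimal sketch (not the authors' code) of the measurement principle:
# segment the measuring rod with Mask R-CNN, take the vertical extent of the
# mask as the visible rod length, and subtract it from the known rod length.
# Assumptions (illustrative only): a torchvision Mask R-CNN fine-tuned on a
# single "rod" class, a hypothetical weights file "rod_maskrcnn.pth", and a
# pixel-to-centimeter scale calibrated from a snow-free reference frame.

from typing import Optional

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

ROD_LENGTH_CM = 200.0    # physical length of the measuring rod (assumed)
CM_PER_PIXEL = 0.35      # calibration constant from a snow-free frame (assumed)
SCORE_THRESHOLD = 0.9    # minimum detection confidence to accept a rod mask

# Background + "rod" class; weights are assumed to come from prior fine-tuning.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)
model.load_state_dict(torch.load("rod_maskrcnn.pth", map_location="cpu"))
model.eval()


def snow_depth_cm(image_path: str) -> Optional[float]:
    """Estimate the snow depth for one time-lapse frame, or return None
    if no rod is detected with sufficient confidence."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]

    # Keep only confident detections; pick the highest-scoring rod instance.
    keep = prediction["scores"] > SCORE_THRESHOLD
    if not keep.any():
        return None
    best = prediction["scores"][keep].argmax()
    mask = prediction["masks"][keep][best, 0] > 0.5   # boolean (H, W) mask

    # Visible rod length in pixels = vertical extent of the segmented mask.
    rows = torch.nonzero(mask.any(dim=1)).squeeze(1)
    visible_px = (rows.max() - rows.min() + 1).item()

    # Snow depth = total rod length minus the part still visible above the snow.
    return max(ROD_LENGTH_CM - visible_px * CM_PER_PIXEL, 0.0)


if __name__ == "__main__":
    print(snow_depth_cm("frame_2019-02-14_1200.jpg"))

Because the rod is located by an instance segmentation mask rather than by scene-specific edge detection, no per-site thresholds need tuning, which is consistent with the portability claim made in the abstract; the calibration constants above would still have to be determined once per camera installation.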