A Lightweight Deep Learning Architecture for Vegetation Segmentation using UAV-captured Aerial Images

Cited: 25
Authors
Behera, Tanmay Kumar [1 ]
Bakshi, Sambit [1 ]
Sa, Pankaj Kumar [1 ]
Affiliations
[1] Natl Inst Technol Rourkela, Dept Comp Sci & Engn, Rourkela 769008, Odisha, India
Keywords
Remote sensing; Semantic segmentation; Depthwise separable convolution; Deep learning; Convolutional neural network (CNN); Urban mapping; Vegetation segmentation; Unmanned aerial vehicle (UAV); ENVIRONMENT; NETWORK;
DOI
10.1016/j.suscom.2022.100841
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Subject Classification
0812
Abstract
Unmanned aerial vehicle (UAV)-captured panoptic remote sensing images have great potential to promote robotics-inspired intelligent solutions for land cover mapping, disaster management, smart agriculture through automatic vegetation detection, and real-time environmental surveillance. However, many of these applications require fast task execution to operate in real time. To this end, this article proposes a lightweight convolutional neural network (CNN) architecture, termed LW-AerialSegNet, that preserves the network's feed-forward nature while deepening the intermediate layers to gather more of the features crucial for the segmentation task. Moreover, the network combines a densely connected architecture with depth-wise separable convolutions to reduce the number of model parameters, so that it can be deployed on Internet of Things (IoT) edge devices for real-time segmentation. Two UAV-based image segmentation datasets, the NITRDrone dataset and the Urban Drone Dataset (UDD), are used to evaluate the proposed architecture. It achieves an intersection over union (IoU) of 82% on NITRDrone and 71% on UDD, demonstrating its superiority over the considered state-of-the-art methods. The experimental results indicate that depth-wise separable convolutions significantly reduce the number of trainable parameters, making the model suitable for small-scale edge-computing devices. The proposed architecture can be deployed in real-life settings on a UAV to extract objects such as vegetation and road lines, and hence can be used for mapping urban areas, agricultural lands, etc.
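The parameter saving claimed for depth-wise separable convolution can be illustrated with a simple count: a standard k x k convolution needs C_in * C_out * k^2 weights, while the separable version needs only C_in * k^2 (depthwise) plus C_in * C_out (pointwise). The sketch below uses hypothetical channel sizes, not values from LW-AerialSegNet:

```python
# Parameter-count comparison: standard vs. depth-wise separable convolution.
# Channel sizes below are illustrative assumptions, not the paper's configuration.

def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights of a standard k x k convolution (bias terms omitted)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depth-wise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3
std = conv_params(c_in, c_out, k)          # 64 * 128 * 9 = 73728
sep = dw_separable_params(c_in, c_out, k)  # 64 * 9 + 64 * 128 = 8768
print(f"standard: {std}, separable: {sep}, ratio: {sep / std:.3f}")
```

For this layer the separable form keeps roughly 12% of the weights, which is the mechanism behind the trainable-parameter reduction the abstract reports for edge deployment.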
Pages: 11