Moving object detection in video satellite image based on deep learning

Cited by: 10
Authors
Zhang, Xueyang [1 ]
Xiang, Junhua [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Aerosp Sci & Technol, 109 Deya Rd, Changsha 410073, Hunan, Peoples R China
Source
LIDAR IMAGING DETECTION AND TARGET RECOGNITION 2017 | 2017 / Vol. 10605
Keywords
object detection; moving object detection; deep learning; convolutional neural networks; video satellite; transfer learning;
DOI
10.1117/12.2296714
CLC number
O43 [Optics];
Discipline codes
070207 ; 0803 ;
Abstract
Moving object detection in video satellite imagery is studied, and a detection algorithm based on deep learning is proposed. The small-scale characteristics of objects in remote sensing video are analyzed. First, a background subtraction algorithm based on an adaptive Gaussian mixture model is used to generate region proposals. The objects in the region proposals are then classified by a deep convolutional neural network, and moving objects of interest are detected by combining the classification results with prior information about the sub-satellite point. The network is a 21-layer residual convolutional neural network whose parameters are trained by transfer learning. Experimental results on video from the Tiantuo-2 satellite demonstrate the effectiveness of the algorithm.
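The proposal stage described in the abstract can be sketched in pure Python. This is a simplified illustration, not the authors' implementation: it uses a single adaptive Gaussian per pixel (a one-component simplification of the adaptive Gaussian mixture model) to flag foreground pixels, then turns 4-connected foreground components into bounding-box region proposals. The subsequent 21-layer residual CNN classifier and the sub-satellite-point prior are omitted; all function names and thresholds here are illustrative assumptions.

```python
def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """One background-subtraction step on a grayscale frame.

    Each pixel keeps a running Gaussian (mean, variance); a pixel more
    than k standard deviations from its mean is flagged as foreground.
    Returns (fg_mask, mean, var) with the model updated in place.
    """
    h, w = len(frame), len(frame[0])
    fg = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = frame[y][x] - mean[y][x]
            if d * d > k * k * var[y][x]:        # deviates from background model
                fg[y][x] = True
            mean[y][x] += alpha * d              # running update of the Gaussian
            var[y][x] = max((1 - alpha) * var[y][x] + alpha * d * d, 1.0)
    return fg, mean, var

def region_proposals(fg):
    """Bounding boxes (x0, y0, x1, y1) of 4-connected foreground blobs."""
    h, w = len(fg), len(fg[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if fg[y][x] and not seen[y][x]:
                stack, seen[y][x] = [(y, x)], True
                y0 = y1 = y
                x0 = x1 = x
                while stack:                     # iterative flood fill
                    cy, cx = stack.pop()
                    y0, y1 = min(y0, cy), max(y1, cy)
                    x0, x1 = min(x0, cx), max(x1, cx)
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and fg[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    return boxes

# Toy demo: warm the model up on empty frames, then introduce a bright blob.
h, w = 8, 8
mean = [[0.0] * w for _ in range(h)]
var = [[1.0] * w for _ in range(h)]
blank = [[0] * w for _ in range(h)]
for _ in range(10):
    _, mean, var = update_background(mean, var, blank)

frame = [row[:] for row in blank]
for y in (2, 3):
    for x in (4, 5):
        frame[y][x] = 255                        # a 2x2 "moving object"
fg, mean, var = update_background(mean, var, frame)
boxes = region_proposals(fg)
print(boxes)  # → [(4, 2, 5, 3)]
```

In the paper's pipeline, each such box would be cropped, resized, and passed to the residual CNN for classification, with the sub-satellite-point prior used to reject implausible detections.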
Pages: 8
References
19 records in total
[1]  
[Anonymous], 2013, Proceedings of the 31st International Conference on Machine Learning
[2]  
[Anonymous], Computer Vision and Pattern Recognition
[3]  
[Anonymous], 2014, European Conference on Computer Vision
[4]  
Dai J., 2016, Advances in Neural Information Processing Systems, V29, P379, DOI 10.48550/arXiv.1605.06409
[5]  
Girshick R., 2014, IEEE Conference on Computer Vision and Pattern Recognition, DOI 10.1109/CVPR.2014.81
[6]   Fast R-CNN [J].
Girshick, Ross .
2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, :1440-1448
[7]  
He K., 2016, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, P770
[8]  
Joseph RK, 2016, CRIT POL ECON S ASIA, P1
[9]  
Krizhevsky A., 2009, Learning Multiple Layers of Features from Tiny Images
[10]   ImageNet Classification with Deep Convolutional Neural Networks [J].
Krizhevsky, Alex ;
Sutskever, Ilya ;
Hinton, Geoffrey E. .
COMMUNICATIONS OF THE ACM, 2017, 60 (06) :84-90