Patchwork: A Patch-wise Attention Network for Efficient Object Detection and Segmentation in Video Streams

Cited by: 21
Authors
Chai, Yuning [1 ,2 ]
Affiliations
[1] Google Inc, Mountain View, CA 94043 USA
[2] Waymo LLC, Mountain View, CA 94043 USA
Source
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019) | 2019
DOI
10.1109/ICCV.2019.00351
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent advances in single-frame object detection and segmentation have motivated a wide range of work extending these methods to video streams. In this paper, we explore hard attention for latency-sensitive applications. Instead of reasoning about every frame in its entirety, our method selects and processes only a small sub-window of each frame, then makes predictions for the full frame based on the sub-windows from previous frames and the update from the current one. The latency reduction from this hard attention mechanism comes at the cost of degraded accuracy, and we make two contributions to address this. First, we propose a specialized memory cell that recovers context lost when processing sub-windows. Second, we adopt a Q-learning-based policy training strategy that enables our approach to select sub-windows intelligently, so that staleness in the memory hurts performance the least. Our experiments suggest that our approach reduces latency by approximately a factor of four without significantly sacrificing accuracy on the ImageNet VID video object detection dataset and the DAVIS video object segmentation dataset. We further demonstrate that the saved computation can be reinvested into other parts of the network, yielding an accuracy increase at a computational cost comparable to the original system and beating other recently proposed state-of-the-art methods in the low-latency range.
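
The abstract describes a per-frame loop: a learned policy picks one sub-window, only that crop runs through the expensive backbone, the result is folded into a persistent spatial memory, and the full-frame prediction is read from that memory. The Python/NumPy sketch below illustrates this control flow only; it is not the paper's implementation. All names (PatchworkLoop, extract_features, and so on) are hypothetical, the memory update is a plain overwrite rather than the paper's learned context-recovering cell, and a simple staleness heuristic stands in for the trained Q-function.

import numpy as np

class PatchworkLoop:
    """Per-frame loop: pick a sub-window, process only that crop,
    fold the result into a persistent memory, and predict on the
    full-frame memory."""

    def __init__(self, frame_hw=(256, 256), window_hw=(128, 128), feat_dim=8):
        self.frame_hw = frame_hw
        self.window_hw = window_hw
        # Persistent spatial memory: one feature vector per pixel.
        self.memory = np.zeros((*frame_hw, feat_dim))
        self.feat_dim = feat_dim

    def candidate_windows(self, stride=64):
        h, w = self.frame_hw
        wh, ww = self.window_hw
        return [(y, x) for y in range(0, h - wh + 1, stride)
                       for x in range(0, w - ww + 1, stride)]

    def q_values(self):
        # Stand-in for the learned Q-function: score each candidate
        # window by memory staleness (low feature energy), so regions
        # the memory knows least about are refreshed first.
        wh, ww = self.window_hw
        return np.array([-np.abs(self.memory[y:y+wh, x:x+ww]).mean()
                         for (y, x) in self.candidate_windows()])

    def extract_features(self, crop):
        # Stand-in for the expensive backbone, run only on the crop.
        return np.repeat(crop.mean(axis=-1, keepdims=True),
                         self.feat_dim, axis=-1)

    def step(self, frame):
        # 1. Hard attention: choose one sub-window via the Q-values.
        cands = self.candidate_windows()
        y, x = cands[int(np.argmax(self.q_values()))]
        wh, ww = self.window_hw
        # 2. Process only the crop and write it into the memory
        #    (a plain overwrite here; the paper uses a learned cell
        #    that also recovers context lost at the crop boundary).
        crop = frame[y:y+wh, x:x+ww]
        self.memory[y:y+wh, x:x+ww] = self.extract_features(crop)
        # 3. Predict for the full frame from the (partly stale) memory;
        #    a detection or segmentation head would read this tensor.
        return self.memory

loop = PatchworkLoop()
for t in range(3):
    frame = np.random.rand(256, 256, 3)  # stand-in video frame
    full_frame_features = loop.step(frame)

The latency saving claimed in the abstract is visible in step(): the backbone runs on a 128x128 crop rather than the full 256x256 frame, while the full-frame prediction is still available every step from the memory.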
Pages: 3414-3423
Page count: 10