An Overlay Accelerator of DeepLab CNN for Spacecraft Image Segmentation on FPGA

Cited by: 4
Authors
Guo, Zibo [1 ]
Liu, Kai [1 ]
Liu, Wei [2 ]
Sun, Xiaoyao [1 ]
Ding, Chongyang [1 ]
Li, Shangrong [1 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
[2] Smart Earth Key Lab, Beijing 100094, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
image semantic segmentation; instruction set architecture (ISA); field programmable gate array (FPGA); spacecraft component images; HIGH-THROUGHPUT; DESIGN;
DOI
10.3390/rs16050894
Chinese Library Classification
X [Environmental Science, Safety Science];
Discipline Code
08; 0830;
Abstract
Due to the absence of communication and coordination with external spacecraft, non-cooperative spacecraft make it difficult for a servicing spacecraft to acquire information about their pose and location. Accurate segmentation of non-cooperative spacecraft components in images is a crucial step toward autonomously sensing the pose of non-cooperative spacecraft. This paper presents a novel overlay accelerator of DeepLab Convolutional Neural Networks (CNNs) for spacecraft image segmentation on an FPGA. To this end, several software-hardware co-design aspects are investigated: (1) A CNN-domain COD (Control, Operation, Data transfer) instruction set is presented, based on a load-store architecture, to enable the implementation of accelerator overlays. (2) An RTL-based prototype accelerator is developed for the COD instruction set; it incorporates dedicated units for instruction decoding and dispatch, scheduling, memory management, and operation execution. (3) A compiler is designed that leverages tiling and operation-fusion techniques to optimize the execution of CNNs and generates binary instructions for the optimized operations. Our accelerator is implemented on a Xilinx Virtex-7 XC7VX690T FPGA running at 200 MHz. Experiments demonstrate that, with INT16 quantization, the accelerator achieves an accuracy (mIoU) of 77.84% when accelerating the DeepLabv3+ ResNet18 segmentation model on the spacecraft component images (SCIs) dataset, only 0.2% lower than that of the original full-precision model. The accelerator delivers a throughput of 184.19 GOPS and a computational efficiency (runtime throughput / theoretical roof throughput) of 88.72%. Compared to previous work, our accelerator improves performance by 1.5x and computational efficiency by 43.93% while consuming similar hardware resources. Additionally, in terms of instruction encoding, our instructions are 1.5x to 49x smaller than those of previous work when compiling the same model.
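As a quick, illustrative check of the figures above (not code from the paper), the short Python sketch below relates the reported 184.19 GOPS runtime throughput, the 88.72% computational efficiency (defined above as runtime throughput divided by theoretical roof throughput), and the 200 MHz clock. The convention of counting one multiply-accumulate (MAC) as two operations is an assumption on our part and is not stated in the abstract.

    # Illustrative arithmetic only -- not from the paper's source code.
    measured_gops = 184.19      # reported runtime throughput (GOPS)
    efficiency    = 0.8872      # reported computational efficiency
    clock_hz      = 200e6       # reported FPGA clock frequency

    # efficiency = runtime throughput / theoretical roof throughput
    roof_gops = measured_gops / efficiency            # ~207.6 GOPS

    # Assumed convention: 2 ops per multiply-accumulate (MAC)
    ops_per_cycle  = roof_gops * 1e9 / clock_hz       # ~1038 ops per cycle
    macs_per_cycle = ops_per_cycle / 2                # ~519 MACs per cycle

    print(f"theoretical roof ~= {roof_gops:.1f} GOPS")
    print(f"~{ops_per_cycle:.0f} ops/cycle, ~{macs_per_cycle:.0f} MACs/cycle")

Under these assumptions, the implied roof of roughly 208 GOPS corresponds to on the order of five hundred parallel MACs at 200 MHz, which is the scale of compute array the reported efficiency figure presupposes.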
Pages: 26