DrawerPipe: A Reconfigurable Packet Processing Pipeline for FPGA

Cited by: 0
Authors
Li J. [1]
Yang X. [1]
Sun Z. [1]
Affiliations
[1] College of Computer, National University of Defense Technology, Changsha
Source
Jisuanji Yanjiu yu Fazhan/Computer Research and Development | 2018 / Vol. 55 / No. 4
Funding
National Natural Science Foundation of China
Keywords
FPGA; Module reuse; Network functions acceleration; Programmable module interface; Reconfigurable pipeline model
DOI
10.7544/issn1000-1239.2018.20170927
Abstract
In the public cloud, flexible network functions are required to enforce network isolation, service-level agreements and security for multiple tenants. While software-based network functions are flexible, they have limited capacity, with low processing throughput and high latency. FPGAs offer good programmability and high processing throughput, and are appealing for the balance they strike between hardware performance and software flexibility. However, using FPGAs to realize network functions still lacks a unified, reconfigurable architecture. This paper presents DrawerPipe, a reconfigurable pipeline model. The model abstracts packet processing into five standard "drawers", and operators can load their modules into these drawers, which are then combined into a packet processing pipeline. Because the drawers are independent of each other, modules loaded into different drawers can be executed in parallel. Furthermore, we add a function-independent programmable interface between modules to adapt the communication format between different modules, which also relaxes the constraints imposed by a fixed interface definition. Finally, we implement a variety of network functions based on DrawerPipe. The results show that DrawerPipe not only has good scalability but also offers wire-speed processing performance and high resource utilization, making it suitable for rapid deployment of network functions. © 2018, Science Press. All rights reserved.
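The pipeline model the abstract describes can be sketched in software: five fixed "drawer" slots, each holding an interchangeable module behind one uniform interface, so that swapping a module reconfigures the pipeline without touching its neighbors. This is a minimal illustrative sketch only; the slot names, the `Packet` fields, and the metadata-dict interface are assumptions of this sketch, not the paper's actual hardware interface.

```python
# Hypothetical software sketch of the DrawerPipe idea: five standard
# "drawer" slots, each holding a swappable module with a uniform
# process(Packet) -> Packet interface. Slot names and fields are
# illustrative assumptions, not taken from the paper.
from dataclasses import dataclass, field


@dataclass
class Packet:
    payload: bytes
    metadata: dict = field(default_factory=dict)  # module-to-module hints


class Module:
    """Uniform interface every drawer module implements."""

    def process(self, pkt: Packet) -> Packet:
        return pkt  # default: pass-through


class Parser(Module):
    def process(self, pkt: Packet) -> Packet:
        pkt.metadata["len"] = len(pkt.payload)  # record packet length
        return pkt


class Classifier(Module):
    def process(self, pkt: Packet) -> Packet:
        # mark packets longer than 4 bytes (toy classification rule)
        pkt.metadata["marked"] = pkt.metadata.get("len", 0) > 4
        return pkt


class DrawerPipe:
    # hypothetical names for the five standard drawers
    SLOTS = ["parser", "classifier", "editor", "scheduler", "deparser"]

    def __init__(self):
        # every drawer starts as a pass-through module
        self.drawers = {slot: Module() for slot in self.SLOTS}

    def load(self, slot: str, module: Module) -> None:
        self.drawers[slot] = module  # reconfigure a single drawer

    def run(self, pkt: Packet) -> Packet:
        for slot in self.SLOTS:  # drawers applied in fixed order
            pkt = self.drawers[slot].process(pkt)
        return pkt


pipe = DrawerPipe()
pipe.load("parser", Parser())
pipe.load("classifier", Classifier())
out = pipe.run(Packet(b"hello"))
# out.metadata -> {"len": 5, "marked": True}
```

Because every module presents the same interface, loading a different `Classifier` into its slot changes behavior without modifying any other drawer, which is the reuse property the abstract claims for the hardware pipeline.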
Pages: 717-728
Page count: 11
Related papers
46 records in total
  • [1] Li B., Tan K., Luo L., et al., ClickNP: Highly flexible and high-performance network processing with reconfigurable hardware, Proc of the 2016 Conf on ACM SIGCOMM, pp. 1-14, (2016)
  • [2] Costa P., Migliavacca M., Pietzuch P., et al., NaaS: Network-as-a-service in the cloud, Proc of the USENIX Conf on Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services, pp. 1-6, (2012)
  • [3] Benson T., Akella A., Shaikh A., et al., CloudNaaS: A cloud networking platform for enterprise applications, Proc of the 2nd ACM Symp on Cloud Computing, pp. 8-16, (2011)
  • [4] Sherry J., Hasan S., Scott C., et al., Making middleboxes someone else's problem: Network processing as a cloud service, ACM SIGCOMM Computer Communication Review, 42, 4, pp. 13-24, (2012)
  • [5] Li J., Humphrey M., Van Ingen C., et al., eScience in the cloud: A MODIS satellite data reprojection and reduction pipeline in the Windows Azure platform, Proc of the Int Symp on Parallel & Distributed Processing, pp. 367-376, (2010)
  • [6] Koponen T., Amidon K., Balland P., et al., Network virtualization in multi-tenant datacenters, Proc of the 11th USENIX Conf on Networked Systems Design and Implementation, pp. 203-216, (2014)
  • [7] Martins J., Ahmed M., Raiciu C., et al., ClickOS and the art of network function virtualization, Proc of the 11th USENIX Conf on Networked Systems Design and Implementation, pp. 459-473, (2014)
  • [8] Bando M., Chao H.J., FlashTrie: Hash-based prefix-compressed trie for IP route lookup beyond 100 Gbps, Proc of the 2010 IEEE Int Conf on Computer Communications (IEEE INFOCOM), pp. 821-829, (2010)
  • [9] Gandhi R., Liu H.H., Hu Y.C., et al., Duet: Cloud scale load balancing with hardware and software, ACM SIGCOMM Computer Communication Review, 45, 4, pp. 27-38, (2015)
  • [10] Hong C.Y., Caesar M., Godfrey P., Finishing flows quickly with preemptive scheduling, ACM SIGCOMM Computer Communication Review, 42, 4, pp. 127-138, (2012)