Pruning Deep Reinforcement Learning for Dual User Experience and Storage Lifetime Improvement on Mobile Devices

Cited by: 10
Authors
Wu, Chao [1 ]
Cui, Yufei [1 ]
Ji, Cheng [2 ]
Kuo, Tei-Wei [1 ]
Xue, Chun Jason [1 ]
Affiliations
[1] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Keywords
Log-structured file system (LFS); mobile device; multiobjective deep reinforcement learning (RL); neural network pruning; segment cleaning; storage lifetime; user experience
DOI
10.1109/TCAD.2020.3012804
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Background segment cleaning in the log-structured file system (LFS) has a significant impact on mobile devices. A low triggering frequency of the cleaning activity cannot reclaim enough free space for subsequent I/O, thus incurring foreground segment cleaning and degrading the user experience. In contrast, a high triggering frequency can generate excessive block migrations (BMs) and impair the storage lifetime. Prior works either adopt performance-biased solutions or incur excessive memory overhead. In this article, a pruned reinforcement learning (RL)-based approach, MOBC, is proposed. By learning the behavior of I/O workloads and the status of the logical address space, MOBC adaptively reduces both the number of BMs and the number of triggered foreground segment cleanings. To integrate MOBC into resource-constrained mobile devices, a structured pruning method is proposed to reduce its time and space cost. The experimental results show that the pruned MOBC reduces the worst-case latency at the 99.9th percentile by 32.5%-68.6% and improves the storage endurance by 24.3% over existing approaches, with significantly reduced overheads.
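To make the idea concrete, the following is a minimal, illustrative Python sketch (not the authors' MOBC implementation; the state features, network sizes, reward weights, and all function names are assumptions) of the two ingredients the abstract describes: a small policy network that decides whether to trigger background segment cleaning under a scalarized multiobjective reward trading foreground cleanings against block migrations, and structured (neuron-level) pruning that shrinks the network for resource-constrained devices.

```python
# Illustrative sketch only -- not the MOBC implementation from the paper.
# (1) a tiny value network mapping a free-space/workload state to a
#     "trigger background cleaning or stay idle" decision,
# (2) a scalarized multiobjective reward penalizing foreground segment
#     cleanings (user experience) and block migrations (storage lifetime),
# (3) structured pruning that removes whole hidden neurons by weight norm.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 4   # assumed features: free-segment ratio, write rate, dirty ratio, idle time
HIDDEN = 32
ACTIONS = 2     # 0 = stay idle, 1 = trigger background segment cleaning

W1 = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, ACTIONS))

def q_values(state):
    """Forward pass of a two-layer MLP estimating per-action values."""
    h = np.maximum(state @ W1, 0.0)   # ReLU hidden layer
    return h @ W2

def reward(fg_cleanings, block_migrations, w_fg=1.0, w_bm=0.2):
    """Scalarized multiobjective reward; the weights are illustrative,
    not taken from the paper."""
    return -(w_fg * fg_cleanings + w_bm * block_migrations)

def prune_hidden_neurons(W1, W2, keep_ratio=0.5):
    """Structured pruning: drop whole hidden neurons whose incoming-weight
    L2 norm is smallest, shrinking both weight matrices."""
    norms = np.linalg.norm(W1, axis=0)                 # one norm per hidden neuron
    keep = np.argsort(norms)[-int(keep_ratio * len(norms)):]
    return W1[:, keep], W2[keep, :]

if __name__ == "__main__":
    state = np.array([0.15, 0.8, 0.6, 0.02])           # made-up state snapshot
    print("action before pruning:", int(np.argmax(q_values(state))))
    W1, W2 = prune_hidden_neurons(W1, W2, keep_ratio=0.25)
    print("hidden size after pruning:", W1.shape[1])
    print("action after pruning:", int(np.argmax(q_values(state))))
    print("example reward:", reward(fg_cleanings=2, block_migrations=120))
```

The sketch keeps only the structural idea: because pruning removes entire neurons rather than individual weights, the remaining matrices stay dense and small, which is what reduces inference time and memory on a mobile device.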
Pages: 3993-4005
Page count: 13