Performance Improvement of Linux CPU Scheduler Using Policy Gradient Reinforcement Learning for Android Smartphones

Cited by: 9
Authors
Han, Junyeong [1 ]
Lee, Sungyoung [1 ]
Affiliations
[1] LG Elect, Seoul 07336, South Korea
Keywords
ARM big.LITTLE processing architecture; energy aware scheduler; process scheduler; CPU frequency governor; reinforcement learning; policy gradient; neural network; power consumption;
DOI
10.1109/ACCESS.2020.2965548
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
The Energy Aware Scheduler (EAS) was developed and applied to the Linux kernel of recent Android smartphones to exploit the ARM big.LITTLE processing architecture efficiently. EAS organizes CPU hardware information into an Energy Model, which is used to improve CPU scheduling performance; in particular, it reduces power consumption and improves process scheduling performance. However, EAS is limited in how far it can improve CPU scheduling performance, because the Energy Model fixes the CPU hardware information to static values that do not reflect the characteristics of running tasks, such as workload changes and transitions between the running and sleep states. To solve this problem, this paper introduces the Learning Energy Aware Scheduler (Learning EAS). Learning EAS uses policy gradient reinforcement learning to adjust TARGET_LOAD, which is used to set the CPU frequency, and sched_migration_cost, which serves as the task migration criterion, according to the characteristics of the running tasks. On the LG G8 ThinQ, compared with EAS, Learning EAS improved power consumption by 2.3%-5.7%, hackbench results for process scheduling performance by 2.8%-25.5%, application entry time by 4.4%-6.1%, and application entry time under high CPU workload by 9.6%-12.5%. This paper also shows that Learning EAS is scalable: applied to high-end and low-end chipset platforms from Qualcomm Inc. and MediaTek Inc., it improved power consumption by 2.8%-7.8% and application entry time by 2.2%-7.2%, respectively, compared with EAS. Finally, this paper shows that CPU scheduling performance improves gradually as the reinforcement learning is repeated.
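To make the mechanism in the abstract concrete, the following is a minimal policy-gradient (REINFORCE) sketch in Python that tunes the two scheduler knobs named above, TARGET_LOAD and sched_migration_cost, from observed task characteristics. The state features, reward function, parameter ranges, and the linear Gaussian policy are illustrative assumptions for the sketch, not the paper's implementation, which runs against the Android/Linux scheduler and measures real power and latency.

```python
# Sketch of a policy-gradient (REINFORCE) loop that tunes two scheduler knobs,
# TARGET_LOAD and sched_migration_cost, from task-characteristic features.
# State features, reward model, and knob ranges below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 3        # e.g., CPU utilization, run/sleep transition rate, runqueue depth (assumed)
ACTION_DIM = 2       # [TARGET_LOAD (%), sched_migration_cost (us)]
ACTION_LOW = np.array([50.0, 100.0])     # assumed lower bounds for the two knobs
ACTION_HIGH = np.array([95.0, 5000.0])   # assumed upper bounds

# Linear Gaussian policy: mean = W @ state + b, with fixed exploration noise.
W = np.zeros((ACTION_DIM, STATE_DIM))
b = (ACTION_LOW + ACTION_HIGH) / 2.0
SIGMA = np.array([2.0, 200.0])
LR = 1e-4

def sample_action(state):
    """Sample knob settings from the Gaussian policy and clip to valid ranges."""
    mean = W @ state + b
    action = rng.normal(mean, SIGMA)
    return np.clip(action, ACTION_LOW, ACTION_HIGH), mean

def simulated_reward(state, action):
    """Stand-in reward trading off power against scheduling latency.
    A real deployment would measure power and hackbench/app-entry latency."""
    target_load, migration_cost = action
    power_penalty = (95.0 - target_load) * 0.01            # lower TARGET_LOAD -> higher frequency -> more power
    latency_penalty = abs(migration_cost - 1500.0) * 1e-4  # assumed sweet spot for migration cost
    return -(power_penalty + latency_penalty)

for episode in range(200):
    grads_W, grads_b, rewards = [], [], []
    for step in range(16):
        state = rng.random(STATE_DIM)      # placeholder for measured task characteristics
        action, mean = sample_action(state)
        rewards.append(simulated_reward(state, action))
        # Gradient of log N(action | mean, SIGMA^2) with respect to the mean.
        dlog_dmean = (action - mean) / (SIGMA ** 2)
        grads_W.append(np.outer(dlog_dmean, state))
        grads_b.append(dlog_dmean)
    baseline = np.mean(rewards)            # simple baseline for variance reduction
    for gW, gb, r in zip(grads_W, grads_b, rewards):
        W += LR * (r - baseline) * gW
        b += LR * (r - baseline) * gb

print("Learned TARGET_LOAD / sched_migration_cost means:", np.round(b, 1))
```

Under this (assumed) reward, the loop nudges the knob means toward settings that lower the combined power and latency penalties; the paper's version instead scores settings by measured power consumption and scheduling performance on the device.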
Pages: 11031-11045
Number of pages: 15