
Performance Improvement of Linux CPU Scheduler Using Policy Gradient Reinforcement Learning for Android Smartphones

The Energy Aware Scheduler (EAS) was developed and applied to the Linux kernel of recent Android smartphones to exploit the ARM big.LITTLE processing architecture efficiently. EAS organizes CPU hardware information into an Energy Model, which is used to improve CPU scheduling performance; in particular, it reduces power consumption and improves process scheduling performance. However, EAS is limited in how much it can improve CPU scheduling performance, because the Energy Model fixes the CPU hardware information to static values that do not reflect the characteristics of running tasks, such as workload changes and transitions between the running and sleep states. To solve this problem, this paper introduces the Learning Energy Aware Scheduler (Learning EAS). Learning EAS adjusts TARGET_LOAD, which is used to set the CPU frequency, and sched_migration_cost, which is used as the task migration criterion, according to the characteristics of the running tasks through policy gradient reinforcement learning. On the LG G8 ThinQ, Learning EAS improved power consumption by 2.3% – 5.7%, hackbench results for process scheduling performance by 2.8% – 25.5%, application entry time by 4.4% – 6.1%, and application entry time under high CPU workload by 9.6% – 12.5%, respectively, compared with EAS. This paper also shows that Learning EAS is scalable: applied to high-end and low-end chipset platforms from Qualcomm Inc. and MediaTek Inc., it improved power consumption by 2.8% – 7.8% and application entry time by 2.2% – 7.2%, respectively, compared with EAS. Finally, this paper shows that CPU scheduling performance improves gradually with repeated reinforcement learning.
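
The abstract does not include the paper's implementation details, but the core idea it describes, a policy gradient agent that nudges TARGET_LOAD and sched_migration_cost based on observed task behavior, can be sketched roughly as below. This is a minimal REINFORCE-style sketch: the state features, reward definition, action step sizes, and learning rate are illustrative assumptions, not the paper's actual design.

# Hypothetical sketch of a policy gradient tuner for two scheduler tunables,
# loosely following the idea in the abstract. Not the paper's implementation.
import numpy as np

# Discrete actions: lower / keep / raise each tunable (step sizes are assumptions).
TARGET_LOAD_STEPS = [-5, 0, 5]                  # percentage points of target CPU load
MIGRATION_COST_STEPS = [-100_000, 0, 100_000]   # nanoseconds of sched_migration_cost

ACTIONS = [(tl, mc) for tl in TARGET_LOAD_STEPS for mc in MIGRATION_COST_STEPS]


class PolicyGradientTuner:
    """Softmax policy over joint (TARGET_LOAD, sched_migration_cost) adjustments."""

    def __init__(self, n_features, lr=0.01):
        # One weight row per action; linear logits over the state features.
        self.theta = np.zeros((len(ACTIONS), n_features))
        self.lr = lr

    def _probs(self, state):
        logits = self.theta @ state
        logits -= logits.max()          # numerical stability
        e = np.exp(logits)
        return e / e.sum()

    def act(self, state):
        # Sample an adjustment pair from the current policy.
        probs = self._probs(state)
        a = np.random.choice(len(ACTIONS), p=probs)
        return a, ACTIONS[a]

    def update(self, state, action, reward):
        # REINFORCE: theta += lr * reward * grad log pi(action | state).
        probs = self._probs(state)
        grad = -np.outer(probs, state)  # -pi(b|s) * s for every action b
        grad[action] += state           # extra +s term for the action taken
        self.theta += self.lr * reward * grad


# Illustrative usage: the state could encode recent workload and run/sleep
# transitions, and the reward could combine measured power draw and hackbench
# latency; both are placeholders here.
if __name__ == "__main__":
    tuner = PolicyGradientTuner(n_features=3)
    state = np.array([0.6, 0.2, 1.0])   # e.g. load, wakeup rate, bias term
    action, (d_target_load, d_migration_cost) = tuner.act(state)
    reward = 0.1                        # placeholder measurement
    tuner.update(state, action, reward)
    print("adjust TARGET_LOAD by", d_target_load,
          "and sched_migration_cost (ns) by", d_migration_cost)

Repeating the act/measure/update loop is what the abstract refers to as performance improving gradually through the repetition of reinforcement learning.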

Keywords: reinforcement learning; performance; cpu; scheduler; learning eas

Journal Title: IEEE Access
Year Published: 2020
