As datasets grow, high-performance computing has become an increasingly important tool for artificial intelligence, particularly because of the powerful and efficient parallel computing provided by GPUs. However, it is a general concern that the rising performance of GPUs usually comes at the cost of high power consumption. In this work, we evaluate the power consumption of AMD's integrated GPU (iGPU). Specifically, by applying linear regression to collected performance-counter data, we model the power of the iGPU from real hardware measurements. Unfortunately, the profiling tool CodeXL cannot be used directly to sample power data; as a workaround, we propose a mechanism called kernel extension that enables the system data sampling needed for model evaluation. Experimental results indicate that the median absolute error of our model is less than 3%. Furthermore, we simplify the statistical model to reduce latency without significantly sacrificing accuracy or stability.
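To illustrate the general idea of a counter-based linear power model, the sketch below fits measured power against performance-counter samples and reports the median absolute percentage error. This is not the authors' code: the counter set, the CSV layout, and the use of scikit-learn are assumptions made purely for illustration.

```python
# Illustrative sketch: linear regression of iGPU power on performance counters,
# evaluated by median absolute percentage error. Counter names, file name, and
# data layout are hypothetical placeholders, not the paper's actual setup.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row is one sampling interval; the last column is the measured power (W),
# the remaining columns are counter values (e.g. busy cycles, memory traffic).
data = np.loadtxt("igpu_samples.csv", delimiter=",", skiprows=1)
X, measured_power = data[:, :-1], data[:, -1]

# Fit a linear model: power ≈ w0 + w1*c1 + w2*c2 + ...
model = LinearRegression()
model.fit(X, measured_power)

# Evaluate with the median absolute percentage error, the metric the abstract cites.
predicted = model.predict(X)
median_abs_pct_err = np.median(np.abs(predicted - measured_power) / measured_power) * 100
print(f"Median absolute percentage error: {median_abs_pct_err:.2f}%")
```

In practice the model would be fitted on one set of workloads and validated on held-out runs; the sketch keeps both steps on the same data only for brevity.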