Graphics Processing Units (GPUs) have been employed in embedded systems to handle increasing amounts of computation and to satisfy timing requirements. Due to shrinking feature sizes, chip aging and within-die parameter variations are considered among the most challenging problems for state-of-the-art processors, including GPUs. To deal with process variation, several processors use chip-level guardbanding, which runs the entire chip at the lowest (worst-core) operating frequency and therefore incurs a significant chip-level performance drop. Other processors improve performance efficiency through core-level guardbanding, which may use a different operating frequency for each core. Existing aging-management techniques are based on chip-level guardbanding and assign the same number of instructions to cores that have the same aging status. In the presence of process variation, however, these techniques are limited in minimizing the aging effect because each core experiences a different amount of stress for the same number of instructions. To tackle this problem, we propose a low-overhead, aging- and process-variation-aware workload management technique for embedded GPUs. The proposed technique considers process variation and the current aging status together, and assigns a different number of instructions to each cluster to minimize the aging effect in the presence of process variation. Results show that our technique improves GPU aging in over 95 percent of cases, whereas the state-of-the-art compiler-based technique does so in 72.25 percent of cases. Moreover, compared to the compiler-based technique, our technique reduces the performance overhead by 40 percent while achieving almost the same GPU aging improvement.
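As a rough illustration of the idea described in the abstract, the following Python sketch partitions an instruction budget across GPU clusters in inverse proportion to a per-cluster stress estimate derived from the current aging status and a process-variation factor, so that more stressed clusters receive fewer instructions. The stress model and all names here (partition_instructions, aging_status, variation_factor) are illustrative assumptions; the paper's actual assignment policy is not reproduced.

```python
# Hypothetical sketch of variation- and aging-aware instruction partitioning.
# The per-instruction stress model below is an assumption for illustration only.

def partition_instructions(total_instr, aging_status, variation_factor):
    """Split a total instruction budget across clusters.

    aging_status[i]     -- current aging level of cluster i (higher = more aged)
    variation_factor[i] -- variation-induced stress scaling of cluster i
    Clusters that accumulate more stress per instruction get fewer instructions.
    """
    # Assumed model: per-instruction stress grows with both aging and variation.
    stress_per_instr = [a * v for a, v in zip(aging_status, variation_factor)]
    # Weight each cluster inversely to its per-instruction stress.
    inv_stress = [1.0 / s for s in stress_per_instr]
    total_inv = sum(inv_stress)
    return [round(total_instr * w / total_inv) for w in inv_stress]


if __name__ == "__main__":
    # Four clusters: cluster 2 is both more aged and more affected by variation,
    # so it is assigned the smallest share of the workload.
    aging = [1.00, 1.05, 1.20, 1.02]
    variation = [1.00, 0.98, 1.10, 1.01]
    print(partition_instructions(1_000_000, aging, variation))
```

Because of rounding, the shares may differ from the total by a few instructions; a real implementation would also account for hardware aging sensors and the scheduling granularity of the GPU.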