Graphics Processing Units (GPUs) have become the accelerator of choice for data-parallel applications, enabling the execution of thousands of threads in a Single-Instruction, Multiple-Thread (SIMT) fashion. Using OpenCL terminology, GPUs offer a global memory space shared by all the threads in the GPU, as well as a local memory space shared by only a subset of the threads. Programmers can use local memory as a scratchpad to improve the performance of their applications, since it has lower latency than global memory. In the SIMT execution model, however, lock-based mechanisms used to protect shared data limit scalability. To take full advantage of the lower latency that local memory affords, and to provide an efficient synchronization mechanism, we propose GPU-LocalTM, a lightweight and efficient transactional memory (TM) for GPU local memory. To minimize the storage resources required for TM support, GPU-LocalTM allocates transactional metadata within the existing memory resources. Additionally, GPU-LocalTM implements different conflict detection mechanisms that can be selected to match the characteristics of the application. For the workloads studied in our simulation-based evaluation, GPU-LocalTM provides speedups ranging from 1.1X to 100X over serialized critical sections.