• A journal of the China Computer Federation
  • China Science and Technology Core Journal
  • Chinese Core Journal

Computer Engineering & Science


Cache management and memory scheduling based on inter-warp heterogeneity

FANG Juan,WEI Zelin,YU Tingwen   

1. (College of Computer Science, Faculty of Information Science, Beijing University of Technology, Beijing 100124, China)

     
  • Received:2018-10-08 Revised:2018-12-12 Online:2019-05-25

Abstract:

In a GPU, all threads within a warp execute the same instruction in lockstep. Memory requests from some threads in a warp may be served early, while requests from other threads experience long latencies; the warp cannot execute its next instruction until the last request is served. This phenomenon is known as memory divergence. We study inter-warp heterogeneity in GPUs, and implement and optimize a cache management mechanism and a memory scheduling policy based on inter-warp heterogeneity, which reduce the negative impact of memory divergence and cache queuing latency. Warps are classified according to their L2 cache hit rates to drive the following three components: (1) a warp-type-based cache bypassing mechanism that lets warps with low cache utilization bypass the L2 cache; (2) a warp-type-based cache insertion/promotion policy that prevents data from warps with high cache utilization from being evicted prematurely; and (3) a warp-type-based memory scheduler that prioritizes requests from warps with high cache utilization, as well as requests from the same warp. Compared with the baseline GPU, the cache management mechanism and memory scheduling policy based on inter-warp heterogeneity speed up 8 different GPGPU applications by 18.0% on average.
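The classification step that drives all three components can be illustrated with a minimal sketch. The hit-rate thresholds, function names, and the three-way `low`/`medium`/`high` split below are assumptions for illustration, not the paper's actual parameters:

```python
# Hypothetical sketch: classify warps by L2 hit rate, then use the warp type
# to decide (1) whether a warp's requests bypass the L2 cache and
# (3) the priority of its requests at the memory scheduler.

LOW_HIT_THRESHOLD = 0.2   # assumed: below this, a warp has low cache utility
HIGH_HIT_THRESHOLD = 0.8  # assumed: above this, a warp has high cache utility

def classify_warp(hits, accesses):
    """Return 'low', 'medium', or 'high' cache utility for a warp."""
    if accesses == 0:
        return "medium"            # no history yet: treat the warp neutrally
    rate = hits / accesses
    if rate < LOW_HIT_THRESHOLD:
        return "low"
    if rate > HIGH_HIT_THRESHOLD:
        return "high"
    return "medium"

def should_bypass_l2(warp_type):
    """Component (1): low-utility warps bypass the L2 cache entirely."""
    return warp_type == "low"

def request_priority(warp_type):
    """Component (3): requests from high-utility warps are served first.
    Lower number means higher scheduler priority."""
    return {"high": 0, "medium": 1, "low": 2}[warp_type]

# Example: order pending requests so high-utility warps are served first.
pending = [("w0", classify_warp(1, 10)),   # low hit rate  -> bypass, last
           ("w1", classify_warp(9, 10)),   # high hit rate -> served first
           ("w2", classify_warp(5, 10))]   # medium
order = sorted(pending, key=lambda wt: request_priority(wt[1]))
```

The design intuition matches the abstract: low-utility warps gain little from the L2, so bypassing them frees capacity and reduces queuing latency, while prioritizing high-utility warps shortens the stall of the warp waiting on its slowest request.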

Key words: cache management, memory scheduling, memory divergence, warp