
Exploring Fine-Grained In-Memory Database Performance for Modern CPUs



Modern CPUs keep integrating more cores and larger caches, which benefits in-memory databases by improving parallel processing power and cache locality. Because state-of-the-art CPUs follow diverse architectures and roadmaps, such as large core count and large cache (AMD x86), moderate core count and cache size (Intel x86), and large core count with moderate cache size (ARM), exploring in-memory database performance characteristics across these CPU architectures is important for in-memory database design and optimization. In this article, we develop a fine-grained in-memory database benchmark that evaluates the performance of each operator on different CPUs to explore how CPU hardware architectures influence performance. Contrary to the well-known conclusion that more cores and larger caches yield higher performance, we find that the cache microarchitecture plays an important role alongside core count and cache size: a shared monolithic L3 cache of moderate size beats a large disaggregated L3 cache. The experiments also show that predicting operator performance on different CPUs is difficult because of the diversity of CPU architectures and cache microarchitectures; different implementations of the same operator are not uniformly faster or slower but exhibit interleaved strong and weak performance regions shaped by the CPU hardware architecture. Intel x86 CPUs represent a cache-centric processor design, while AMD x86 and ARM CPUs represent a computing-centric design. The OLAP benchmark experiments on SSB show that OmniSciDB and OLAP Accelerator, which use a vector-wise processing model, perform better on Intel x86 CPUs than on AMD x86 CPUs, whereas the JIT-compilation-based HyPer prefers AMD x86 CPUs over Intel x86 CPUs. The CPU roadmaps of increasing core counts or improving cache locality should therefore be considered in in-memory database algorithm design and platform selection.
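
To make the idea of per-operator measurement concrete, here is a minimal sketch (not taken from the paper; the operator choice, block size, data size, and selectivity are illustrative assumptions) of how a fine-grained benchmark might time a single selection operator over an in-memory column using a vector-wise, block-at-a-time scan:

```cpp
// Hypothetical sketch: time one selection (filter) operator over an in-memory
// integer column, the kind of per-operator measurement a fine-grained
// benchmark performs. All names and parameters are illustrative only.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Vector-wise selection: scan the column in fixed-size blocks so each block's
// working set stays cache-resident, collecting matching row indices.
static std::size_t select_less_than(const std::vector<int32_t>& column,
                                    int32_t threshold,
                                    std::vector<uint32_t>& out_indices) {
    constexpr std::size_t kBlockSize = 4096;  // tuning knob, cache-dependent
    out_indices.clear();
    for (std::size_t base = 0; base < column.size(); base += kBlockSize) {
        const std::size_t end = std::min(base + kBlockSize, column.size());
        for (std::size_t i = base; i < end; ++i) {
            if (column[i] < threshold) {
                out_indices.push_back(static_cast<uint32_t>(i));
            }
        }
    }
    return out_indices.size();
}

int main() {
    // Populate a 64M-row column with pseudo-random values.
    std::mt19937 rng(42);
    std::uniform_int_distribution<int32_t> dist(0, 1'000'000);
    std::vector<int32_t> column(64u << 20);
    for (auto& v : column) v = dist(rng);

    std::vector<uint32_t> matches;
    const auto start = std::chrono::steady_clock::now();
    const std::size_t hits =
        select_less_than(column, 100'000, matches);  // roughly 10% selectivity
    const auto stop = std::chrono::steady_clock::now();

    const double ms =
        std::chrono::duration<double, std::milli>(stop - start).count();
    std::printf("selection: %zu of %zu rows matched in %.2f ms\n",
                hits, column.size(), ms);
    return 0;
}
```

Repeating such a measurement for each operator and implementation variant on CPUs with different core counts and L3 cache organizations is what exposes the interleaved strong and weak performance regions discussed above.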

Keywords: x86; cpus; memory; size; cache; performance

Journal Title: IEEE Transactions on Parallel and Distributed Systems
Year Published: 2023


