The advent of byte-addressable persistent memory opens an important opportunity for document databases: durable data can be read and written in place, without first fetching it into DRAM. Reaping this benefit is not straightforward, however, because existing document databases are tailored for disk storage and assume that data movement between disk and DRAM dominates performance. This paper shows that data indexing becomes the performance bottleneck when document databases are ported to persistent memory. The paper proposes PMLiteDB, the first persistent-memory document database with streamlined access paths. PMLiteDB introduces two techniques: direct reading and selective caching. Direct reading streamlines the translation from document IDs to document addresses whenever possible by swizzling the IDs into persistent memory references, and it guarantees that only up-to-date references are used when document movements invalidate previously swizzled references. Selective caching reduces data movement between DRAM and persistent memory by caching only frequently accessed persistent memory data pages in a DRAM buffer; other pages are read directly from persistent memory without caching. Compared to a design that adopts persistent memory as a fast disk without exploiting byte-addressability, PMLiteDB achieves a speedup of 2.33× on average and up to 6.18×.
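The abstract does not give implementation details, but the direct-reading idea can be pictured as pointer swizzling with a validity check. The following C++ sketch is illustrative only and is not PMLiteDB's actual code; the names (Document, DocRef, resolve, g_catalog) and the version-counter scheme are assumptions used to show how a swizzled persistent memory reference could be used on the fast path while stale references, invalidated by document movement, fall back to ID translation.

```cpp
// Illustrative sketch of ID swizzling with staleness detection (not PMLiteDB code).
#include <cstdint>
#include <unordered_map>

struct Document {
    uint64_t version;   // hypothetical counter, bumped whenever the document is relocated
    // ... document payload stored in persistent memory ...
};

// Hypothetical catalog translating logical document IDs to current addresses.
std::unordered_map<uint64_t, Document*> g_catalog;

struct DocRef {
    uint64_t  id;        // logical document ID (always valid)
    Document* direct;    // swizzled persistent memory reference, or nullptr
    uint64_t  version;   // document version observed when the reference was swizzled
};

// Use the swizzled pointer only if it is still up to date; otherwise
// re-translate the ID through the catalog and re-swizzle the reference.
Document* resolve(DocRef& ref) {
    if (ref.direct != nullptr && ref.direct->version == ref.version) {
        return ref.direct;                 // fast path: direct reading
    }
    Document* doc = g_catalog.at(ref.id);  // slow path: ID-to-address translation
    ref.direct = doc;                      // swizzle for subsequent accesses
    ref.version = doc->version;
    return doc;
}
```

Under this assumed scheme, repeated lookups of a hot document skip the catalog entirely, which is the kind of streamlined access path the abstract describes; selective caching would sit alongside it, deciding per page whether to stage data in a DRAM buffer or read it directly from persistent memory.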