FliT: A Library for Simple and Efficient Persistent Algorithms

Cited by: 9
Authors
Wei, Yuanhao [1 ]
Ben-David, Naama [2 ]
Friedman, Michal [3 ]
Blelloch, Guy E. [1 ]
Petrank, Erez [3 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA USA
[2] VMware Res, Palo Alto, CA USA
[3] Technion, Haifa, Israel
Source
PPOPP'22: PROCEEDINGS OF THE 27TH ACM SIGPLAN SYMPOSIUM ON PRINCIPLES AND PRACTICE OF PARALLEL PROGRAMMING | 2022
Funding
Israel Science Foundation; US National Science Foundation
Keywords
Non-volatile Memory; Concurrent Data Structures; Recoverability;
DOI
10.1145/3503221.3508436
CLC Classification
TP31 [Computer software]
Discipline Codes
081202; 0835
Abstract
Non-volatile random access memory (NVRAM) offers byte-addressable persistence at speeds comparable to DRAM. However, with caches remaining volatile, automatic cache evictions can reorder updates to memory, potentially leaving persistent memory in an inconsistent state upon a system crash. Flush and fence instructions can be used to force ordering among updates, but are expensive. This has motivated significant work studying how to write correct and efficient persistent programs for NVRAM. In this paper, we present FliT, a C++ library that facilitates writing efficient persistent code. Using the library's default mode makes any linearizable data structure durable with minimal changes to the code. FliT avoids many redundant flush instructions by using a novel algorithm to track dirty cache lines. It also allows for extra optimizations, but achieves good performance even in its default setting. To describe the FliT library's capabilities and guarantees, we define a persistent programming interface, called the P-V Interface, which FliT implements. The P-V Interface captures the expected behavior of code in which some instructions' effects are persisted and some are not. We show that the interface captures the desired semantics of many practical algorithms in the literature. We apply the FliT library to four different persistent data structures and show that across several workloads, persistence implementations, and data structure sizes, the FliT library always improves operation throughput, by at least 2.1x in all but one workload, relative to a naive implementation.
Pages
309-321 (13 pages)