Towards Enhanced I/O Performance of NVM File Systems

Cited by: 0
Authors
Bang, Jiwoo [1 ]
Kim, Chungyong [1 ]
Byun, Eun-Kyu [2 ]
Sung, Hanul [3 ]
Lee, Jaehwan [4 ]
Eom, Hyeonsang [1 ]
Affiliations
[1] Seoul Natl Univ, Dept Comp Sci & Engn, Seoul, South Korea
[2] Korea Inst Sci & Technol Informat, Div Natl Supercomp, Daejeon, South Korea
[3] Sangmyung Univ, Dept Game Design & Dev, Seoul, South Korea
[4] Korea Aerosp Univ, Dept Comp Engn, Goyang, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Persistent Memory; Non-volatile Memory; Direct Access; NVM File System; I/O Performance;
DOI
10.1109/HiPC58850.2023.00053
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Non-volatile memory (NVM) provides bulk storage capacity, like NAND flash, while offering low latency, like DRAM. By combining low-latency data access with high-capacity storage, NVM enables systems that are faster, more reliable, and more cost-effective than traditional disk-based systems. As NVM becomes a new tier in the memory hierarchy, efficiently utilizing its I/O capability is important. In this work, we evaluate the I/O performance of NVM in three aspects: the performance change with a varying number of concurrent accesses, the performance difference between remote and local accesses, and the performance change with various access granularities. We also compare the performance of NVM file systems that handle the distinct I/O characteristics of NVM. In particular, Odinfs is a state-of-the-art NVM file system that addresses the performance degradation caused by a large number of threads and by remote NUMA node accesses. We further optimize Odinfs by resolving its I/O performance degradation with a small number of threads. We evaluate the optimized version of Odinfs and show that its throughput increases by 30.91% with four or fewer threads.
Pages: 319-323 (5 pages)
Related Papers (50 total)
  • [21] Zhang, Xiaoyi; Feng, Dan; Hua, Yu; Chen, Jianxi. A Cost-efficient NVM-based Journaling Scheme for File Systems. 2017 IEEE 35th International Conference on Computer Design (ICCD), 2017: 57-64
  • [22] An, Byoung Chul; Sung, Hanul. Efficient I/O Merging Scheme for Distributed File Systems. Symmetry-Basel, 2023, 15(2)
  • [23] Fujishima, Eita; Nakashima, Kenji; Yamaguchi, Saneyasu. Hadoop I/O Performance Improvement by File Layout Optimization. IEICE Transactions on Information and Systems, 2018, E101D(2): 415-427
  • [24] Cui, Xin; Huang, Linpeng; Zheng, Shengan. ADAM: An Adaptive Directory Accelerating Mechanism for NVM-Based File Systems. Algorithms and Architectures for Parallel Processing (ICA3PP 2018), Part I, 2018, 11334: 578-592
  • [25] Zheng, Shengan; Mei, Hong; Huang, Linpeng; Shen, Yanyan; Zhu, Yanmin. Adaptive Prefetching for Accelerating Read and Write in NVM-based File Systems. 2017 IEEE 35th International Conference on Computer Design (ICCD), 2017: 49-56
  • [26] Kim, Hwajung; Bang, Jiwoo; Sung, Dong Kyu; Eom, Hyeonsang; Yeom, Heon Y.; Sung, Hanul. MulConn: User-Transparent I/O Subsystem for High-Performance Parallel File Systems. 2021 IEEE 28th International Conference on High Performance Computing, Data, and Analytics (HiPC 2021), 2021: 53-62
  • [27] Logan, Jeremy; Dickens, Phillip. Towards an Understanding of the Performance of MPI-IO in Lustre File Systems. 2008 IEEE International Conference on Cluster Computing, 2008: 330-335
  • [28] Li, Xiuqiao; Dong, Bin; Xiao, Limin; Ruan, Li. Performance Optimization of Small File I/O with Adaptive Migration Strategy in Cluster File System. High Performance Computing and Applications, 2010, 5938: 242-249
  • [29] Dong, Bin; Li, Xiuqiao; Xiao, Limin; Ruan, Li. Towards minimizing disk I/O contention: A partitioned file assignment approach. Future Generation Computer Systems, 2014, 37: 178-190
  • [30] Fernando, Pradeep; Kannan, Sudarsun; Gavrilovska, Ada; Schwan, Karsten. Phoenix: Memory Speed HPC I/O with NVM. Proceedings of 2016 IEEE 23rd International Conference on High Performance Computing (HiPC), 2016: 121-131