Wear-leveling-aware buddy-like memory allocator for persistent memory file systems

Cited by: 0
Authors
Yu, Zhiwang [1 ]
Yang, Chaoshu [1 ]
Zhang, Runyu [1 ]
Tian, Pengpeng [1 ]
He, Xianyu [1 ]
Zhou, Lening [1 ]
Li, Hui [1 ]
Liu, Duo [2 ]
Affiliations
[1] Guizhou Univ, Coll Comp Sci & Technol, State Key Lab Publ Big Data, Guiyang, Peoples R China
[2] Chongqing Univ, Sch Big Data & Software Engn, Chongqing, Peoples R China
Source
Future Generation Computer Systems: The International Journal of eScience | 2024, Vol. 150
Funding
National Natural Science Foundation of China
关键词
File system; Persistent memory; Wear-leveling; Memory management; Buddy allocator; Multi-grained allocator;
DOI
10.1016/j.future.2023.08.013
CLC number
TP301 [Theory, Methods]
Discipline code
081202
Abstract
Existing persistent memory file systems usually ignore the fact that persistent memories (PMs) have limited write endurance, so the underlying PM can easily be damaged by the file system's unbalanced writes. Meanwhile, existing wear-leveling-aware space management techniques mainly focus on making writes to PM more balanced rather than on reducing their overhead, which can seriously degrade the performance of persistent memory file systems. In this paper, we propose an efficient wear-leveling-aware buddy-like memory allocator, called WBAlloc, to achieve more accurate wear-leveling of PM while improving the performance of persistent memory file systems. Like the buddy memory allocator, WBAlloc adopts a multi-level allocator to manage the unused space of PM; each level manages a range of allocation granularities, which enables O(1) time complexity for both allocation and deallocation. We implement the proposed WBAlloc in the Linux kernel based on NOVA, a typical persistent memory file system. Compared with the original NOVA, DWARM, and WMAlloc (DWARM and WMAlloc are state-of-the-art wear-leveling-aware allocators for persistent memory file systems), the experimental results show that WBAlloc achieves 26.23%, 80.46%, and 15.61% performance improvement while reducing the maximum writes by up to 338.88%, 159.28%, and 29.45% on average, respectively. © 2023 Elsevier B.V. All rights reserved.
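
To make the allocation scheme described in the abstract more concrete, the C fragment below sketches a buddy-like multi-level free-list allocator with per-block wear counters: one free list per allocation granularity, so that both allocation and deallocation are O(1) list operations. This is only a minimal illustration of the general idea, not the authors' WBAlloc implementation, and all identifiers (wb_pool, wb_block, wb_alloc, wb_free, WB_LEVELS, and so on) are hypothetical.

/*
 * Minimal sketch of a buddy-like multi-level free-list allocator.
 * NOT the WBAlloc implementation from the paper; it only illustrates
 * the idea in the abstract: one free list per allocation granularity
 * (size class), giving O(1) allocation and deallocation, plus a
 * per-block wear counter that a wear-leveling policy could consult.
 * All names here are hypothetical.
 */
#include <stddef.h>
#include <stdint.h>

#define WB_LEVELS     8          /* number of size classes                 */
#define WB_BASE_SHIFT 12         /* smallest granularity: 4 KiB            */

struct wb_block {
    struct wb_block *next;       /* singly linked free list                */
    uint64_t         wear;       /* write count of this PM region          */
    int              level;      /* size class the block belongs to        */
};

struct wb_pool {
    struct wb_block *free_list[WB_LEVELS];   /* one list per granularity   */
};

/* Map a requested size to the smallest level whose blocks can hold it.   */
static int wb_level_of(size_t size)
{
    int level = 0;
    while (level < WB_LEVELS - 1 &&
           ((size_t)1 << (WB_BASE_SHIFT + level)) < size)
        level++;
    return level;
}

/* O(1) allocation: pop the head of the matching free list.               */
struct wb_block *wb_alloc(struct wb_pool *pool, size_t size)
{
    int level = wb_level_of(size);
    struct wb_block *blk = pool->free_list[level];

    if (!blk)
        return NULL;             /* a real allocator would split/refill    */
    pool->free_list[level] = blk->next;
    blk->wear++;                 /* account for the writes this use causes */
    return blk;
}

/* O(1) deallocation: push the block back onto its free list.             */
void wb_free(struct wb_pool *pool, struct wb_block *blk)
{
    blk->next = pool->free_list[blk->level];
    pool->free_list[blk->level] = blk;
}

A wear-leveling-aware allocator in the spirit of the paper would additionally have to choose among free blocks according to their wear counters (for example, preferring the least-worn block at each level) rather than simply popping the list head; the sketch above omits that policy for brevity.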
Pages: 37-48
Page count: 12