Enhancing Parallelization with OpenMP through Multi-Modal Transformer Learning

Cited by: 0
Authors
Chen, Yuehua [1 ]
Yuan, Huaqiang [1 ]
Hou, Fengyao [2 ,3 ]
Hu, Peng [2 ,3 ]
Affiliations
[1] Dongguan Univ Technol, Dongguan, Peoples R China
[2] Chinese Acad Sci, Inst High Energy Phys, Beijing, Peoples R China
[3] Spallat Neutron Source Sci Ctr, Dongguan, Peoples R China
Source
2024 5TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND APPLICATION, ICCEA 2024 | 2024
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
component; OpenMP; Natural Language Processing; Abstract Syntax Trees; Parallelization;
DOI
10.1109/ICCEA62105.2024.10603704
Chinese Library Classification (CLC)
TP39 [Computer applications];
Discipline classification code
081203; 0835;
Abstract
The popularity of multicore processors and the rise of High Performance Computing as a Service (HPCaaS) have made parallel programming essential to fully utilizing the performance of multicore systems. OpenMP, a widely adopted shared-memory parallel programming model, is favored for its ease of use, yet automating its parallelization remains challenging. Existing automation tools such as Cetus and DiscoPoP simplify parallelization, but they still face limitations when dealing with complex data dependencies and control flows. Inspired by the success of deep learning in the field of Natural Language Processing (NLP), this study adopts a Transformer-based model to tackle the problem of automatic parallelization with OpenMP directives. We propose a novel Transformer-based multimodal model, ParaMP, to improve the accuracy of OpenMP directive classification. ParaMP not only takes into account the sequential features of the code text but also incorporates structural features, enriching the model's input by representing the Abstract Syntax Tree (AST) of each code snippet as a binary tree. In addition, we built the BTCode dataset, which contains a large number of C/C++ code snippets and their corresponding simplified AST representations, to provide a basis for model training. Experimental evaluation shows that our model outperforms existing automated tools and models on key performance metrics such as F1 score and recall. This study demonstrates a significant improvement in the accuracy of OpenMP directive classification by combining the sequential and structural features of code text, offering valuable insight into applying deep learning techniques to programming tasks.
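The abstract describes enriching the model's input by representing each code snippet's AST as a binary tree, but this record does not specify the paper's exact encoding. A common way to binarize an n-ary tree is the left-child/right-sibling transformation; the sketch below (with hypothetical `Node`/`BinNode` types, not taken from the paper) illustrates the general idea on a toy AST for a parallelizable `for` loop.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """N-ary AST node: a label plus an ordered list of children."""
    label: str
    children: List["Node"] = field(default_factory=list)

@dataclass
class BinNode:
    """Binary-tree node: left = first child, right = next sibling."""
    label: str
    left: Optional["BinNode"] = None
    right: Optional["BinNode"] = None

def binarize(node: Node) -> BinNode:
    """Left-child/right-sibling encoding of an n-ary AST node."""
    b = BinNode(node.label)
    prev: Optional[BinNode] = None
    for child in node.children:
        cb = binarize(child)
        if prev is None:
            b.left = cb       # first child hangs on the left
        else:
            prev.right = cb   # later children chain via right links
        prev = cb
    return b

# Toy AST for: for (i = 0; i < n; i++) a[i] = b[i];
ast = Node("ForStmt", [Node("Init"), Node("Cond"), Node("Inc"), Node("Body")])
bt = binarize(ast)
# Init is bt.left; Cond, Inc, Body follow along the right-sibling chain.
```

A fixed two-pointer shape like this makes tree-structured inputs easier to linearize or embed uniformly, which is one plausible motivation for the binary-tree representation the abstract mentions.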
Pages: 465-469 (5 pages)