Enhancing Parallelization with OpenMP through Multi-Modal Transformer Learning

Cited by: 0
|
Authors
Chen, Yuehua [1 ]
Yuan, Huaqiang [1 ]
Hou, Fengyao [2 ,3 ]
Hu, Peng [2 ,3 ]
Affiliations
[1] Dongguan Univ Technol, Dongguan, Peoples R China
[2] Chinese Acad Sci, Inst High Energy Phys, Beijing, Peoples R China
[3] Spallat Neutron Source Sci Ctr, Dongguan, Peoples R China
Source
2024 5TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND APPLICATION, ICCEA 2024 | 2024
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
OpenMP; Natural Language Processing; Abstract Syntax Trees; Parallelization;
DOI
10.1109/ICCEA62105.2024.10603704
CLC number
TP39 [Computer Applications];
Discipline code
081203 ; 0835 ;
Abstract
The popularity of multicore processors and the rise of High Performance Computing as a Service (HPCaaS) have made parallel programming essential for fully exploiting the performance of multicore systems. OpenMP, a widely adopted shared-memory parallel programming model, is favored for its ease of use, yet automating its parallelization remains challenging. Existing automation tools such as Cetus and DiscoPoP simplify parallelization, but they still struggle with complex data dependencies and control flows. Inspired by the success of deep learning in Natural Language Processing (NLP), this study adopts a Transformer-based model to tackle the automatic parallelization of OpenMP directives. We propose ParaMP, a novel Transformer-based multimodal model that improves the accuracy of OpenMP directive classification. ParaMP takes into account not only the sequential features of the code text but also its structural features, enriching the model's input by representing the Abstract Syntax Trees (ASTs) of the code as binary trees. In addition, we built the BTCode dataset, which contains a large number of C/C++ code snippets and their corresponding simplified AST representations, as a basis for model training. Experimental evaluation shows that our model outperforms existing automated tools and models on key metrics such as F1 score and recall. By combining the sequential and structural features of code text, this study significantly improves the accuracy of OpenMP directive classification and offers valuable insight into applying deep learning techniques to programming tasks.
Pages: 465-469
Page count: 5