Enhancing Parallelization with OpenMP through Multi-Modal Transformer Learning

Times Cited: 0
|
Authors
Chen, Yuehua [1 ]
Yuan, Huaqiang [1 ]
Hou, Fengyao [2 ,3 ]
Hu, Peng [2 ,3 ]
Affiliations
[1] Dongguan Univ Technol, Dongguan, Peoples R China
[2] Chinese Acad Sci, Inst High Energy Phys, Beijing, Peoples R China
[3] Spallat Neutron Source Sci Ctr, Dongguan, Peoples R China
Source
2024 5TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND APPLICATION, ICCEA 2024 | 2024
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
component; OpenMP; Natural Language Processing; Abstract Syntax Trees; Parallelization;
DOI
10.1109/ICCEA62105.2024.10603704
CLC Number
TP39 [Computer Applications];
Discipline Code
081203 ; 0835 ;
Abstract
The popularity of multicore processors and the rise of High Performance Computing as a Service (HPCaaS) have made parallel programming essential for fully exploiting the performance of multicore systems. OpenMP, a widely adopted shared-memory parallel programming model, is favored for its ease of use, yet automating its parallelization remains challenging. Existing automation tools such as Cetus and DiscoPoP simplify parallelization, but they still struggle with complex data dependencies and control flows. Inspired by the success of deep learning in Natural Language Processing (NLP), this study adopts a Transformer-based model to tackle the automatic parallelization of OpenMP directives. We propose a novel Transformer-based multimodal model, ParaMP, to improve the accuracy of OpenMP directive classification. ParaMP considers not only the sequential features of the code text but also its structural features, enriching the model's input by representing the Abstract Syntax Trees (ASTs) of the code as binary trees. In addition, we built the BTCode dataset, which contains a large number of C/C++ code snippets and their corresponding simplified AST representations, as a basis for model training. Experimental evaluation shows that our model outperforms existing automated tools and models on key metrics such as F1 score and recall. This study demonstrates a significant improvement in the accuracy of OpenMP directive classification by combining the sequential and structural features of code, offering valuable insight into applying deep learning techniques to programming tasks.
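The abstract describes ParaMP only at a high level, so the following PyTorch sketch is purely illustrative: it shows one plausible way to fuse the two modalities mentioned above (a code token sequence and a linearized binary-tree AST) for OpenMP directive classification. The class names (ModalityEncoder, ParaMPSketch), vocabulary sizes, layer counts, the mean-pooling fusion, and the three example directive classes are all assumptions of this sketch, not details taken from the paper.

```python
# Hypothetical two-branch Transformer classifier in the spirit of ParaMP:
# one branch encodes code tokens, the other a linearized binary-tree AST;
# the fused representation is classified into OpenMP directive categories.
# All sizes and names are illustrative assumptions, not the authors' design.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Embeds a token-id sequence and runs it through a Transformer encoder."""

    def __init__(self, vocab_size: int, d_model: int = 256, n_layers: int = 4,
                 n_heads: int = 8, max_len: int = 512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, ids: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # ids, pad_mask: (batch, seq_len); pad_mask is True at padding positions.
        pos = torch.arange(ids.size(1), device=ids.device).unsqueeze(0)
        x = self.encoder(self.tok(ids) + self.pos(pos),
                         src_key_padding_mask=pad_mask)
        # Mean-pool over non-padded positions to get one vector per sample.
        keep = (~pad_mask).unsqueeze(-1).float()
        return (x * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1.0)


class ParaMPSketch(nn.Module):
    """Hypothetical two-branch model: code tokens + binary-tree AST nodes."""

    def __init__(self, code_vocab: int, ast_vocab: int, n_classes: int,
                 d_model: int = 256):
        super().__init__()
        self.code_branch = ModalityEncoder(code_vocab, d_model)
        self.ast_branch = ModalityEncoder(ast_vocab, d_model)
        self.head = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.ReLU(),
                                  nn.Linear(d_model, n_classes))

    def forward(self, code_ids, code_pad, ast_ids, ast_pad):
        fused = torch.cat([self.code_branch(code_ids, code_pad),
                           self.ast_branch(ast_ids, ast_pad)], dim=-1)
        return self.head(fused)  # logits over OpenMP directive classes


if __name__ == "__main__":
    # Toy forward pass with random ids; classes 0/1/2 could stand for, e.g.,
    # "no pragma" / "omp parallel for" / "omp parallel for reduction".
    model = ParaMPSketch(code_vocab=10000, ast_vocab=200, n_classes=3)
    code = torch.randint(0, 10000, (2, 64))
    ast = torch.randint(0, 200, (2, 64))
    pad = torch.zeros(2, 64, dtype=torch.bool)
    print(model(code, pad, ast, pad).shape)  # torch.Size([2, 3])
```

In practice the first branch would consume tokenized C/C++ source and the second a serialization of the binary-tree AST (for instance, node-type ids from a pre-order traversal); the abstract does not specify which serialization or fusion strategy the authors actually use.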
Pages: 465 - 469
Number of Pages: 5