A generic LSTM neural network architecture to infer heterogeneous model transformations

Cited by: 21
Authors
Burgueño, Lola [1,2]
Cabot, Jordi [1,3]
Li, Shuai [2]
Gérard, Sébastien [2]
Affiliations
[1] Open Univ Catalonia, IN3, Barcelona, Spain
[2] Univ Paris Saclay, CEA, Inst LIST, Gif-sur-Yvette, France
[3] ICREA, Barcelona, Spain
Keywords
Model manipulation; Code generation; Model transformation; Artificial intelligence; Machine learning; Neural networks
DOI
10.1007/s10270-021-00893-y
Chinese Library Classification
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
Models capture relevant properties of systems. Throughout their life-cycle, models are subjected to manipulations with different goals, such as managing software evolution, performing analysis, increasing developers' productivity, and reducing human errors. Typically, these manipulation operations are implemented as model transformations. Examples of such transformations are (i) model-to-model transformations for model evolution, model refactoring, model merging, model migration, model refinement, etc.; (ii) model-to-text transformations for code generation; and (iii) text-to-model transformations for reverse engineering. These operations are usually implemented manually, using general-purpose languages such as Java or domain-specific languages (DSLs) such as ATL or Acceleo. Even with such DSLs, transformations remain time-consuming and error-prone to write. We propose using advances in artificial intelligence techniques to learn these manipulation operations on models and automate the process, freeing the developer from building specific pieces of code. In particular, our proposal is a generic neural network architecture suitable for heterogeneous model transformations. The architecture comprises an encoder-decoder long short-term memory (LSTM) network with an attention mechanism. It is fed with pairs of input-output examples and, once trained, automatically produces the expected output for a given input. We present the architecture and illustrate the feasibility and potential of our approach by applying it to two main operations on models: model-to-model transformations and code generation. The results confirm that neural networks can faithfully learn to perform these tasks as long as enough data are provided and no contradictory examples are given.
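
To make the abstract's description concrete, the following is a minimal sketch in PyTorch of an encoder-decoder LSTM with attention trained on pairs of tokenized input-output examples. The layer sizes, vocabulary sizes, and the Luong-style dot-product attention variant are illustrative assumptions, not the exact configuration reported in the paper.

# Minimal encoder-decoder LSTM with dot-product attention (sketch).
# Hyperparameters and the attention variant are illustrative assumptions,
# not the configuration reported in the paper.
import torch
import torch.nn as nn

class Seq2SeqLSTM(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=128, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden * 2, tgt_vocab)  # decoder state + context

    def forward(self, src, tgt):
        # Encode the serialized input model (a token sequence).
        enc_out, state = self.encoder(self.src_emb(src))
        # Decode with teacher forcing on the expected output sequence.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        # Dot-product attention over all encoder states.
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))
        context = torch.bmm(torch.softmax(scores, dim=-1), enc_out)
        return self.out(torch.cat([dec_out, context], dim=-1))

# Training consumes input-output example pairs; the token ids here are
# placeholders standing in for serialized models or code.
model = Seq2SeqLSTM(src_vocab=1000, tgt_vocab=1000)
src = torch.randint(0, 1000, (4, 12))   # batch of 4 input sequences
tgt = torch.randint(0, 1000, (4, 10))   # corresponding output sequences
logits = model(src, tgt)                # (batch, tgt_len, tgt_vocab)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), tgt.reshape(-1))

Once trained on enough non-contradictory examples, decoding would be run autoregressively (feeding each predicted token back in) to produce the output model or code for a previously unseen input.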
Pages: 139-156
Page count: 18
Related papers
70 in total
[1] Allamanis, Miltiadis; Barr, Earl T.; Bird, Christian; Sutton, Charles. Learning Natural Coding Conventions. In: 22nd ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2014), 2014, pp. 281-293.
[2] AtlanMod (Inria). Class to Relational model transformation example.
[3] Bahdanau, Dzmitry; Cho, Kyunghyun; Bengio, Yoshua. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv:1409.0473, 2016. DOI: 10.48550/arXiv.1409.0473.
[4] Baki, Islem; Sahraoui, Houari. Multi-Step Learning and Adaptive Search for Learning Complex Model Transformations from Examples. ACM Transactions on Software Engineering and Methodology, 2016, 25(3).
[5] Balogh, Zoltán; Varró, Dániel. Model transformation by example using inductive logic programming. Software and Systems Modeling, 2009, 8(3), pp. 347-364.
[6] Barriga, Angela; Rutle, Adrian; Heldal, Rogardt. Personalized and automatic model repairing using reinforcement learning. In: 2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C 2019), 2019, pp. 175-181.
[7] Bernstein, P. A. In: SIGMOD '07, 2007, p. 1.
[8] Bernstein, P. A. Proceedings of the VLDB Endowment, 2011, 4, p. 695.
[9] Bowles, C. arXiv:1810.10863, 2018.
[10] Bruneliere, H. In: Proceedings of the IEEE/ACM International Conference on Automated Software Engineering (ASE), 2010, p. 173. DOI: 10.1145/1858996.1859032.