A generic LSTM neural network architecture to infer heterogeneous model transformations

Cited by: 21
Authors
Burgueño, Lola [1,2]
Cabot, Jordi [1,3]
Li, Shuai [2]
Gérard, Sébastien [2]
Affiliations
[1] Open Univ Catalonia, IN3, Barcelona, Spain
[2] Univ Paris Saclay, CEA, Inst LIST, Gif-sur-Yvette, France
[3] ICREA, Barcelona, Spain
Keywords
Model manipulation; Code generation; Model transformation; Artificial intelligence; Machine learning; Neural networks
DOI
10.1007/s10270-021-00893-y
CLC number
TP31 [Computer software]
Discipline classification codes
081202; 0835
Abstract
Models capture relevant properties of systems. During their life cycle, models are subjected to manipulations with different goals, such as managing software evolution, performing analysis, increasing developers' productivity, and reducing human errors. Typically, these manipulation operations are implemented as model transformations. Examples of such transformations are (i) model-to-model transformations for model evolution, model refactoring, model merging, model migration, model refinement, etc.; (ii) model-to-text transformations for code generation; and (iii) text-to-model transformations for reverse engineering. These operations are usually implemented manually, using general-purpose languages such as Java or domain-specific languages (DSLs) such as ATL or Acceleo. Even with such DSLs, writing transformations remains time-consuming and error-prone. We propose using advances in artificial intelligence techniques to learn these manipulation operations on models and automate the process, freeing the developer from writing transformation-specific code. In particular, our proposal is a generic neural network architecture suitable for heterogeneous model transformations. Our architecture comprises an encoder-decoder long short-term memory (LSTM) network with an attention mechanism. It is trained on pairs of input-output examples and, once trained, automatically produces the expected output for a given input. We present the architecture and illustrate the feasibility and potential of our approach through its application to two main operations on models: model-to-model transformations and code generation. The results confirm that neural networks can faithfully learn to perform these tasks as long as enough data are provided and no contradictory examples are given.
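
For illustration only: the abstract describes an encoder-decoder LSTM with attention trained on input-output example pairs. The sketch below shows one minimal way such an architecture can be assembled; it is not the authors' implementation. It assumes TensorFlow/Keras, and the vocabulary sizes, hidden dimension, and variable names are hypothetical placeholders.

    # Minimal sketch (assumed TensorFlow/Keras; sizes are placeholders).
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    SRC_VOCAB, TGT_VOCAB, HIDDEN = 2000, 2000, 256  # hypothetical sizes

    # Encoder: embeds the serialized input model and runs it through an LSTM,
    # keeping the per-step outputs (for attention) and the final state.
    enc_in = layers.Input(shape=(None,), dtype="int32")
    enc_emb = layers.Embedding(SRC_VOCAB, HIDDEN)(enc_in)
    enc_seq, enc_h, enc_c = layers.LSTM(
        HIDDEN, return_sequences=True, return_state=True)(enc_emb)

    # Decoder: generates the output sequence token by token, initialized
    # with the encoder's final state.
    dec_in = layers.Input(shape=(None,), dtype="int32")
    dec_emb = layers.Embedding(TGT_VOCAB, HIDDEN)(dec_in)
    dec_seq = layers.LSTM(HIDDEN, return_sequences=True)(
        dec_emb, initial_state=[enc_h, enc_c])

    # Dot-product attention: each decoder step attends over all encoder steps.
    context = layers.Attention()([dec_seq, enc_seq])
    dec_cat = layers.Concatenate()([dec_seq, context])
    out = layers.Dense(TGT_VOCAB, activation="softmax")(dec_cat)

    model = Model([enc_in, dec_in], out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # Training uses teacher forcing on (input, output) example pairs, e.g.:
    # model.fit([src_tokens, tgt_tokens[:, :-1]], tgt_tokens[:, 1:], ...)

In such a setup, the serialized input model and the shifted expected output are fed during training (teacher forcing); at inference time, the decoder generates the output model or code one token at a time, conditioning on the attention context over the input.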
Pages: 139-156
Number of pages: 18