Parallel ILP for distributed-memory architectures

Cited by: 18
Authors
Fonseca, Nuno A. [1,2]
Srinivasan, Ashwin [3,4,5]
Silva, Fernando [2,6]
Camacho, Rui [7,8]
Affiliations
[1] Univ Porto, IBMC, P-4169007 Oporto, Portugal
[2] Univ Porto, CRACS, P-4169007 Oporto, Portugal
[3] Indian Inst Technol, IBM India Res Lab, New Delhi 110016, India
[4] Univ New S Wales, Dept CSE, Sydney, NSW 2052, Australia
[5] Univ New S Wales, Ctr Hlth Informat, Sydney, NSW 2052, Australia
[6] Univ Porto, Fac Ciencias, P-4169007 Oporto, Portugal
[7] Univ Porto, LIAAD, P-4200465 Oporto, Portugal
[8] Univ Porto, Fac Engn, P-4200465 Oporto, Portugal
Keywords
ILP; Parallelism; Efficiency
DOI
10.1007/s10994-008-5094-2
CLC classification
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The growth of machine-generated relational databases, both in the sciences and in industry, is rapidly outpacing our ability to extract useful information from them by manual means. This has brought into focus machine learning techniques like Inductive Logic Programming (ILP) that are able to extract human-comprehensible models for complex relational data. The price to pay is that ILP techniques are not efficient: they can be seen as performing a form of discrete optimisation, which is known to be computationally hard; and the complexity is usually some super-linear function of the number of examples. While little can be done to alter the theoretical bounds on the worst-case complexity of ILP systems, some practical gains may follow from the use of multiple processors. In this paper we survey the state-of-the-art on parallel ILP. We implement several parallel algorithms and study their performance using some standard benchmarks. The principal findings of interest are these: (1) of the techniques investigated, one that simply constructs models in parallel on each processor using a subset of data and then combines the models into a single one, yields the best results; and (2) sequential (approximate) ILP algorithms based on randomized searches have lower execution times than (exact) parallel algorithms, without sacrificing the quality of the solutions found.
Pages: 257-279
Number of pages: 23
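The abstract's first finding describes a data-parallel scheme: partition the examples across processors, induce a model independently on each subset, then merge the per-processor models into a single one. The sketch below illustrates that scheme with mpi4py on a distributed-memory cluster; `induce_rules` and `combine_models` are hypothetical placeholders for an ILP learner and a model-combination step (e.g. rule-set union with de-duplication), not the authors' actual implementation.

```python
# Hedged sketch of the "learn on data subsets, then combine" parallel scheme
# described in the abstract. Assumes mpi4py is available; induce_rules() and
# combine_models() are hypothetical stand-ins for an ILP rule inducer and a
# model-merging step.
from mpi4py import MPI


def induce_rules(examples):
    """Hypothetical: learn a rule set (list of clauses) from a subset of examples."""
    return [f"rule_for({e})" for e in examples]  # placeholder


def combine_models(models):
    """Hypothetical: merge per-processor rule sets, e.g. union + de-duplication."""
    merged = []
    for rules in models:
        for r in rules:
            if r not in merged:
                merged.append(r)
    return merged


def parallel_ilp(all_examples):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # The root process partitions the examples into one chunk per processor.
    chunks = None
    if rank == 0:
        chunks = [all_examples[i::size] for i in range(size)]

    # Each processor receives its subset and learns a local model.
    local_examples = comm.scatter(chunks, root=0)
    local_model = induce_rules(local_examples)

    # Local models are gathered at the root and combined into a single model.
    models = comm.gather(local_model, root=0)
    if rank == 0:
        return combine_models(models)
    return None


if __name__ == "__main__":
    final_model = parallel_ilp(list(range(100)))
    if final_model is not None:
        print(len(final_model), "rules in the combined model")
```

Run with, for example, `mpiexec -n 4 python parallel_ilp_sketch.py`; the number of MPI processes determines the number of data partitions, so each process sees only its own subset, mirroring the distributed-memory setting studied in the paper.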