A unified active learning framework for annotating graph data for regression task

Cited by: 1
Authors
Samoaa, Peter [1 ]
Aronsson, Linus [1 ]
Longa, Antonio [2 ]
Leitner, Philipp [3 ]
Chehreghani, Morteza Haghir [1 ]
Affiliations
[1] Chalmers Univ Technol, Data Sci & AI, Gothenburg, Sweden
[2] Univ Trento, Trento, Italy
[3] Chalmers Univ Technol, Interact Design & Software Engn, Gothenburg, Sweden
Funding
Swedish Research Council;
Keywords
Graph neural networks (GNNs); Active learning; Graph-level regression; NETWORKS;
DOI
10.1016/j.engappai.2024.109383
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In many domains, effectively applying machine learning models requires large amounts of annotated, labelled data, which are often unavailable in advance. Acquiring annotations typically demands significant time, effort, and computational resources, making the process challenging. Active learning strategies are pivotal in addressing these challenges, particularly for diverse data types such as graphs. Although active learning has been extensively explored for node-level classification, its application to graph-level learning, especially for regression tasks, remains underexplored. We develop a unified active learning framework specialized in graph annotation and graph-level learning for regression tasks, on both standard and expanded graphs (more detailed representations). We begin with graph collection and construction. We then embed the graphs into a latent space using various embedding techniques, both unsupervised and supervised. Given such an embedding, the framework becomes task-agnostic: active learning can be performed with any regression method and any query strategy suited for regression. Within this framework, we investigate the impact of using different levels of information for active and passive learning, e.g., partially available labels and unlabelled test data. Although the framework is domain-agnostic, we validate it on a real-world application, software performance prediction, where the execution time of source code is predicted and each graph is an intermediate representation of the code. We support our methodology with a real-world dataset to underscore the applicability of the approach. Our experiments reveal that satisfactory performance can be achieved by querying labels for only a small subset of the data. A key finding is that Graph2Vec (an unsupervised embedding approach for graph data) performs best, but only when all train and test features are available. Graph Neural Networks (GNNs), in contrast, are the most flexible embedding technique across different levels of information, with and without label access. In addition, we find that the benefit of active learning grows with larger datasets (more graphs) and more complex graphs, which is arguably when active learning matters most.
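
The pipeline sketched in the abstract (embed each graph into a latent space, then run an active learning loop over that space) can be illustrated with a short example. The sketch below is not the authors' implementation: it assumes the karateclub package's Graph2Vec for the unsupervised embedding, a scikit-learn Gaussian process as the regressor, and a simple maximum-predictive-uncertainty query strategy; the random graphs and clustering-coefficient targets are stand-ins for the paper's source-code graphs and execution times.

import networkx as nx
import numpy as np
from karateclub import Graph2Vec
from sklearn.gaussian_process import GaussianProcessRegressor

# Step 1: unsupervised graph-level embedding (Graph2Vec, karateclub implementation).
# Stand-in corpus of random graphs; the paper builds graphs from source code instead.
graphs = [nx.gnp_random_graph(20, 0.2, seed=s) for s in range(300)]
y = np.array([nx.average_clustering(g) for g in graphs])  # stand-in regression target

embedder = Graph2Vec(dimensions=64)   # karateclub expects nodes labelled 0..n-1
embedder.fit(graphs)
X = embedder.get_embedding()          # one latent vector per graph

# Step 2: active learning over the latent space. The loop only sees latent vectors,
# so the embedding, the regressor, and the query strategy are all swappable.
rng = np.random.default_rng(0)
labelled = list(rng.choice(len(X), size=10, replace=False))  # small initial seed set
pool = [i for i in range(len(X)) if i not in labelled]
model = GaussianProcessRegressor()

for _ in range(40):                                # label budget of 40 queries
    model.fit(X[labelled], y[labelled])            # retrain on the labelled subset
    _, std = model.predict(X[pool], return_std=True)
    picked = pool.pop(int(np.argmax(std)))         # query the most uncertain graph
    labelled.append(picked)                        # oracle reveals y[picked]

Because the query step operates purely on the embeddings, a supervised GNN encoder, a different regressor, or another regression query strategy can be swapped in without changing the loop, which is the task-agnostic property the abstract highlights.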
Pages: 25