Black-Box Attacks on Graph Neural Networks via White-Box Methods With Performance Guarantees
Cited by: 1
Authors:
Yang, Jielong [1]
Ding, Rui [1]
Chen, Jianyu [2]
Zhong, Xionghu [3]
Zhao, Huarong [1]
Xie, Linbo [1]
Affiliations:
[1] Jiangnan Univ, Sch Internet Things Engn, Wuxi 214122, Jiangsu, Peoples R China
[2] Beihang Univ, Inst Artificial Intelligence, Beijing 100191, Peoples R China
[3] Hunan Univ, Sch Comp Sci & Technol, Changsha 410082, Peoples R China
Source: IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, Issue 10
Abstract: Graph adversarial attacks can be classified as either white-box or black-box attacks. White-box attackers typically perform better because they can exploit the known structure of the victim model. In practical settings, however, most attackers must generate perturbations under black-box conditions, where the victim model is unknown. A fundamental question is how to leverage a white-box attacker to attack a black-box model. Some current black-box attack approaches employ white-box techniques to attack a surrogate model, with satisfactory results. Nonetheless, such white-box attackers must be meticulously designed and lack theoretical guarantees of attack effectiveness. In this article, we propose a novel framework that uses simple white-box techniques to conduct black-box attacks and provides a lower bound on attack performance. Specifically, we first employ a more expressive GCN variant, named BiasGCN, to approximate the victim model, and then use a simple white-box approach to attack the approximate model. We provide a generalization guarantee for BiasGCN and use it to derive the lower bound on attack performance. Our method is evaluated on various data sets, and the experimental results indicate that our approach surpasses recently proposed baselines.
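To make the surrogate-then-attack pattern concrete, below is a minimal numpy sketch of the generic pipeline the abstract describes: fit a simple one-layer GCN surrogate to labels queried from the (unknown) victim, then run a white-box structure attack against the surrogate. This is an illustrative simplification, not the paper's BiasGCN or its guaranteed attack; the function names (`train_surrogate`, `attack_edge`) and the brute-force edge-flip search are assumptions for exposition.

```python
import numpy as np

def normalize_adj(A):
    # Symmetrically normalized adjacency with self-loops:
    # D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix.
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_surrogate(A, X, y, n_classes, lr=0.5, epochs=200):
    # One-layer linear GCN surrogate softmax(A_hat X W), trained by
    # gradient descent on cross-entropy against victim-queried labels y.
    H = normalize_adj(A) @ X
    W = np.random.default_rng(0).normal(scale=0.1, size=(X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        P = softmax(H @ W)
        W -= lr * H.T @ (P - Y) / len(y)  # grad of CE w.r.t. W is H^T(P - Y)
    return W

def attack_edge(A, X, W, y, target):
    # White-box step against the surrogate: try flipping each edge
    # incident to `target` and keep the flip that most increases the
    # surrogate's cross-entropy loss on the target node.
    best, best_loss = None, -np.inf
    for j in range(len(A)):
        if j == target:
            continue
        A2 = A.copy()
        A2[target, j] = A2[j, target] = 1.0 - A2[target, j]
        P = softmax(normalize_adj(A2) @ X @ W)
        loss = -np.log(P[target, y[target]] + 1e-12)
        if loss > best_loss:
            best_loss, best = loss, (target, j)
    return best
```

The perturbation found on the surrogate is then transferred to the black-box victim; the paper's contribution is bounding how much attack performance can be lost in that transfer, which this toy sketch does not attempt.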