Adversarial Examples in Physical World

Cited by: 0
Authors
Wang, Jiakai [1 ]
Affiliation
[1] Beihang Univ, Beijing, Peoples R China
Source
PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021 | 2021
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Although deep neural networks (DNNs) have already made fairly high achievements and a very wide range of impact, their vulnerability has attracted considerable research interest in artificial intelligence (AI) safety and robustness in recent years. A series of works reveals that current DNNs are easily misled by elaborately designed adversarial examples. Unfortunately, this peculiarity also affects real-world AI applications and places them at potential risk. Compared with digital attacks, we are more interested in physical attacks because of their implementability in the real world. The study of physical attacks can effectively promote the safe application of AI techniques, which is of great significance to the secure development of AI.
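For readers unfamiliar with the term, the following minimal sketch illustrates how an "elaborately designed adversarial example" is typically crafted in the digital domain, using the classic Fast Gradient Sign Method (FGSM; Goodfellow et al., 2015). It is a textbook baseline rather than the method of the paper recorded here, and `model`, `x`, `y`, and `epsilon` are assumed placeholders. Physical attacks such as the adversarial patch of related paper [1] build on the same principle but must additionally keep the perturbation effective after printing and under varying lighting and viewpoints.

```python
# Minimal FGSM sketch (PyTorch). Assumptions: `model` is a differentiable
# classifier, `x` is a batch of images with pixels in [0, 1], and `y` holds
# the true class labels. None of these names come from the paper itself.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Perturb `x` within an L-infinity ball of radius `epsilon` so that the
    classifier's loss increases, i.e., craft adversarial examples."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss w.r.t. the true labels
    loss.backward()                          # gradient of the loss w.r.t. pixels
    # Take one signed-gradient step, then clip back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```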
Pages: 4925-4926
Number of pages: 2
Related Papers
2 records in total
[1] Liu, Aishan; Wang, Jiakai; Liu, Xianglong; Cao, Bowen; Zhang, Chongzhi; Yu, Hang. Bias-Based Universal Adversarial Patch Attack for Automatic Check-Out. COMPUTER VISION - ECCV 2020, PT XIII, 2020, 12358: 395-410.
[2] Wang, Jiakai. Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World. 2021, P1.