Multi-view anomaly detection via hybrid instance-neighborhood aligning and cross-view reasoning

Citations: 0
|
Authors
Tian, Luo [1 ,2 ]
Peng, Shu-Juan [1 ,2 ]
Liu, Xin [1 ,2 ]
Chen, Yewang [2 ]
Cao, Jianjia [3 ,4 ]
Affiliations
[1] Huaqiao Univ, Dept Comp Sci, 668 Jimei St, Xiamen 361021, Fujian, Peoples R China
[2] Xiamen Key Lab Comp Vis & Pattern Recognit, 668 Jimei St, Xiamen 361021, Fujian, Peoples R China
[3] Xiamen Wangsu Ltd Co, Artificial Intelligence Res Inst, 64 Chengyi North St, Xiamen 361000, Fujian, Peoples R China
[4] Huaqiao Univ, Fujian Key Lab Big Data Intelligence & Secur, Xiamen 361021, Peoples R China
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
Multi-view learning; Anomaly detection; Cross-view reasoning; Inter-view dependency and discrepancy; Instance-neighborhood aligning;
DOI
10.1007/s00530-024-01526-2
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
Multi-view anomaly detection aims to identify anomalous instances whose patterns are inconsistent across different views. Existing works usually project the multi-view data into a common subspace to identify abnormal instances, yet they often fail to explicitly model the inter-view dependency and discrepancy among the views, which are crucial for interactively detecting inconsistent cross-view patterns. To address this problem, we propose an efficient multi-view anomaly detection method via instance-neighborhood aligning and cross-view reasoning, which explicitly parses the inter-view dependency and discrepancy to detect various kinds of anomalous multi-view instances. Specifically, we first utilize view-specific encoders to project the original data into a latent feature space, into which a novel instance-neighborhood aligning scheme is seamlessly embedded to preserve the consistent neighborhood structures of multiple views and maximize the consistency of semantically relevant instances, indirectly enhancing the inter-view dependencies. Meanwhile, a cross-view reasoning module is designed to explore the inter-view dependencies and discrepancies, explicitly amplifying inter-view correlations and differences to reason about inconsistent view patterns. Through the joint exploitation of the view-specific reconstruction loss, the instance-neighborhood aligning loss, and the cross-view reasoning loss, different kinds of anomalous multi-view instances can be detected more reliably. Extensive quantitative and qualitative experiments on benchmark datasets verify the advantages of the proposed multi-view anomaly detection framework and show substantial improvements over the state of the art. The code is available at: https://github.com/tl-git320/INA-CR.
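The scoring idea the abstract describes can be illustrated with a minimal, hypothetical sketch. This is not the authors' implementation (their method uses trained view-specific encoders and three learned losses); here, for two views of equal dimensionality, each instance is scored by a cross-view discrepancy term plus a neighborhood-misalignment term (disagreement between its kNN sets in the two views), and the learned reconstruction term is omitted. The function names `knn_indices` and `anomaly_scores` are assumptions for illustration.

```python
import numpy as np

def knn_indices(X, k):
    """Row-wise k-nearest-neighbor indices by Euclidean distance (self excluded)."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def anomaly_scores(view_a, view_b, k=3):
    """Score = cross-view discrepancy + neighborhood misalignment.

    Discrepancy: distance between per-view z-scored features (the two views
    must share a dimensionality in this toy version; the paper's method
    instead compares learned latent representations).
    Misalignment: Jaccard distance between each instance's kNN sets
    computed separately in the two views.
    """
    za = (view_a - view_a.mean(0)) / (view_a.std(0) + 1e-8)
    zb = (view_b - view_b.mean(0)) / (view_b.std(0) + 1e-8)
    discrepancy = np.linalg.norm(za - zb, axis=1)
    na, nb = knn_indices(view_a, k), knn_indices(view_b, k)
    misalign = np.array([
        1.0 - len(set(na[i]) & set(nb[i])) / len(set(na[i]) | set(nb[i]))
        for i in range(view_a.shape[0])
    ])
    return discrepancy + misalign

# Demo: two noisy views of the same latent points; instance 0 is shifted
# in view B only, so its pattern is inconsistent across views.
rng = np.random.default_rng(0)
latent = rng.normal(size=(20, 2))
view_a = latent + 0.1 * rng.normal(size=latent.shape)
view_b = latent + 0.1 * rng.normal(size=latent.shape)
view_b[0] += 10.0
scores = anomaly_scores(view_a, view_b)
```

An instance that is normal in each view separately but inconsistent between views (like instance 0 above) gets a high score from both terms, which mirrors the paper's motivation that single-view or fusion-only detectors miss such cases.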
Pages: 14
Related papers
35 records in total
  • [31] Debunking Free Fusion Myth: Online Multi-view Anomaly Detection with Disentangled Product-of-Experts Modeling
    Wang, Hao
    Cheng, Zhi-Qi
    Sun, Jingdong
    Yang, Xin
    Wu, Xiao
    Chen, Hongyang
    Yang, Yan
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023: 3277 - 3286
  • [32] Multi-View Non-negative Matrix Factorization Discriminant Learning via Cross Entropy Loss
    Liu, Jian-Wei
    Wang, Yuan-Fang
    Lu, Run-Kun
    Luo, Xiong-Lin
    PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020: 3964 - 3971
  • [33] MOL: Towards accurate weakly supervised remote sensing object detection via Multi-view nOisy Learning
    Wang, Guanchun
    Zhang, Xiangrong
    Peng, Zelin
    Jia, Xiuping
    Tang, Xu
    Jiao, Licheng
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2023, 196 : 457 - 470
  • [34] An automatic multi-view disease detection system via Collective Deep Region-based Feature Representation
    Zhou, Jianhang
    Zhang, Qi
    Zhang, Bob
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2021, 115: 59 - 75
  • [35] Mix before Align: Towards Zero-shot Cross-lingual Sentiment Analysis via Soft-mix and Multi-view Learning
    Zhu, Zhihong
    Cheng, Xuxin
    Chen, Dongsheng
    Huang, Zhiqi
    Li, Hongxiang
    Zou, Yuexian
    INTERSPEECH 2023, 2023: 3969 - 3973