Fast POI anomaly detection using a weakly-supervised temporal state regression network

Times Cited: 0
Author
Yao, Xin [1 ]
Affiliation
[1] Alibaba Grp, Beijing 100102, Peoples R China
Source
COMPUTATIONAL URBAN SCIENCE | 2024, Vol. 4, No. 1
Keywords
POI; Anomaly detection; Human activity; Time series; Weakly-supervised learning; SERIES; REPRESENTATIONS
DOI
10.1007/s43762-024-00151-z
Chinese Library Classification
TP39 [Computer applications]
Discipline Code
081203; 0835
Abstract
Point-of-interest (POI) is a fundamental data type of maps. Anomalous POIs make maps outdated and degrade location-based services, and thus should be discovered as quickly as possible. Traditional POI anomaly detection methods are inefficient owing to high investigation costs. The emergence of massive human activity data provides new insight into monitoring POI states through time series modeling: when a POI becomes anomalous, the associated human activity disappears. However, human activity data have complicated temporal patterns and noise, which makes them challenging for existing time series methods to model. More importantly, there is a lag between the time a POI becomes anomalous and the time we discover it. In this research, we develop a temporal state regression network (TSRNet) for fast POI anomaly detection. The model extracts temporal features from human activity data and predicts POI state scores as anomaly indicators. Meanwhile, an inference approach is proposed to generate state score sequences as inexact labels for model training. Such weak labels enable TSRNet to identify abnormal temporal patterns as soon as they appear, so that POI outliers can be detected early. Experiments on real-world datasets from AMAP validate the feasibility of our method.
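The core idea in the abstract — infer inexact state-score labels from when a POI's activity disappears, then regress recent activity onto those scores so drops are flagged as soon as they appear — can be sketched as below. This is a toy illustration under stated assumptions, not the paper's TSRNet: the onset time, window length, ramp width, and the use of a plain linear regressor (in place of the paper's neural network) are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily visit counts for one POI: normal activity for 60 days,
# then the POI becomes anomalous (e.g. closes) and activity vanishes.
T, t_anom = 90, 60
activity = np.where(np.arange(T) < t_anom,
                    rng.poisson(20.0, T),   # normal visits with noise
                    rng.poisson(0.5, T))    # residual noise after closure

def infer_weak_labels(series, onset, ramp=7):
    """Inexact state scores: 1 = normal, 0 = anomalous, with a short
    linear ramp around the roughly known onset to reflect label noise."""
    t = np.arange(len(series))
    return np.clip((onset + ramp - t) / ramp, 0.0, 1.0)

labels = infer_weak_labels(activity, t_anom)

# Stand-in for the regression network: a windowed linear model trained by
# gradient descent to map recent activity to a state score.
W = 14                                        # lookback window (days)
X = np.stack([activity[t - W:t] for t in range(W, T)]).astype(float)
X /= X.mean() + 1e-9                          # crude normalisation
y = labels[W:]

w, b = np.zeros(W), 0.0
for _ in range(500):                          # plain MSE gradient descent
    err = X @ w + b - y
    w -= 0.01 * X.T @ err / len(y)
    b -= 0.01 * err.mean()

scores = X @ w + b
print(round(float(scores[:20].mean()), 2),    # high score: POI looks normal
      round(float(scores[-10:].mean()), 2))   # low score: anomaly indicated
```

Because the weak labels drop immediately at the (approximate) onset rather than waiting for a confirmed investigation, the regressor learns to push its score down as soon as the activity pattern changes, which is the mechanism behind the early-detection claim.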
Pages: 13