Time-Series Classification Based on Fusion Features of Sequence and Visualization

Cited by: 7
Authors
Wang, Baoquan [1 ,2 ,3 ]
Jiang, Tonghai [1 ,3 ]
Zhou, Xi [1 ,3 ]
Ma, Bo [1 ,2 ,3 ]
Zhao, Fan [1 ,3 ]
Wang, Yi [1 ,3 ]
Affiliations
[1] Chinese Acad Sci, Xinjiang Tech Inst Phys & Chem, Urumqi 830011, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Xinjiang Lab Minor Speech & Language Informat Pro, Urumqi 830011, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2020, Vol. 10, Issue 12
Keywords
time series data; classification; fusion feature; visualization; area graph; attention; FOREST;
DOI
10.3390/app10124124
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
For the task of time-series classification (TSC), some methods classify raw time-series (TS) data directly. However, certain sequence features are not evident in the time domain, and the human brain can extract visual features from visualizations to classify data. Some researchers have therefore converted TS data to images and applied image-processing methods to TSC. Although human perception combines senses from different aspects, existing methods use only sequence features or only visualization features. This paper therefore proposes a framework for TSC based on fusion features (TSC-FF): sequence features extracted from raw TS and visualization features extracted from Area Graphs converted from the TS. Deep learning methods have proven useful for automatically learning features from data, so, to imitate the human brain, we use long short-term memory with an attention mechanism (LSTM-A) to learn sequence features and a convolutional neural network with an attention mechanism (CNN-A) to learn visualization features. In addition, we use the simplest visualization method, the Area Graph, for visualization feature extraction, avoiding loss of information and additional computational cost. This article aims to show that using deep neural networks to learn features from different aspects and then fusing them can replace complex, manually constructed features and remove the bias introduced by hand-designed features, thereby avoiding the limitations of domain knowledge. Experiments on several open data sets show that the framework achieves promising results compared with other methods.
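The visualization branch described in the abstract rasterizes each series into an Area Graph image before feeding it to the CNN-A. A minimal sketch of that conversion is shown below; the exact rasterization choices (image height, min-max normalization, binary fill) are assumptions for illustration, since the abstract only states that Area Graphs are used:

```python
def series_to_area_graph(ts, height=32):
    """Rasterize a 1-D time series into a binary area-graph image.

    Each column is one time step; pixels from the bottom row up to the
    min-max-normalized value are set to 1, mimicking the filled region
    under an area graph. Returns a height x len(ts) nested list.
    """
    lo, hi = min(ts), max(ts)
    span = hi - lo
    img = [[0] * len(ts) for _ in range(height)]
    for t, v in enumerate(ts):
        norm = (v - lo) / span if span else 0.0
        level = round(norm * (height - 1))          # pixel rows to fill
        for row in range(height - 1 - level, height):
            img[row][t] = 1                          # fill area under curve
    return img
```

Because the mapping is a direct fill under the curve, the image preserves the shape of the series without the extra computation that encodings such as Gramian Angular Fields require, which matches the abstract's stated motivation for choosing Area Graphs.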
Pages: 25
Related Papers
50 records in total
  • [1] Du, Mingsen; Wei, Yanxuan; Hu, Yupeng; Zheng, Xiangwei; Ji, Cun. Multivariate time series classification based on fusion features. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 248.
  • [2] Jin, Xue-Bo; Yang, Aiqiang; Su, Tingli; Kong, Jian-Lei; Bai, Yuting. Multi-Channel Fusion Classification Method Based on Time-Series Data. SENSORS, 2021, 21 (13).
  • [3] Mokoma, Vhuhwavho; Singh, Avinash. RanViz: Ransomware Visualization and Classification Based on Time-Series Categorical Representation of API Calls. IEEE ACCESS, 2025, 13: 56237-56254.
  • [4] Tang, Peiwang; Zhang, Xianchao. Features Fusion Framework for Multimodal Irregular Time-series Events. PRICAI 2022: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2022, 13629: 366-379.
  • [5] Zeng, Z; Yan, H; Fu, AMN. Time-series prediction based on pattern classification. ARTIFICIAL INTELLIGENCE IN ENGINEERING, 2001, 15 (01): 61-69.
  • [6] Passalis, Nikolaos; Tsantekidis, Avraam; Tefas, Anastasios; Kanniainen, Juho; Gabbouj, Moncef; Iosifidis, Alexandros. Time-series Classification Using Neural Bag-of-Features. 2017 25TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2017: 301-305.
  • [7] Li, Hailin; Yang, Libin. Time series visualization based on shape features. KNOWLEDGE-BASED SYSTEMS, 2013, 41: 43-53.
  • [8] Zhang, Fan; Guo, Tiantian; Wang, Hua. DFNet: Decomposition fusion model for long sequence time-series forecasting. KNOWLEDGE-BASED SYSTEMS, 2023, 277.
  • [10] Konishi, Atsuro; Hosobe, Hiroshi. Time-series Visualization of Twitter Trends. IVAPP: PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS, VOL 3: IVAPP, 2020: 201-208.