Two-stage stacked autoencoder monitoring model based on deep slow feature representation for dynamic processes

Times Cited: 0
Authors
Li, Qing [1 ]
Wan, Jiaqi [1 ]
Yang, Xu [1 ]
Huang, Jian [1 ]
Cui, Jiarui [1 ]
Yan, Qun [1 ]
Affiliations
[1] Univ Sci & Technol Beijing, Sch Automat & Elect Engn, Key Lab Knowledge Automat Ind Proc, Minist Educ, Beijing 100083, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Process monitoring; Deep slow feature representation; Two-stage stacked autoencoder; Vinyl acetate monomer process; FAULT-DETECTION; NETWORK;
DOI
10.1016/j.jprocont.2025.103389
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The slow feature analysis (SFA) method is a robust technique for dynamic process monitoring, capable of extracting slowly varying features that reveal process dynamics. A significant challenge in SFA-based monitoring is the nonlinear relationships within process data. This paper therefore introduces a slow-feature-constrained two-stage stacked autoencoder algorithm for dynamic process analysis. In the first stage, autoencoder (AE) units produce decorrelated and normalized signals through nonlinear expansion, with the loss term enforcing these properties. In the second stage, AE units extract deep slow feature representations under constraints on the temporal variation of the features. By fusing the principles of SFA with the representational depth of the stacked autoencoder (SAE), the algorithm not only captures nonlinear relationships but also preserves crucial temporal dependencies within the data, thereby providing more accurate insights for process monitoring. The proposed algorithm is validated on the vinyl acetate monomer process.
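The two stage-specific constraints described in the abstract — decorrelated, normalized features in the first stage and slowly varying deep features in the second — can be sketched as penalty terms added to a reconstruction loss. The following NumPy sketch is illustrative only, not the authors' implementation; the function names, the exact penalty forms, and the weighting are assumptions:

```python
import numpy as np

def decorrelation_loss(z):
    """Stage-1 penalty (assumed form): drive the feature covariance
    toward the identity, i.e. zero-mean, unit-variance, uncorrelated."""
    zc = z - z.mean(axis=0)
    cov = zc.T @ zc / (len(zc) - 1)
    return float(np.sum((cov - np.eye(z.shape[1])) ** 2))

def slowness_loss(h):
    """Stage-2 penalty (assumed form): SFA-style slowness, the mean
    squared first temporal difference of each deep feature."""
    dh = np.diff(h, axis=0)
    return float(np.mean(dh ** 2))

def stage_loss(x, x_rec, features, stage, weight=1.0):
    """Reconstruction error plus the stage-specific constraint."""
    rec = float(np.mean((x - x_rec) ** 2))
    penalty = decorrelation_loss(features) if stage == 1 else slowness_loss(features)
    return rec + weight * penalty
```

Under these assumed forms, a constant feature sequence incurs zero slowness penalty, while whitened features incur a near-zero decorrelation penalty, matching the properties each stage is meant to enforce.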
Pages: 9
Related Papers
50 records
  • [1] An intelligent grinding burn detection system based on two-stage feature selection and stacked sparse autoencoder
    Weicheng Guo
    Beizhi Li
    Shouguo Shen
    Qinzhi Zhou
    The International Journal of Advanced Manufacturing Technology, 2019, 103 : 2837 - 2847
  • [2] An intelligent grinding burn detection system based on two-stage feature selection and stacked sparse autoencoder
    Guo, Weicheng
    Li, Beizhi
    Shen, Shouguo
    Zhou, Qinzhi
    INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2019, 103 (5-8): : 2837 - 2847
  • [3] A two-stage deep learning model based on feature combination effects
    Teng, Xuyang
    Zhang, Yunxiao
    He, Meilin
    Han, Meng
    Liu, Erxiao
    NEUROCOMPUTING, 2022, 512 : 307 - 322
  • [4] Multifractal analysis and stacked autoencoder-based feature learning method for multivariate processes monitoring
    Yu, Feng
    Liu, Jianchang
    Shang, Liangliang
    Liu, Dongming
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 4185 - 4190
  • [5] Scene Classification Based on Two-Stage Deep Feature Fusion
    Liu, Yishu
    Liu, Yingbin
    Ding, Liwang
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2018, 15 (02) : 183 - 186
  • [6] Stacked maximal quality-driven autoencoder: Deep feature representation for soft analyzer and its application on industrial processes
    Chen, Junming
    Fan, Shaosheng
    Yang, Chunhua
    Zhou, Can
    Zhu, Hongqiu
    Li, Yonggang
    INFORMATION SCIENCES, 2022, 596 : 280 - 303
  • [7] Dynamic model reduction for two-stage anaerobic digestion processes
    Duan, Zhaoyang
    Bournazou, Mariano Nicolas Cruz
    Kravaris, Costas
    CHEMICAL ENGINEERING JOURNAL, 2017, 327 : 1102 - 1116
  • [8] Two-stage multi-dimensional convolutional stacked autoencoder network model for hyperspectral images classification
    Yang Bai
    Xiyan Sun
    Yuanfa Ji
    Wentao Fu
    Jinli Zhang
    Multimedia Tools and Applications, 2024, 83 : 23489 - 23508
  • [9] Two-stage multi-dimensional convolutional stacked autoencoder network model for hyperspectral images classification
    Bai, Yang
    Sun, Xiyan
    Ji, Yuanfa
    Fu, Wentao
    Zhang, Jinli
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (8) : 23489 - 23508
  • [10] Deep Residual Learning-based Reconstruction of Stacked Autoencoder Representation
    Li, Honggui
    Trocan, Maria
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON ELECTRONICS, CIRCUITS AND SYSTEMS (ICECS), 2018, : 655 - 656