KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-Based Implicit Neural Representation

Cited: 0
Authors
Han, Jun [1 ,2 ]
Zheng, Hao [3 ,4 ]
Bi, Chongke [5 ]
Affiliations
[1] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Peoples R China
[2] Hong Kong Univ Sci & Technol, Hong Kong 999077, Peoples R China
[3] Univ Notre Dame, Notre Dame, IN 46556 USA
[4] Univ Louisiana Lafayette, Sch Comp & Informat, Lafayette, LA 70504 USA
[5] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Time-varying data compression; implicit neural representation; knowledge distillation; volume visualization; MULTILEVEL TECHNIQUES; SUPERRESOLUTION; REDUCTION;
DOI
10.1109/TVCG.2945
CLC Number
TP31 [Computer Software];
Discipline Code
081202; 0835;
Abstract
Traditional deep learning algorithms assume that all data are available during training, which poses a challenge when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed with an implicit neural representation that uses bottleneck layers and feature-of-interest-preserving sampling. In the second stage, an offline knowledge distillation algorithm extracts knowledge from the trained per-time-step models and aggregates it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at compression ratios ranging from hundreds to ten thousand.
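The two-stage pipeline described in the abstract can be illustrated at toy scale. The sketch below is an assumption-laden stand-in, not the authors' implementation: the per-time-step "teacher" INRs are replaced by closed-form functions, the "student" is a random-sinusoidal-feature model with a linear readout, and the offline distillation step simply fits the student on coordinate-value pairs queried from the teachers. All names and model choices here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 stand-ins: one "teacher" per time step, each mapping a spatial
# coordinate (x, y, z) to a scalar value. In KD-INR these would be trained
# per-time-step implicit neural representations; here they are analytic.
freq = np.array([0.5, 0.7, 0.3])
teachers = [lambda p, t=t: np.sin(p @ freq + 0.3 * t) for t in range(4)]

# Stage 2: offline knowledge distillation. Sample coordinates, query every
# teacher, and fit a single student on (coord, time) -> teacher-value pairs.
coords = rng.uniform(-1.0, 1.0, size=(2048, 3))
X_parts, y_parts = [], []
for t, teacher in enumerate(teachers):
    feats = np.hstack([coords, np.full((len(coords), 1), float(t))])
    X_parts.append(feats)
    y_parts.append(teacher(coords))
X = np.vstack(X_parts)          # (8192, 4): x, y, z, t
y = np.concatenate(y_parts)     # (8192,)

# Hypothetical student: 64 random sinusoidal features over (x, y, z, t)
# plus a linear readout solved in closed form by least squares.
W = rng.normal(size=(4, 64))
Phi = np.sin(X @ W)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

def student(p, t):
    """Single aggregated model covering all time steps."""
    feats = np.hstack([p, np.full((len(p), 1), float(t))])
    return np.sin(feats @ W) @ w

# The single student should now approximate every per-time-step teacher.
test_p = rng.uniform(-1.0, 1.0, size=(256, 3))
err = max(np.mean((student(test_p, t) - teachers[t](test_p)) ** 2)
          for t in range(len(teachers)))
print(f"worst per-time-step MSE: {err:.4f}")
```

The design point the toy mirrors is that distillation never revisits the raw volumes: once a time step's teacher is trained, only the (compact) teacher is needed, so the full time-varying sequence never has to be in memory at once.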
Pages: 6826-6838
Page count: 13
Related Papers
6 records
  • [1] Tang, Kaiyuan; Wang, Chaoli. STSR-INR: Spatiotemporal super-resolution for multivariate time-varying volumetric data via implicit neural representation. Computers & Graphics-UK, 2024, 119.
  • [2] Yang, Yumeng; Jiao, Chenyue; Gao, Xin; Tian, Xiaoxian; Bi, Chongke. Adaptive Volumetric Data Compression Based on Implicit Neural Representation. 17th International Symposium on Visual Information Communication and Interaction (VINCI 2024), 2024.
  • [3] Tang, Kaiyuan; Wang, Chaoli. ECNR: Efficient Compressive Neural Representation of Time-Varying Volumetric Datasets. 2024 IEEE 17th Pacific Visualization Conference (PacificVis), 2024: 72-81.
  • [4] Han, Jun; Wang, Chaoli. CoordNet: Data Generation and Visualization Generation for Time-Varying Volumes via a Coordinate-Based Neural Network. IEEE Transactions on Visualization and Computer Graphics, 2023, 29(12): 4951-4963.
  • [5] Hussien, Mostafa; Xu, Yi Tian; Wu, Di; Liu, Xue; Dudek, Gregory. Efficient Neural Data Compression for Machine Type Communications via Knowledge Distillation. 2022 IEEE Global Communications Conference (GLOBECOM 2022), 2022: 1169-1174.
  • [6] Ko, Chia-Lin; Liao, Horng-Shyang; Wang, Tsai-Pei; Fu, Kuang-Wei; Lin, Ching-Yao; Chuang, Jung-Hong. Multi-resolution volume rendering of large time-varying data using video-based compression. IEEE Pacific Visualization Symposium 2008, Proceedings, 2008: 135+.