TS-HTFA: Advancing Time-Series Forecasting via Hierarchical Text-Free Alignment with Large Language Models

Cited by: 0
Authors
Wang, Pengfei [1 ]
Zheng, Huanran [1 ]
Xu, Qi'ao [1 ]
Dai, Silong [1 ]
Wang, Yiqiao [1 ]
Yue, Wenjing [1 ]
Zhu, Wei [1 ]
Qian, Tianwen [1 ]
Zhao, Liang [2 ]
Affiliations
[1] East China Normal Univ, Sch Comp Sci & Technol, Shanghai 200062, Peoples R China
[2] Inspur Cloud Informat Technol Co Ltd, Jinan 250101, Peoples R China
Source
SYMMETRY-BASEL | 2025, Vol. 17, Issue 03
Keywords
large language models (LLMs); time-series analysis; cross-modal alignment;
DOI
10.3390/sym17030401
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Given the significant potential of large language models (LLMs) in sequence modeling, emerging studies have begun applying them to time-series forecasting. Despite notable progress, existing methods still face two critical challenges: (1) their reliance on large amounts of paired text data, which limits model applicability, and (2) a substantial modality gap between text and time series, which leads to insufficient alignment and suboptimal performance. This paper introduces Hierarchical Text-Free Alignment (TS-HTFA), a novel method that leverages hierarchical alignment to fully exploit the representation capacity of LLMs for time-series analysis while eliminating the dependence on paired text data. Specifically, paired text data are replaced with adaptive virtual text derived from a QR decomposition of the LLM's word embeddings together with learnable prompts. Furthermore, comprehensive cross-modal alignment is established at three levels: input, feature, and output, enhancing semantic symmetry between the modalities. Extensive experiments on multiple time-series benchmarks demonstrate that TS-HTFA achieves state-of-the-art performance, significantly improving prediction accuracy and generalization.
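To make the virtual-text idea in the abstract concrete, the following is a minimal, hypothetical sketch, not the authors' released implementation: it QR-decomposes an LLM's word-embedding matrix to obtain an orthonormal basis of the text-embedding space, projects time-series patch features onto that basis to form "virtual text" tokens, and prepends learnable prompt vectors. All module names, shapes, and defaults here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VirtualTextGenerator(nn.Module):
    """Hypothetical sketch: build 'virtual text' tokens for time-series patches
    from a QR basis of an LLM's word-embedding matrix. Shapes, names, and
    hyperparameters are assumptions, not the paper's implementation."""

    def __init__(self, word_embeddings: torch.Tensor, n_basis: int = 128,
                 patch_dim: int = 16, n_prompts: int = 8):
        super().__init__()
        d_model = word_embeddings.shape[1]
        # QR-decompose the transposed (vocab x d_model) embedding matrix;
        # the columns of Q form an orthonormal basis of the embedding space.
        q, _ = torch.linalg.qr(word_embeddings.T)          # (d_model, d_model)
        self.register_buffer("basis", q[:, :n_basis])      # keep n_basis directions
        # Map each time-series patch to coefficients over that basis.
        self.patch_to_coeff = nn.Linear(patch_dim, n_basis)
        # Learnable prompt vectors prepended to every sequence.
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, n_patches, patch_dim)
        coeff = self.patch_to_coeff(patches)                # (B, P, n_basis)
        virtual_text = coeff @ self.basis.T                 # (B, P, d_model)
        prompts = self.prompts.unsqueeze(0).expand(patches.size(0), -1, -1)
        # Concatenate learnable prompts with virtual-text tokens for the LLM.
        return torch.cat([prompts, virtual_text], dim=1)


# Example usage with a toy embedding matrix standing in for the LLM vocabulary.
vocab_embeddings = torch.randn(32000, 768)
gen = VirtualTextGenerator(vocab_embeddings)
tokens = gen(torch.randn(4, 24, 16))   # -> shape (4, 8 + 24, 768)
```

In this reading, the QR basis constrains the generated virtual-text tokens to lie in the span of the frozen word embeddings, which is one plausible way to narrow the text/time-series modality gap without requiring paired text data.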
Pages: 23