TS-HTFA: Advancing Time-Series Forecasting via Hierarchical Text-Free Alignment with Large Language Models

Times Cited: 0
Authors
Wang, Pengfei [1 ]
Zheng, Huanran [1 ]
Xu, Qi'ao [1 ]
Dai, Silong [1 ]
Wang, Yiqiao [1 ]
Yue, Wenjing [1 ]
Zhu, Wei [1 ]
Qian, Tianwen [1 ]
Zhao, Liang [2 ]
Affiliations
[1] East China Normal Univ, Sch Comp Sci & Technol, Shanghai 200062, Peoples R China
[2] Inspur Cloud Informat Technol Co Ltd, Jinan 250101, Peoples R China
Source
SYMMETRY-BASEL | 2025, Vol. 17, Issue 03
Keywords
large language models (LLMs); time-series analysis; cross-modal alignment;
DOI
10.3390/sym17030401
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07 ; 0710 ; 09 ;
Abstract
Given the significant potential of large language models (LLMs) for sequence modeling, emerging studies have begun applying them to time-series forecasting. Despite notable progress, existing methods still face two critical challenges: (1) a reliance on large amounts of paired text data, which limits model applicability, and (2) a substantial modality gap between text and time series, which leads to insufficient alignment and suboptimal performance. This paper introduces Hierarchical Text-Free Alignment (TS-HTFA), a novel method that leverages hierarchical alignment to fully exploit the representation capacity of LLMs for time-series analysis while eliminating the dependence on text data. Specifically, paired text data are replaced with adaptive virtual text derived from the QR decomposition of word embeddings and learnable prompts. Furthermore, comprehensive cross-modal alignment is established at three levels: input, feature, and output, enhancing semantic symmetry between the modalities. Extensive experiments on multiple time-series benchmarks demonstrate that TS-HTFA achieves state-of-the-art performance, significantly improving prediction accuracy and generalization.
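The abstract's "adaptive virtual text based on QR decomposition of word embeddings" can be illustrated with a minimal sketch. This is one plausible reading of the idea, not the paper's implementation: QR-decomposing a subset of word embeddings yields an orthonormal basis for their span, and projecting a time-series patch embedding onto that span gives a "virtual text" representation that lives in word-embedding space. All shapes, names, and the random data below are illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: 32 selected word embeddings of dimension 64.
rng = np.random.default_rng(0)
word_embeddings = rng.standard_normal((32, 64))  # E: (num_words, dim)

# QR decomposition of E^T gives Q with orthonormal columns spanning
# the same subspace as the selected word embeddings.
Q, _ = np.linalg.qr(word_embeddings.T)  # Q: (64, 32)

# Project a time-series patch embedding (same dim) onto span(E):
# the result is a "virtual text" vector expressible in word-embedding space.
patch = rng.standard_normal(64)
virtual_text = Q @ (Q.T @ patch)
```

Because Q's columns are orthonormal, Q @ Q.T is an orthogonal projector, so applying it again leaves the virtual-text vector unchanged; in the paper this fixed basis would presumably be combined with learnable prompts rather than used alone.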
Pages: 23