PromptCast: A New Prompt-Based Learning Paradigm for Time Series Forecasting

Cited by: 14
Authors
Xue, Hao [1 ]
Salim, Flora D. [1 ]
Affiliations
[1] Univ New South Wales, Sch Comp Sci & Engn, Sydney, NSW 2052, Australia
Keywords
Forecasting; Predictive models; Task analysis; Time series analysis; Numerical models; Natural languages; Benchmark testing; Time series forecasting; natural language generation; dataset and benchmark
DOI
10.1109/TKDE.2023.3342137
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper presents a new perspective on time series forecasting. In existing time series forecasting methods, models take a sequence of numerical values as input and yield numerical values as output. The existing SOTA models are largely based on the Transformer architecture, modified with multiple encoding mechanisms to incorporate the context and semantics around the historical data. Inspired by the successes of pre-trained language foundation models, we ask whether these models can also be adapted to solve time series forecasting. Thus, we propose a new forecasting paradigm: prompt-based time series forecasting (PromptCast). In this novel task, the numerical input and output are transformed into prompts, and the forecasting task is framed in a sentence-to-sentence manner, making it possible to directly apply language models for forecasting. To support and facilitate research on this task, we also present a large-scale dataset (PISA) that includes three real-world forecasting scenarios. We evaluate different SOTA numerical-based forecasting methods and language generation models. The benchmark results across various forecasting settings demonstrate that the proposed PromptCast with language generation models is a promising research direction. Additionally, in comparison to conventional numerical-based forecasting, PromptCast shows a much better generalization ability under the zero-shot setting.
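To make the prompt-based framing concrete, the following Python sketch illustrates how a numerical history window might be verbalized into an input prompt and how the forecast target might be expressed as an output sentence for a sequence-to-sequence language model. The template wording, function names, and the temperature scenario are illustrative assumptions, not the exact prompts or fields used in PromptCast or the PISA dataset.

# Minimal sketch of prompt-based forecasting: numbers in, sentences out.
# All template text below is a hypothetical example, not the paper's template.
from typing import Sequence

def make_input_prompt(values: Sequence[float], unit: str = "degrees") -> str:
    """Convert a numerical history window into a natural-language input prompt."""
    history = ", ".join(f"{v:g}" for v in values)
    return (f"Over the last {len(values)} days the temperature was "
            f"{history} {unit} on each day. "
            f"What will the temperature be on the next day?")

def make_output_sentence(value: float, unit: str = "degrees") -> str:
    """Convert the numerical forecast target into the expected output sentence."""
    return f"The temperature will be {value:g} {unit}."

if __name__ == "__main__":
    window = [21.5, 22.0, 23.1, 22.8, 24.0]
    print(make_input_prompt(window))     # model input (a sentence)
    print(make_output_sentence(24.6))    # model target (a sentence)

A language model trained on such sentence pairs performs forecasting by generating the output sentence, from which the numerical prediction can be parsed back out.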
Pages: 6851-6864
Page count: 14