End-to-end Symbolic Regression with Transformers

Cited by: 0
Authors
Kamienny, Pierre-Alexandre [1,2]
d'Ascoli, Stephane [1,3]
Lample, Guillaume [1]
Charton, Francois [1]
Affiliations
[1] Meta AI, New York, NY 10003 USA
[2] Sorbonne Univ, ISIR MLIA, Paris, France
[3] Ecole Normale Super, Dept Phys, Paris, France
Keywords
DOI
Not available
CLC Number
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Symbolic regression, the task of predicting the mathematical expression of a function from observations of its values, is difficult and usually involves a two-step procedure: predicting the "skeleton" of the expression up to the choice of numerical constants, then fitting the constants by optimizing a non-convex loss function. The dominant approach is genetic programming, which evolves candidates by iterating this subroutine a large number of times. Neural networks have recently been tasked to predict the correct skeleton in a single try, but remain much less powerful. In this paper, we challenge this two-step procedure, and task a Transformer to directly predict the full mathematical expression, constants included. One can subsequently refine the predicted constants by feeding them to the non-convex optimizer as an informed initialization. We present ablations to show that this end-to-end approach yields better results, sometimes even without the refinement step. We evaluate our model on problems from the SRBench benchmark and show that our model approaches the performance of state-of-the-art genetic programming with several orders of magnitude faster inference.
Pages: 13
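The following is a minimal sketch, not the authors' code, of the refinement step the abstract describes: constants predicted end-to-end by the model serve as an informed initialization for a non-convex optimizer that fits them to the observed data. The expression skeleton, the predicted constants, the toy data, and the choice of scipy's BFGS optimizer are all illustrative assumptions.

```python
# Sketch of constant refinement with an informed initialization (illustrative only).
import numpy as np
from scipy.optimize import minimize

# Hypothetical predicted expression: f(x) = c0 * sin(c1 * x) + c2,
# with constants as predicted end-to-end by the model (assumed values).
predicted_constants = np.array([1.1, 2.9, 0.4])

def expression(constants, x):
    c0, c1, c2 = constants
    return c0 * np.sin(c1 * x) + c2

def mse(constants, x, y):
    # Non-convex loss over the constants for a fixed skeleton.
    return np.mean((expression(constants, x) - y) ** 2)

# Toy observations of a target function f(x) = 1.0 * sin(3.0 * x) + 0.5.
rng = np.random.default_rng(0)
x_obs = rng.uniform(-3, 3, size=200)
y_obs = 1.0 * np.sin(3.0 * x_obs) + 0.5

# Refinement: starting from the model's predicted constants rather than a
# random guess makes the non-convex fit far less likely to stall in a bad basin.
result = minimize(mse, predicted_constants, args=(x_obs, y_obs), method="BFGS")
print("refined constants:", result.x)
```

As the abstract notes, this refinement is optional: the end-to-end prediction alone is sometimes already competitive.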
Related Papers
50 records in total
  • [1] CaMo: Capturing the modularity by end-to-end models for Symbolic Regression
    Liu, Jingyi
    Wu, Min
    Yu, Lina
    Li, Weijun
    Li, Wenqiang
    Li, Yanjie
    Hao, Meilan
    Deng, Yusong
    Wei, Shu
    KNOWLEDGE-BASED SYSTEMS, 2025, 309
  • [2] SymFormer: End-to-End Symbolic Regression Using Transformer-Based Architecture
    Vastl, Martin
    Kulhanek, Jonas
    Kubalik, Jiri
    Derner, Erik
    Babuska, Robert
    IEEE ACCESS, 2024, 12 : 37840 - 37849
  • [3] TransVG: End-to-End Visual Grounding with Transformers
    Deng, Jiajun
    Yang, Zhengyuan
    Chen, Tianlang
    Zhou, Wengang
    Li, Houqiang
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 1749 - 1759
  • [4] End-to-end Lane Shape Prediction with Transformers
    Liu, Ruijin
    Yuan, Zejian
    Liu, Tie
    Xiong, Zhiliang
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WACV 2021, 2021, : 3693 - 3701
  • [5] SYNCHRONOUS TRANSFORMERS FOR END-TO-END SPEECH RECOGNITION
    Tian, Zhengkun
    Yi, Jiangyan
    Bai, Ye
    Tao, Jianhua
    Zhang, Shuai
    Wen, Zhengqi
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7884 - 7888
  • [6] End-to-End Video Instance Segmentation with Transformers
    Wang, Yuqing
    Xu, Zhaoliang
    Wang, Xinlong
    Shen, Chunhua
    Cheng, Baoshan
    Shen, Hao
    Xia, Huaxia
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 8737 - 8746
  • [7] Cascade Transformers for End-to-End Person Search
    Yu, Rui
    Du, Dawei
    LaLonde, Rodney
    Davila, Daniel
    Funk, Christopher
    Hoogs, Anthony
    Clipp, Brian
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 7257 - 7266
  • [8] End-to-End Human Pose and Mesh Reconstruction with Transformers
    Lin, Kevin
    Wang, Lijuan
    Liu, Zicheng
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 1954 - 1963
  • [9] Chasing Sparsity in Vision Transformers: An End-to-End Exploration
    Chen, Tianlong
    Cheng, Yu
    Gan, Zhe
    Yuan, Lu
    Zhang, Lei
    Wang, Zhangyang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [10] End-to-End diagnosis of breast biopsy images with transformers
    Mehta, Sachin
    Lu, Ximing
    Wu, Wenjun
    Weaver, Donald
    Hajishirzi, Hannaneh
    Elmore, Joann G.
    Shapiro, Linda G.
    MEDICAL IMAGE ANALYSIS, 2022, 79