QUERT: Continual Pre-training of Language Model for Query Understanding in Travel Domain Search
Cited by: 0
|
Authors:
Xie, Jian [1]
Liang, Yidan [2]
Liu, Jingping [3]
Xiao, Yanghua [1]
Wu, Baohua [2]
Ni, Shenghua [2]
Affiliations:
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Data Sci, Shanghai, Peoples R China
[2] Alibaba Grp, Hangzhou, Peoples R China
[3] East China Univ Sci & Technol, Sch Informat Sci & Engn, Shanghai, Peoples R China
Source:
PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023
|
2023
In light of the success of pre-trained language models (PLMs), continual pre-training of generic PLMs has become the standard paradigm for domain adaptation. In this paper, we propose QUERT, a continual pre-trained language model for QUERy understanding in Travel domain search. QUERT is jointly trained on four pre-training tasks tailored to the characteristics of queries in travel domain search: Geography-aware Mask Prediction, Geohash Code Prediction, User Click Behavior Learning, and Phrase and Token Order Prediction. Performance improvements on downstream tasks and ablation experiments demonstrate the effectiveness of the proposed pre-training tasks. Specifically, the average performance on downstream tasks increases by 2.02% and 30.93% in the supervised and unsupervised settings, respectively. To assess QUERT's impact on online business, we deploy QUERT and perform A/B testing on the Fliggy app. The feedback shows that using QUERT as the encoder increases the Unique Click-Through Rate and Page Click-Through Rate by 0.89% and 1.03%, respectively. Resources are available at https://github.com/hsaest/QUERT.
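The abstract names the four pre-training tasks but does not detail their procedures. Purely as an illustration of the general idea behind the first task, below is a minimal Python sketch of what a geography-aware masking objective could look like: location terms in a travel query are masked with higher probability than ordinary tokens, so the model must recover them from context. The token lists, probabilities, and the geography_aware_mask helper are assumptions for illustration, not the paper's implementation.

import random

MASK = "[MASK]"

def geography_aware_mask(tokens, location_terms,
                         geo_mask_prob=0.5, base_mask_prob=0.15):
    # Mask location tokens with higher probability than ordinary tokens
    # (illustrative assumption; the paper's exact scheme may differ).
    masked, labels = [], []
    for tok in tokens:
        p = geo_mask_prob if tok in location_terms else base_mask_prob
        if random.random() < p:
            masked.append(MASK)
            labels.append(tok)    # token to be predicted
        else:
            masked.append(tok)
            labels.append(None)   # not a prediction target
    return masked, labels

# Example travel-domain query with destination terms.
tokens = ["hotel", "near", "west", "lake", "hangzhou"]
locations = {"west", "lake", "hangzhou"}
print(geography_aware_mask(tokens, locations))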