When Search Engine Services Meet Large Language Models: Visions and Challenges

Cited by: 12
Authors
Xiong, Haoyi [1 ]
Bian, Jiang [1 ]
Li, Yuchen [1 ]
Li, Xuhong [1 ]
Du, Mengnan [2 ]
Wang, Shuaiqiang [1 ]
Yin, Dawei [1 ]
Helal, Sumi [3 ]
Affiliations
[1] Baidu Inc, Beijing, Peoples R China
[2] New Jersey Inst Technol, Newark, NJ USA
[3] Univ Bologna, Bologna, Italy
Keywords
Search engines; Accuracy; Training; Service computing; Indexing; Chatbots; Transformers; Large language models (LLMs); learning-to-rank (LTR); retrieval-augmented generation (RAG)
DOI
10.1109/TSC.2024.3451185
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Combining Large Language Models (LLMs) with search engine services marks a significant shift in the field of services computing, opening up new possibilities to enhance how we search for and retrieve information, understand content, and interact with internet services. This paper conducts an in-depth examination of how integrating LLMs with search engines can mutually benefit both technologies. We focus on two main areas: using search engines to improve LLMs (Search4LLM) and enhancing search engine functions using LLMs (LLM4Search). For Search4LLM, we investigate how search engines can provide diverse high-quality datasets for pre-training of LLMs, how they can use the most relevant documents to help LLMs learn to answer queries more accurately, how training LLMs with Learning-To-Rank (LTR) tasks can enhance their ability to respond with greater precision, and how incorporating recent search results can make LLM-generated content more accurate and current. In terms of LLM4Search, we examine how LLMs can be used to summarize content for better indexing by search engines, improve query outcomes through optimization, enhance the ranking of search results by analyzing document relevance, and help in annotating data for learning-to-rank tasks in various learning contexts. However, this promising integration comes with its challenges, which include addressing potential biases and ethical issues in training models, managing the computational and other costs of incorporating LLMs into search services, and continuously updating LLM training with the ever-changing web content. We discuss these challenges and chart out required research directions to address them. We also discuss broader implications for service computing, such as scalability, privacy concerns, and the need to adapt search engine architectures for these advanced models.
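The abstract's Search4LLM point about "incorporating recent search results" into generation is the retrieval-augmented generation (RAG) pattern: retrieve the documents most relevant to a query, then condition the model's answer on them. A minimal sketch of that pipeline is below; the toy corpus, the term-overlap scoring, and the prompt format are illustrative assumptions, not details taken from the paper, and a real system would use a search engine API and a learned ranker in place of both.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by naive term overlap with the query (stand-in for a search engine)."""
    q_terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, docs):
    """Prepend retrieved documents as context so the LLM can ground its answer."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Learning-to-rank trains models to order documents by relevance.",
    "Search engines index web pages for fast retrieval.",
    "LLMs generate text conditioned on a prompt.",
]

query = "how do search engines index pages"
docs = retrieve(query, corpus)
prompt = build_prompt(query, docs)  # this prompt would be sent to an LLM
```

The augmented prompt, rather than the bare query, is what gets sent to the model, which is how fresh web content can correct or update what the LLM learned at training time.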
Pages: 4558-4577 (20 pages)