Learning Relevant Questions for Conversational Product Search using Deep Reinforcement Learning

Cited by: 2
Authors
Montazeralghaem, Ali [1 ]
Allan, James [1 ]
Institutions
[1] Univ Massachusetts, Ctr Intelligent Informat Retrieval, Coll Informat & Comp Sci, Amherst, MA 01003 USA
Source
WSDM'22: PROCEEDINGS OF THE FIFTEENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING | 2022
Keywords
Conversational Product Search; Reinforcement Learning; Relevant Questions; Intelligent Assistants; GAME; GO;
DOI
10.1145/3488560.3498526
CLC Classification Number
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose RelQuest, a conversational product search model based on reinforcement learning that generates questions from product descriptions in each round of the conversation, directly maximizing any desired metric (i.e., the ultimate goal of the conversation), objective, or even an arbitrary user satisfaction signal. By enabling systems to ask questions about user needs, conversational product search has gained increasing attention in recent years. Asking the right questions through conversation helps the system collect valuable feedback to create better user experiences and ultimately increase sales. In contrast, existing conversational product search methods rely on the assumption that an effective set of pre-defined candidate questions is available for each product. Moreover, they make strong assumptions to estimate the value of questions in each round of the conversation. Estimating the true value of questions in each round is not trivial, since it is unknown. Experiments on real-world user purchasing data show the effectiveness of RelQuest at generating questions that maximize standard evaluation measures such as NDCG.
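The core idea in the abstract, using reinforcement learning to pick questions so as to directly maximize a session-level metric such as NDCG, can be illustrated with a minimal REINFORCE sketch. This is not the paper's actual architecture (RelQuest generates questions from product descriptions); here the action space, attribute names, and reward values are all hypothetical stand-ins, with the reward playing the role of the observed metric after the user's answer refines the ranking.

```python
import math
import random

# Hypothetical toy setup: the agent chooses which product attribute to ask
# about; the reward stands in for a session-level metric (e.g. NDCG) observed
# after the user's answer refines the product ranking. Values are invented.
ATTRIBUTES = ["brand", "color", "size", "price"]
TRUE_REWARD = {"brand": 0.2, "color": 0.9, "size": 0.4, "price": 0.1}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def train(episodes=2000, lr=0.5, seed=0):
    """REINFORCE over a categorical policy: one logit per candidate question."""
    rng = random.Random(seed)
    logits = [0.0] * len(ATTRIBUTES)
    for _ in range(episodes):
        probs = softmax(logits)
        a = rng.choices(range(len(ATTRIBUTES)), weights=probs)[0]
        r = TRUE_REWARD[ATTRIBUTES[a]]  # observed metric for this episode
        # Policy-gradient step: logits move along r * grad(log pi(a)).
        # (No baseline, for brevity; relative updates still favor high reward.)
        for i in range(len(logits)):
            grad = (1.0 if i == a else 0.0) - probs[i]
            logits[i] += lr * r * grad
    return softmax(logits)

probs = train()
best = ATTRIBUTES[probs.index(max(probs))]
```

Because the metric enters only as a scalar reward, the same loop works for any objective the system can observe at the end of a round, which is the property the abstract emphasizes.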
Pages: 746 - 754
Page count: 9
Related Papers
50 records
  • [21] Computationally Efficient DNN Mapping Search Heuristic using Deep Reinforcement Learning
    Bakshi, Suyash
    Johnsson, Lennart
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2023, 22 (05)
  • [22] The Advance of Reinforcement Learning and Deep Reinforcement Learning
    Lyu, Le
    Shen, Yang
    Zhang, Sicheng
    2022 IEEE INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING, BIG DATA AND ALGORITHMS (EEBDA), 2022, : 644 - 648
  • [23] Focus On Scene Text Using Deep Reinforcement Learning
    Wang, Haobin
    Huang, Shuangping
    Jin, Lianwen
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 3759 - 3765
  • [24] Molecular Autonomous Pathfinder Using Deep Reinforcement Learning
    Nomura, Ken-ichi
    Mishra, Ankit
    Sang, Tian
    Kalia, Rajiv K.
    Nakano, Aiichiro
    Vashishta, Priya
    JOURNAL OF PHYSICAL CHEMISTRY LETTERS, 2024, 15 (19) : 5288 - 5294
  • [25] Deep Reinforcement Learning Using Optimized Monte Carlo Tree Search in EWN
    Zhang, Yixian
    Li, Zhuoxuan
    Cao, Yiding
    Zhao, Xuan
    Cao, Jinde
    IEEE TRANSACTIONS ON GAMES, 2024, 16 (03) : 544 - 555
  • [26] Deep multiagent reinforcement learning: challenges and directions
    Wong, Annie
    Back, Thomas
    Kononova, Anna V.
    Plaat, Aske
    ARTIFICIAL INTELLIGENCE REVIEW, 2023, 56 (06) : 5023 - 5056
  • [27] Deep Reinforcement Learning for Autonomous Driving: A Survey
    Kiran, B. Ravi
    Sobh, Ibrahim
    Talpaert, Victor
    Mannion, Patrick
    Al Sallab, Ahmad A.
    Yogamani, Senthil
    Perez, Patrick
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (06) : 4909 - 4926
  • [28] Coevolutionary Deep Reinforcement Learning
    Cotton, David
    Traish, Jason
    Chaczko, Zenon
    2020 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2020, : 2600 - 2607
  • [29] Deep Reinforcement Learning for Mobile Robot Navigation
    Gromniak, Martin
    Stenzel, Jonas
    2019 4TH ASIA-PACIFIC CONFERENCE ON INTELLIGENT ROBOT SYSTEMS (ACIRS 2019), 2019, : 68 - 73
  • [30] Opportunistic maintenance scheduling with deep reinforcement learning
    Valet, Alexander
    Altenmueller, Thomas
    Waschneck, Bernd
    May, Marvin Carl
    Kuhnle, Andreas
    Lanza, Gisela
    JOURNAL OF MANUFACTURING SYSTEMS, 2022, 64 : 518 - 534