Learning Relevant Questions for Conversational Product Search using Deep Reinforcement Learning

Cited by: 2
Authors
Montazeralghaem, Ali [1 ]
Allan, James [1 ]
Affiliations
[1] Univ Massachusetts, Ctr Intelligent Informat Retrieval, Coll Informat & Comp Sci, Amherst, MA 01003 USA
Source
WSDM'22: PROCEEDINGS OF THE FIFTEENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING | 2022
Keywords
Conversational Product Search; Reinforcement Learning; Relevant Questions; Intelligent Assistants; GAME; GO
DOI
10.1145/3488560.3498526
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose RelQuest, a conversational product search model based on reinforcement learning that generates questions from product descriptions in each round of the conversation, directly maximizing any desired metric (i.e., the ultimate goal of the conversation), objective, or even an arbitrary user-satisfaction signal. By enabling systems to ask questions about user needs, conversational product search has gained increasing attention in recent years. Asking the right questions during a conversation helps the system collect valuable feedback, create better user experiences, and ultimately increase sales. In contrast to RelQuest, existing conversational product search methods assume that an effective set of pre-defined candidate questions is available for each product. Moreover, they make strong assumptions in order to estimate the value of questions in each round of the conversation, which is not trivial because the true value is unknown. Experiments on real-world user purchase data show that RelQuest is effective at generating questions that maximize standard evaluation measures such as NDCG.
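The record contains no code, but the mechanism the abstract describes (learning which question to ask by treating a retrieval metric such as NDCG as the reinforcement-learning reward) can be illustrated with a small sketch. The following is a minimal, self-contained example assuming a REINFORCE-style policy gradient, a toy aspect-based question space, and a simulated user who answers truthfully; the product catalogue, function names, and simplifications are illustrative assumptions and do not reproduce the RelQuest architecture.

# Hypothetical sketch: a softmax policy picks which product aspect to ask
# about in each round; the reward is NDCG of the re-ranked product list.
import numpy as np

rng = np.random.default_rng(0)

# Toy catalogue: each product is the set of aspects in its description.
PRODUCTS = {
    "p1": {"wireless", "noise-cancelling", "over-ear"},
    "p2": {"wired", "over-ear", "studio"},
    "p3": {"wireless", "in-ear", "sport"},
    "p4": {"wired", "in-ear", "budget"},
}
ASPECTS = sorted({a for asp in PRODUCTS.values() for a in asp})
TARGET = "p3"  # the product the simulated user actually wants


def ndcg_at_k(ranking, target, k=3):
    # Binary relevance with a single relevant item, so ideal DCG is 1.
    for i, pid in enumerate(ranking[:k]):
        if pid == target:
            return 1.0 / np.log2(i + 2)
    return 0.0


def rank_products(liked, disliked):
    # Score products by agreement with the aspects confirmed so far.
    def score(aspects):
        return len(aspects & liked) - len(aspects & disliked)
    return sorted(PRODUCTS, key=lambda p: score(PRODUCTS[p]), reverse=True)


def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()


# Policy parameters: one preference weight per candidate question
# ("Are you looking for <aspect>?").
theta = np.zeros(len(ASPECTS))
lr = 0.5

for episode in range(300):
    liked, disliked = set(), set()
    grads, rewards = [], []
    for _ in range(2):  # two question rounds per conversation
        probs = softmax(theta)
        a = rng.choice(len(ASPECTS), p=probs)
        aspect = ASPECTS[a]
        # Simulated user answers truthfully with respect to the target.
        (liked if aspect in PRODUCTS[TARGET] else disliked).add(aspect)
        rewards.append(ndcg_at_k(rank_products(liked, disliked), TARGET))
        # REINFORCE gradient of log pi(a) for a linear softmax policy.
        grads.append(np.eye(len(ASPECTS))[a] - probs)
    # Monte-Carlo return for each step: sum of rewards from that step on.
    returns = np.cumsum(rewards[::-1])[::-1]
    for g, G in zip(grads, returns):
        theta += lr * G * g

print("Most preferred first question aspect:", ASPECTS[int(np.argmax(theta))])

In this simplified setting the policy learns to ask first about aspects that separate the target product from the rest of the catalogue, because those questions yield the largest NDCG reward; the actual paper generates free-form questions from product descriptions rather than selecting from a fixed aspect list.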
Pages: 746 - 754
Number of pages: 9
Related Papers
50 records in total
  • [1] Conversational Recommender System Using Deep Reinforcement Learning
    Sonie, Omprakash
    PROCEEDINGS OF THE 16TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2022, 2022, : 718 - 719
  • [2] Enhancing Conversational Model With Deep Reinforcement Learning and Adversarial Learning
    Tran, Quoc-Dai Luong
    Le, Anh-Cuong
    Huynh, Van-Nam
    IEEE ACCESS, 2023, 11 : 75955 - 75970
  • [3] Learning to Ask: Conversational Product Search via Representation Learning
    Zou, Jie
    Huang, Jimmy
    Ren, Zhaochun
    Kanoulas, Evangelos
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (02)
  • [4] Controlling the Risk of Conversational Search via Reinforcement Learning
    Wang, Zhenduo
    Ai, Qingyao
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 1968 - 1977
  • [5] From Reinforcement Learning to Deep Reinforcement Learning: An Overview
    Agostinelli, Forest
    Hocquet, Guillaume
    Singh, Sameer
    Baldi, Pierre
    BRAVERMAN READINGS IN MACHINE LEARNING: KEY IDEAS FROM INCEPTION TO CURRENT STATE, 2018, 11100 : 298 - 328
  • [6] Deep learning, reinforcement learning, and world models
    Matsuo, Yutaka
    LeCun, Yann
    Sahani, Maneesh
    Precup, Doina
    Silver, David
    Sugiyama, Masashi
    Uchibe, Eiji
    Morimoto, Jun
    NEURAL NETWORKS, 2022, 152 : 267 - 275
  • [7] Transfer Learning in Deep Reinforcement Learning: A Survey
    Zhu, Zhuangdi
    Lin, Kaixiang
    Jain, Anil K.
    Zhou, Jiayu
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (11) : 13344 - 13362
  • [8] Deep Reinforcement Learning in Medicine
    Jonsson, Anders
    KIDNEY DISEASES, 2019, 5 (01) : 18 - 22
  • [9] Bridge Bidding via Deep Reinforcement Learning and Belief Monte Carlo Search
    Qiu, Zizhang
    Wang, Shouguang
    You, Dan
    Zhou, MengChu
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2024, 11 (10) : 2111 - 2122
  • [10] Deep Reinforcement Learning for Autonomous Search and Rescue
    Zuluaga, Juan Gonzalo Carcamo
    Leidig, Jonathan P.
    Trefftz, Christian
    Wolffe, Greg
    NAECON 2018 - IEEE NATIONAL AEROSPACE AND ELECTRONICS CONFERENCE, 2018, : 521 - 524