VirtuWander: Enhancing Multi-modal Interaction for Virtual Tour Guidance through Large Language Models

Cited by: 6
Authors
Wang, Zhan [1]
Yuan, Lin-Ping [2]
Wang, Liangwei [1]
Jiang, Bingchuan [3]
Zeng, Wei [1,2]
Affiliations
[1] Hong Kong Univ Sci & Technol Guangzhou, Guangzhou, Peoples R China
[2] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[3] Strateg Support Force Informat Engn Univ, Zhengzhou, Peoples R China
Source
PROCEEDINGS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024 | 2024
Funding
National Natural Science Foundation of China
Keywords
multi-modal feedback; large language models; virtual museum;
DOI
10.1145/3613904.3642235
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Tour guidance in virtual museums encourages multi-modal interactions to boost user experiences concerning engagement, immersion, and spatial awareness. Nevertheless, achieving this goal is challenging due to the complexity of comprehending diverse user needs and accommodating personalized user preferences. Informed by a formative study that characterizes guidance-seeking contexts, we establish a multi-modal interaction design framework for virtual tour guidance. We then design VirtuWander, an innovative two-stage system that uses domain-oriented large language models to transform user inquiries into diverse guidance-seeking contexts and facilitate multi-modal interactions. The feasibility and versatility of VirtuWander are demonstrated with virtual guiding examples that encompass various touring scenarios and cater to personalized preferences. We further evaluate VirtuWander through a user study within an immersive simulated museum. The results suggest that our system enhances engaging virtual tour experiences through personalized communication and knowledgeable assistance, indicating its potential for expanding into real-world scenarios.
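To make the two-stage design described in the abstract concrete, the following is a minimal, hypothetical Python sketch, not the authors' implementation: stage one asks an LLM to map a free-form visitor inquiry to a guidance-seeking context, and stage two maps that context plus an LLM-generated narration to multi-modal feedback actions for a VR client. The context labels, prompts, helper names (classify_inquiry, plan_feedback), and the generic llm callable are all illustrative assumptions.

```python
# Hypothetical two-stage pipeline in the spirit of the paper:
# (1) an LLM classifies a visitor inquiry into a guidance-seeking context,
# (2) the context is turned into multi-modal feedback (narration + cues).
# Context names, prompts, and the llm() wrapper are assumptions, not the
# authors' actual system.
from dataclasses import dataclass, field

CONTEXTS = ["exhibit_information", "navigation", "recommendation", "interaction_help"]

@dataclass
class FeedbackPlan:
    narration: str                              # spoken/text reply for the visitor
    visual_cues: list = field(default_factory=list)   # e.g., highlight exhibit, draw path
    audio_cues: list = field(default_factory=list)    # e.g., voice-over, ambient sound

def classify_inquiry(llm, inquiry: str) -> str:
    """Stage 1: ask a domain-oriented LLM which guidance-seeking context fits."""
    prompt = (
        "You are a virtual museum tour guide. Classify the visitor inquiry into "
        f"exactly one of {CONTEXTS}.\nInquiry: {inquiry}\nContext:"
    )
    label = llm(prompt).strip()
    return label if label in CONTEXTS else "exhibit_information"  # safe fallback

def plan_feedback(llm, context: str, inquiry: str) -> FeedbackPlan:
    """Stage 2: turn the context into multi-modal feedback for the VR front end."""
    narration = llm(
        "As a museum guide, answer the visitor in 2-3 sentences.\n"
        f"Context: {context}\nInquiry: {inquiry}"
    )
    if context == "navigation":
        return FeedbackPlan(narration, ["draw_path_to_target"], ["footstep_cue"])
    if context == "recommendation":
        return FeedbackPlan(narration, ["highlight_recommended_exhibits"], [])
    return FeedbackPlan(narration, ["highlight_current_exhibit"], ["play_narration"])
```

In this sketch, a VR front end would execute the returned visual and audio cues while rendering the narration as speech or subtitles; llm stands in for any chat-completion call to a domain-adapted model.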
Pages: 20