A Dataset for Interactive Vision-Language Navigation with Unknown Command Feasibility

Cited by: 8
Authors
Burns, Andrea [1 ]
Arsan, Deniz [2 ]
Agrawal, Sanjna [1 ]
Kumar, Ranjitha [2 ]
Saenko, Kate [1 ,3 ]
Plummer, Bryan A. [1 ]
Affiliations
[1] Boston Univ, Boston, MA 02215 USA
[2] Univ Illinois, Champaign, IL 61820 USA
[3] MIT IBM Watson AI Lab, Cambridge, MA 02142 USA
Source
COMPUTER VISION, ECCV 2022, PT VIII | 2022, Vol. 13668
Keywords
Vision-language navigation; Task feasibility; Mobile apps;
DOI
10.1007/978-3-031-20074-8_18
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Vision-language navigation (VLN), in which an agent follows language instructions in a visual environment, has been studied under the premise that the input command is fully feasible in the environment. Yet in practice, a request may not be possible due to language ambiguity or environment changes. To study VLN with unknown command feasibility, we introduce a new dataset, Mobile app Tasks with Iterative Feedback (MoTIF), where the goal is to complete a natural language command in a mobile app. Mobile apps provide a scalable domain to study real downstream uses of VLN methods. Moreover, mobile app commands provide instructions for interactive navigation, as they result in action sequences with state changes via clicking, typing, or swiping. MoTIF is the first dataset to include feasibility annotations, containing both binary feasibility labels and fine-grained labels for why tasks are unsatisfiable. We further collect follow-up questions for ambiguous queries to enable research on task uncertainty resolution. Equipped with our dataset, we propose the new problem of feasibility prediction, in which a natural language instruction and a multimodal app environment are used to predict command feasibility. MoTIF provides a more realistic app dataset, as it contains many diverse environments, high-level goals, and longer action sequences than prior work. We evaluate interactive VLN methods on MoTIF, quantify the generalization ability of current approaches to new app environments, and measure the effect of task feasibility on navigation performance.
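To make the proposed feasibility-prediction problem concrete: it reduces to binary classification, where the inputs are a natural language command and the observed app environment, and the output is whether the command can be satisfied. The sketch below shows one such formulation in PyTorch. The bag-of-words encoders, dimensions, and fusion-by-concatenation are illustrative assumptions for this sketch, not the model evaluated in the paper.

import torch
import torch.nn as nn

class FeasibilityClassifier(nn.Module):
    """Toy feasibility predictor: command tokens + app-screen tokens -> logit.

    Assumption: both inputs are represented as token IDs over a shared
    vocabulary and pooled with a mean bag-of-words embedding; the paper's
    actual encoders and fusion are richer than this.
    """
    def __init__(self, vocab_size: int, embed_dim: int = 128):
        super().__init__()
        self.cmd_embed = nn.EmbeddingBag(vocab_size, embed_dim)  # command text
        self.ui_embed = nn.EmbeddingBag(vocab_size, embed_dim)   # app/view-hierarchy text
        self.classifier = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, 1),  # single feasibility logit
        )

    def forward(self, cmd_tokens: torch.Tensor, ui_tokens: torch.Tensor) -> torch.Tensor:
        # Pool each modality, concatenate, and classify.
        fused = torch.cat([self.cmd_embed(cmd_tokens), self.ui_embed(ui_tokens)], dim=-1)
        return self.classifier(fused).squeeze(-1)

# Usage on one toy batch with binary feasibility labels (1 = feasible).
model = FeasibilityClassifier(vocab_size=1000)
cmd = torch.randint(0, 1000, (4, 12))   # 4 commands, 12 tokens each
ui = torch.randint(0, 1000, (4, 64))    # text extracted from 4 app screens
labels = torch.tensor([1.0, 0.0, 1.0, 1.0])
loss = nn.BCEWithLogitsLoss()(model(cmd, ui), labels)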
Pages: 312-328
Page count: 17