Keyword Localisation in Untranscribed Speech Using Visually Grounded Speech Models

Cited by: 3
Authors
Olaleye, Kayode [1 ]
Oneata, Dan [2 ]
Kamper, Herman [1 ]
Affiliations
[1] Stellenbosch Univ, ZA-26697 Stellenbosch, South Africa
[2] Univ Politehn Bucuresti, RO-060042 Bucharest, Romania
Funding
National Research Foundation, Singapore;
Keywords
Visually grounded speech models; keyword localisation; keyword spotting; self-supervised learning; SPOKEN LANGUAGE; ATTENTION; DATASETS;
DOI
10.1109/JSTSP.2022.3180220
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Keyword localisation is the task of finding where in a speech utterance a given query keyword occurs. We investigate to what extent keyword localisation is possible using a visually grounded speech (VGS) model. VGS models are trained on unlabelled images paired with spoken captions. These models are therefore self-supervised: trained without any explicit textual labels or location information. To obtain training targets, we first tag training images with soft text labels using a pretrained visual classifier with a fixed vocabulary. This enables a VGS model to predict the presence of a written keyword in an utterance, but not its location. We consider four ways to equip VGS models with localisation capabilities. Two of these, a saliency approach and input masking, can be applied to an arbitrary prediction model after training, while the other two, attention and a score aggregation approach, are incorporated directly into the structure of the model. Masking-based localisation gives some of the best reported localisation scores from a VGS model, with an accuracy of 57% when the system knows that a keyword occurs in an utterance and needs to predict its location. In a setting where localisation is performed after detection, an F1 of 25% is achieved, and in a setting where a keyword spotting ranking pass is performed first, a localisation P@10 of 32% is obtained. While these scores are modest compared to the idealised setting with unordered bag-of-words supervision (from transcriptions), these VGS models receive no textual or location supervision at all. Further analyses show that these models are limited by the initial detection or ranking pass. Moreover, individual keyword localisation performance is correlated with the tagging performance of the visual classifier. We also show qualitatively how and where semantic mistakes occur, e.g. that the model locates "surfer" when queried with "ocean".
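The masking-based method described in the abstract can be applied post hoc to any trained detection model: slide a mask over the input features and take the window whose removal most reduces the query keyword's detection score as the predicted location. The following is a minimal Python sketch of this idea; the `model` callable and its interface are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def localise_by_masking(model, features, keyword_idx, mask_width=10):
        """Post-hoc input-masking localisation (a minimal sketch).

        `model` is assumed to map an utterance's feature matrix
        (frames x dims) to per-keyword detection probabilities;
        this interface is hypothetical.
        """
        base_score = model(features)[keyword_idx]
        num_frames = features.shape[0]
        drops = np.zeros(num_frames)
        for start in range(num_frames - mask_width + 1):
            masked = features.copy()
            masked[start:start + mask_width] = 0.0  # zero out one window
            # A large drop in the detection score suggests the keyword
            # was spoken inside the masked window.
            drops[start] = base_score - model(masked)[keyword_idx]
        best = int(np.argmax(drops))
        return best, best + mask_width  # predicted frame span

In the paper this scheme is applied to a VGS model trained only on soft visual tags, which is why no location supervision is ever needed.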
Pages: 1454-1466
Page count: 13