AUTOMATIC SPEECH RECOGNITION TECHNOLOGY AND THE SUPPORT OF STUDENTS WITH SPECIAL NEEDS

Cited: 0
Authors
Martinik, Ivo [1 ]
Affiliation
[1] VSB Tech Univ Ostrava, Fac Econ, Ostrava, Czech Republic
Source
EFFICIENCY AND RESPONSIBILITY IN EDUCATION 2013 | 2013
Keywords
Rich-media; Automatic Speech Recognition; MERLINGO; students with special needs; SWOT analysis;
DOI
Not available
Chinese Library Classification (CLC)
G40 [Education];
Discipline Codes
040101; 120403;
Abstract
Rich-media describes a broad range of interactive digital media increasingly used on the Internet and in support of education, where complex rich-media visualization of the educational process has become necessary for the overall transfer of information from teacher to students. Rich-media technologies support students with special needs mainly through the development of barrier-free access to recorded presentations adapted in particular to the needs of students with locomotive, visual and aural disabilities. A SWOT analysis of these services also shaped the future development of the MERLINGO (MEdia-rich Repository of LearnING Objects) project, for which the MIN-MAX strategy was chosen. Accordingly, new project objectives supporting students with special needs have been specified and implemented. These objectives focus on the automatic recognition of the teacher's speech in real time and its transcription into text, in order to support students with aural disabilities during lectures and practical training, followed by the automated subtitling of video records made with rich-media technologies and real-time keyword-based browsing within them, using the NovaVoice programming system and Automatic Speech Recognition technology.
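The keyword-based browsing described above can be illustrated with a minimal sketch: given a time-aligned transcript such as an ASR system produces, return the timestamps of all segments containing a query term, so a viewer can jump to those points in the recording. This is an illustrative sketch only, not the NovaVoice implementation; the transcript data and the function name are hypothetical.

```python
# Illustrative sketch of keyword search over a time-aligned lecture
# transcript, as an ASR-based subtitling pipeline might produce one.
# The data and names are hypothetical, not taken from NovaVoice/MERLINGO.

from typing import List, Tuple

# Each segment: (start time in seconds, end time in seconds, recognized text).
Segment = Tuple[float, float, str]

def find_keyword(transcript: List[Segment], keyword: str) -> List[float]:
    """Return start times of all segments whose text contains the keyword."""
    needle = keyword.lower()
    return [start for start, _end, text in transcript
            if needle in text.lower()]

# Hypothetical output of real-time speech recognition during a lecture.
transcript = [
    (0.0, 4.2, "Welcome to the lecture on automatic speech recognition."),
    (4.2, 9.8, "Today we discuss subtitling of video records."),
    (9.8, 15.1, "Speech recognition also supports keyword search."),
]

print(find_keyword(transcript, "speech recognition"))  # [0.0, 9.8]
```

In a real deployment the segments would carry the timing metadata already needed for subtitling (e.g. SRT or WebVTT cues), so keyword browsing comes almost for free once the transcript exists.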
Pages: 397-404
Page count: 8