The Use of Multimodal Interaction for Supporting the Production of Location-Based Information

Cited by: 0
Authors
Horita, F. E. A. [1 ]
Braga, D. S. [2 ]
Monteiro, C. D. D. [3 ]
Affiliations
[1] Univ Sao Paulo, ICMC, Ciencia Comp & Matemat Computac, Sao Carlos, SP, Brazil
[2] Univ Munster WWU, ERCIS, Munster, Germany
[3] Texas A&M Univ, Ciencia Comp & Informat, College Stn, TX USA
Keywords
Location-based information; Multimodal interaction; Gesture recognition; Speech recognition; Usability analysis; Interfaces; Design
DOI
10.1109/TLA.2016.7587660
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Despite the increasing volume of location-based information systems, these technological resources are commonly developed for use by professional content providers. Non-expert users represent an important alternative source of impartial contributions in this scenario. Multimodal interaction, particularly gesture recognition and voice commands, can therefore open further opportunities to support use by ordinary users. This paper aims to analyze the contribution of multimodal interaction to supporting the production of location-based information. Its research methodology is based on a conceptual framework as well as a usability analysis with real users. The results revealed that non-expert users often take a while before becoming confident with new kinds of interaction. They also showed that, according to the users, voice commands were rather more useful than gestures. Furthermore, even after many years of research in the field, technological resources for speech and gesture recognition still need improvement, mainly due to the idiosyncrasies present in this kind of interaction. Although additional research is still necessary in this area, it can be concluded that multimodal interaction has great potential for supporting the production of location-based information.
Pages: 3496-3502
Number of pages: 7