Choosing the Best Sensor Fusion Method: A Machine-Learning Approach

Cited by: 31
Authors
Brena, Ramon F. [1 ]
Aguileta, Antonio A. [1 ,2 ]
Trejo, Luis A. [3 ]
Molino-Minero-Re, Erik [4 ]
Mayora, Oscar [5 ]
Affiliations
[1] Tecnol Monterrey, Ave Eugenio Garza Sada 2501 Sur, Monterrey 64849, Mexico
[2] Univ Autonoma Yucatan, Fac Matemat, Anillo Perifer Norte, Tablaje Cat 13615, Merida 97110, Mexico
[3] Tecnol Monterrey, Sch Sci & Engn, Carretera Lago Guadalupe Km 3-5, Atizapan De Zaragoza 52926, Mexico
[4] Univ Nacl Autonoma Mexico, Inst Invest Matemat Aplicadas & Sistemas Sede Mer, Unidad Acad Ciencias & Tecnol UNAM Yucatan, Sierra Papacal 97302, Mexico
[5] Fdn Bruno Kessler, I-38123 Trento, Italy
Keywords
optimal; data fusion; meta-data; sensor fusion; rejective multiple test; activity recognition; multisensor fusion; multimodal fusion; majority; behavior; mobile
DOI
10.3390/s20082350
CLC Classification Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
Multi-sensor fusion refers to methods for combining information coming from several sensors (in some cases, different ones) with the aim of making one sensor compensate for the weaknesses of others, or of improving the overall accuracy or reliability of a decision-making process. Indeed, this area has made progress, and the combined use of several sensors has been so successful that many authors have proposed variants of fusion methods, to the point that it is now hard to tell which of them is best for a given set of sensors and a given application context. To address the issue of choosing an adequate fusion method, we recently proposed a machine-learning, data-driven approach able to predict the best merging strategy. This approach uses a meta-data set of statistical signatures extracted from data sets of a particular domain, from which we train a prediction model. However, that work was restricted to the recognition of human activities. In this paper, we extend our previous work to other, very different contexts, such as gas detection and grammatical facial expression identification, in order to test its generality. The extensions of the method are presented in this paper. Our experimental results show that our extended model predicts the best fusion method well for a given data set, allowing us to claim broad generality for our fusion-method selection approach.
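To make the meta-learning idea sketched in the abstract concrete, the following minimal Python example (our own illustration, not the authors' implementation) builds a meta-data set of simple statistical signatures, labels each source data set with the fusion method that performed best on it, and trains a meta-classifier to predict the best fusion method for an unseen data set. The choice of signature features, the RandomForestClassifier meta-model, the fusion-method labels, and the placeholder data are all assumptions made for illustration only.

import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier

def statistical_signature(X):
    # Summarize a (samples x features) data set into a fixed-length meta-feature vector.
    return np.array([X.mean(), X.std(), np.median(X), X.min(), X.max(),
                     skew(X, axis=None), kurtosis(X, axis=None)])

# Hypothetical inputs: each entry of `datasets` stands in for one sensor data set of a
# domain, and best_fusion[i] is the fusion method that scored highest on datasets[i]
# in prior experiments (here both are random placeholders).
rng = np.random.default_rng(0)
datasets = [rng.normal(size=(500, 12)) for _ in range(20)]
best_fusion = rng.choice(["voting", "stacking", "feature_concat"], size=20)

# Assemble the meta-data set and train the meta-classifier on it.
meta_X = np.vstack([statistical_signature(d) for d in datasets])
meta_clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(meta_X, best_fusion)

# Predict which fusion method to use on a new, unseen data set before running any fusion.
new_data = rng.normal(size=(400, 12))
print(meta_clf.predict(statistical_signature(new_data).reshape(1, -1))[0])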
Pages: 22