MIXTURE OF DEEP REGRESSION NETWORKS FOR HEAD POSE ESTIMATION

Cited by: 0
Authors
Huang, Yangguang [1]
Pan, Lili [1]
Zheng, Yali [1]
Xie, Mei [1]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu, Sichuan, Peoples R China
Source
2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) | 2018
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
multi-modal; mixture of experts
DOI
Not available
CLC classification
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
Accurate and robust head pose estimation is a challenging computer vision task. Most existing methods estimate head pose directly from single-modal RGB or depth images, which has two obvious drawbacks: (1) traditional shallow models are poor at learning representative features, and (2) relying on a single modality makes them sensitive to noise. In this work we therefore propose a novel multi-modal regression model for head pose estimation, named mixture of deep regression networks (MoDRN). Each modality-specific sub-network learns its parameters only from the good examples for that modality, so the sub-networks are better trained and more robust to noise, and their combination yields significantly improved performance. Experiments on the public BIWI and BU-3DFE datasets show the effectiveness of our approach.
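The abstract describes the MoDRN architecture only at a high level. Below is a minimal sketch of a mixture-of-experts regressor in that spirit, assuming two modality-specific sub-networks (operating on pre-extracted RGB and depth features) combined by a softmax gating network. The layer sizes, feature dimensions, class names (Expert, MoDRN), and the use of PyTorch are illustrative assumptions, not the authors' implementation.

```python
# Minimal mixture-of-deep-regression-networks sketch (assumed design, not the
# paper's code): two modality experts plus a gating network that weights them.
import torch
import torch.nn as nn


class Expert(nn.Module):
    """Deep regression sub-network for a single modality (hypothetical sizes)."""

    def __init__(self, in_dim: int, out_dim: int = 3):  # yaw, pitch, roll
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )

    def forward(self, x):
        return self.net(x)


class MoDRN(nn.Module):
    """Mixture of deep regression networks over RGB and depth features."""

    def __init__(self, rgb_dim: int, depth_dim: int, out_dim: int = 3):
        super().__init__()
        self.rgb_expert = Expert(rgb_dim, out_dim)
        self.depth_expert = Expert(depth_dim, out_dim)
        # Gating network: maps concatenated features to per-expert weights.
        self.gate = nn.Sequential(
            nn.Linear(rgb_dim + depth_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, rgb_feat, depth_feat):
        # Stack expert predictions: (batch, num_experts, 3).
        preds = torch.stack(
            [self.rgb_expert(rgb_feat), self.depth_expert(depth_feat)], dim=1
        )
        # Softmax gate over the two experts: (batch, num_experts).
        weights = torch.softmax(
            self.gate(torch.cat([rgb_feat, depth_feat], dim=1)), dim=1
        )
        # Gate-weighted combination of expert outputs: (batch, 3).
        return (weights.unsqueeze(-1) * preds).sum(dim=1)


if __name__ == "__main__":
    model = MoDRN(rgb_dim=128, depth_dim=128)
    rgb = torch.randn(4, 128)    # placeholder RGB features
    depth = torch.randn(4, 128)  # placeholder depth features
    print(model(rgb, depth).shape)  # torch.Size([4, 3])
```

At training time, weighting the regression loss by the gate outputs would let each expert specialize on the examples where its modality is reliable, which loosely mirrors the abstract's idea of training each sub-network only on good examples for its modality.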
Pages: 4093-4097
Page count: 5
Related papers
50 records in total
  • [21] Multi-party focus of attention recognition in meetings from head pose and multimodal contextual cues
    Ba, Sileye O.
    Odobez, Jean-Marc
    2008 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, VOLS 1-12, 2008, : 2221+
  • [22] Regression density estimation using smooth adaptive Gaussian mixtures
    Villani, Mattias
    Kohn, Robert
    Giordani, Paolo
    JOURNAL OF ECONOMETRICS, 2009, 153 (02) : 155 - 173
  • [23] UNIVERSAL IMAGE CODING APPROACH USING SPARSE STEERED MIXTURE-OF-EXPERTS REGRESSION
    Verhack, Ruben
    Sikora, Thomas
    Lange, Lieven
    Van Wallendael, Glenn
    Lambert, Peter
    2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 2142 - 2146
  • [24] New estimation and feature selection methods in mixture-of-experts models
    Khalili, Abbas
    CANADIAN JOURNAL OF STATISTICS-REVUE CANADIENNE DE STATISTIQUE, 2010, 38 (04): : 519 - 539
  • [25] Multi-Head multimodal deep interest recommendation network
    Yang, Mingbao
    Zhou, Peng
    Li, Shaobo
    Zhang, Yuanmeng
    Hu, Jianjun
    Zhang, Ansi
    KNOWLEDGE-BASED SYSTEMS, 2023, 276
  • [26] Approximation bounds for smooth functions in C(ℝ^d) by neural and mixture networks
    Maiorov, V
    Meir, RS
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1998, 9 (05): : 969 - 978
  • [27] Fine-Grained Categorization Using a Mixture of Transfer Learning Networks
    Firsching, Justin
    Hashem, Sherif
    PROCEEDINGS OF THE FUTURE TECHNOLOGIES CONFERENCE (FTC) 2021, VOL 2, 2022, 359 : 151 - 158
  • [28] A novel deep Siamese framework for burned area mapping Leveraging mixture of experts
    Seydi, Seyd Teymoor
    Hasanlou, Mahdi
    Chanussot, Jocelyn
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [29] Surrogate modeling approximation using a mixture of experts based on EM joint estimation
    Bettebghor, Dimitri
    Bartoli, Nathalie
    Grihon, Stephane
    Morlier, Joseph
    Samuelides, Manuel
    STRUCTURAL AND MULTIDISCIPLINARY OPTIMIZATION, 2011, 43 (02) : 243 - 259