Smartphone-based food recognition system using multiple deep CNN models

Cited by: 31
Authors
Fakhrou, Abdulnaser [1 ]
Kunhoth, Jayakanth [2 ]
Al Maadeed, Somaya [2 ]
Affiliations
[1] Qatar Univ, Coll Educ, Dept Psychol Sci, Doha, Qatar
[2] Qatar Univ, Dept Comp Sci & Engn, Doha, Qatar
Keywords
Food classification; Deep learning; Ensemble learning; Assistive system; Visual impairment
DOI
10.1007/s11042-021-11329-6
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
People with blindness or low vision use mobile assistive tools for various applications such as object recognition and text recognition. Most of the available applications focus on recognizing generic objects and do not address the recognition of food dishes and fruit varieties. In this paper, we propose a smartphone-based system for recognizing food dishes and fruits for children with visual impairments. The smartphone application uses a trained deep CNN model to recognize food items from real-time images. Furthermore, we develop a new deep convolutional neural network (CNN) model for food recognition based on the fusion of two CNN architectures. The new deep CNN model is built using an ensemble learning approach and is trained on a customized food recognition dataset consisting of 29 varieties of food dishes and fruits. Moreover, we analyze the performance of multiple state-of-the-art deep CNN models for food recognition using a transfer learning approach. The ensemble model outperformed the state-of-the-art CNN models and achieved a food recognition accuracy of 95.55% on the customized food dataset. In addition, the proposed deep CNN model is evaluated on two publicly available food datasets to demonstrate its efficacy for food recognition tasks.
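
The abstract's key technical idea is fusing two CNN architectures into a single ensemble model trained by transfer learning on a 29-class food and fruit dataset. As a rough illustration only, the sketch below shows one way such a feature-level fusion can be assembled, assuming a PyTorch implementation and using ResNet-50 and DenseNet-121 as hypothetical backbones; the paper does not specify the framework or the two constituent architectures.

import torch
import torch.nn as nn
from torchvision import models

class FusionEnsemble(nn.Module):
    """Feature-level fusion of two ImageNet-pretrained CNNs (illustrative sketch,
    not the authors' exact model)."""
    def __init__(self, num_classes=29):
        super().__init__()
        # Hypothetical backbone choices; final classification layers are removed
        # so each network acts as a transfer-learned feature extractor.
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        self.backbone_a = nn.Sequential(*list(resnet.children())[:-1])      # -> (B, 2048, 1, 1)
        self.backbone_b = nn.Sequential(densenet.features,
                                        nn.ReLU(inplace=True),
                                        nn.AdaptiveAvgPool2d(1))            # -> (B, 1024, 1, 1)
        # Concatenated feature vector feeds one classifier over the 29 food classes.
        self.classifier = nn.Linear(2048 + 1024, num_classes)

    def forward(self, x):
        fa = torch.flatten(self.backbone_a(x), 1)
        fb = torch.flatten(self.backbone_b(x), 1)
        return self.classifier(torch.cat([fa, fb], dim=1))

model = FusionEnsemble(num_classes=29)
logits = model(torch.randn(1, 3, 224, 224))  # one 224x224 RGB food image

In a sketch like this, each pretrained backbone serves as a fixed or fine-tuned feature extractor, and only the fused classifier head is learned from scratch on the 29-class dataset.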
Pages: 33011-33032
Number of pages: 22
References
36 references in total
[1] Aguilar, Eduardo; Radeva, Petia. Food Recognition by Integrating Local and Flat Classifiers. PATTERN RECOGNITION AND IMAGE ANALYSIS, PT I, 2020, 11867: 65-74
[2] Aguilar, Eduardo; Bolanos, Marc; Radeva, Petia. Regularized uncertainty-based multi-task learning model for food analysis. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2019, 60: 360-370
[3] [Anonymous], 2009, The Open Rehabilitation Journal, DOI 10.2174/1874943700902010011
[4] [Anonymous], 2020, CUST FOOD DAT
[5] [Anonymous], 2011, VIS IMP BLIND
[6] [Anonymous], 2015, 2015 International Conference on Computer, Communication and Control (IC4), DOI 10.1109/IC4.2015.7375665
[7] Anthimopoulos, Marios M.; Gianola, Lauro; Scarnato, Luca; Diem, Peter; Mougiakakou, Stavroula G. A Food Recognition System for Diabetic Patients Based on an Optimized Bag-of-Features Model. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2014, 18(04): 1261-1271
[8] Bashiri, Fereshteh S.; LaRose, Eric; Badger, Jonathan C.; D'Souza, Roshan M.; Yu, Zeyun; Peissig, Peggy. Object Detection to Assist Visually Impaired People: A Deep Neural Network Adventure. ADVANCES IN VISUAL COMPUTING, ISVC 2018, 2018, 11241: 500-510
[9] Bossard L, 2014, LECT NOTES COMPUT SC, V8694, P446, DOI 10.1007/978-3-319-10599-4_29
[10] Chincha R, 2011, IEEE INT C BIO BIO W, P526, DOI 10.1109/BIBMW.2011.6112423