A Deep Convolutional Neural Network for Location Recognition and Geometry based Information

Times Cited: 2
Authors
Bidoia, Francesco [1 ]
Sabatelli, Matthia [1 ,2 ]
Shantia, Amirhossein [1 ]
Wiering, Marco A. [1 ]
Schomaker, Lambert [1 ]
Affiliations
[1] Univ Groningen, Inst Artificial Intelligence & Cognit Engn, Groningen, Netherlands
[2] Univ Liege, Dept Elect Engn & Comp Sci, Montefiore Inst, Liege, Belgium
Source
PROCEEDINGS OF THE 7TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS (ICPRAM 2018), 2018
Keywords
Deep Convolutional Neural Network; Image Recognition; Geometry Invariance; Autonomous Navigation Systems; NAVIGATION; ROBOTS
DOI
10.5220/0006542200270036
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
In this paper we propose a new approach to Deep Neural Networks (DNNs) based on the particular needs of navigation tasks. To investigate these needs, we created a labeled image dataset of a test environment and compared classical computer vision approaches with the state of the art in image classification. Based on these results, we developed a new DNN architecture that outperforms previous architectures in recognizing locations by relying on the geometrical features of the images. In particular, we show the negative effects that the scale, rotation, and position invariance properties of current state-of-the-art DNNs have on this task. Finally, we present the results of our proposed architecture, which preserves these geometrical properties. Our experiments show that our method outperforms state-of-the-art image classification networks in recognizing locations.
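The abstract contrasts invariance-oriented classification networks with an architecture that preserves geometric layout. As a loose illustration only (not the authors' architecture, which the record does not detail), the hypothetical PyTorch sketch below contrasts a conventional classification head that global-average-pools the feature map, discarding where features occur, with a head that flattens the full spatial grid so positional information reaches the classifier. All module names, shapes, and the choice of PyTorch are assumptions made for this sketch.

```python
# Illustrative sketch, assuming PyTorch feature maps of shape (B, C, H, W).
# Not the paper's architecture: it only shows why pooling away spatial layout
# removes the positional cues that location recognition may depend on.
import torch
import torch.nn as nn


class InvariantHead(nn.Module):
    """Conventional head: global average pooling discards where features occur."""

    def __init__(self, channels: int, num_locations: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # collapses the spatial grid
        self.fc = nn.Linear(channels, num_locations)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # feats: (B, C, H, W)
        return self.fc(self.pool(feats).flatten(1))


class GeometryPreservingHead(nn.Module):
    """Alternative head: flattening the full feature map keeps spatial layout."""

    def __init__(self, channels: int, height: int, width: int, num_locations: int):
        super().__init__()
        self.fc = nn.Linear(channels * height * width, num_locations)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # feats: (B, C, H, W)
        return self.fc(feats.flatten(1))


if __name__ == "__main__":
    feats = torch.randn(2, 64, 8, 8)                   # dummy CNN feature maps
    print(InvariantHead(64, 10)(feats).shape)          # torch.Size([2, 10])
    print(GeometryPreservingHead(64, 8, 8, 10)(feats).shape)  # torch.Size([2, 10])
```

In this toy setup, shifting the content of `feats` spatially leaves the pooled head's input unchanged but alters the flattened head's input, which is the kind of geometric sensitivity the abstract argues is useful for recognizing locations.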
Pages: 27-36
Page count: 10
Related Papers
50 records in total
  • [31] Application of Deep Convolutional Neural Network Under Region Proposal Network in Patent Graphic Recognition and Retrieval
    Li, Ming
    Li, Hui
    IEEE ACCESS, 2022, 10 : 37829 - 37838
  • [32] Ethiopian sign language recognition using deep convolutional neural network
    Bekalu Tadele Abeje
    Ayodeji Olalekan Salau
    Abreham Debasu Mengistu
    Nigus Kefyalew Tamiru
    Multimedia Tools and Applications, 2022, 81 : 29027 - 29043
  • [33] Bengali Sign Language Recognition Using Deep Convolutional Neural Network
    Hossen, M. A.
    Govindaiah, Arun
    Sultana, Sadia
    Bhuiyan, Alauddin
    2018 JOINT 7TH INTERNATIONAL CONFERENCE ON INFORMATICS, ELECTRONICS & VISION (ICIEV) AND 2018 2ND INTERNATIONAL CONFERENCE ON IMAGING, VISION & PATTERN RECOGNITION (ICIVPR), 2018, : 369 - 373
  • [34] Ethiopian sign language recognition using deep convolutional neural network
    Abeje, Bekalu Tadele
    Salau, Ayodeji Olalekan
    Mengistu, Abreham Debasu
    Tamiru, Nigus Kefyalew
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (20) : 29027 - 29043
  • [35] Recognition Effects of Deep Convolutional Neural Network on Smudged Handwritten Digits
    Xu, Zhe
    Terada, Yusuke
    Jia, Dongbao
    Cai, Zonghui
    Gao, Shangce
    2018 5TH INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND CONTROL ENGINEERING (ICISCE 2018), 2018, : 412 - 416
  • [36] Tea disease recognition technology based on a deep convolutional neural network feature learning method
    Feng, Yuhan
    INTERNATIONAL JOURNAL OF COMPUTING SCIENCE AND MATHEMATICS, 2024, 19 (01) : 15 - 27
  • [37] Wetland Type Information Extraction Using Deep Convolutional Neural Network
    Liu, Xiaolan
    Wu, Dayong
    Wang, Hongzhi
    Liu, Jianxiao
    JOURNAL OF COASTAL RESEARCH, 2020, : 526 - 529
  • [38] Radar Signal Intra-Pulse Modulation Recognition Based on Convolutional Denoising Autoencoder and Deep Convolutional Neural Network
    Qu, Zhiyu
    Wang, Wenyang
    Hou, Changbo
    Hou, Chenfan
    IEEE ACCESS, 2019, 7 : 112339 - 112347
  • [39] Speech Emotion Recognition Using Generative Adversarial Network and Deep Convolutional Neural Network
    Kishor Bhangale
    Mohanaprasad Kothandaraman
    Circuits, Systems, and Signal Processing, 2024, 43 : 2341 - 2384
  • [40] Speech Emotion Recognition Using Generative Adversarial Network and Deep Convolutional Neural Network
    Bhangale, Kishor
    Kothandaraman, Mohanaprasad
    CIRCUITS SYSTEMS AND SIGNAL PROCESSING, 2024, 43 (04) : 2341 - 2384