YOLO glass: video-based smart object detection using squeeze and attention YOLO network

Cited: 0
Authors
T. Sugashini
G. Balakrishnan
Affiliation
[1] Department of Computer Science Engineering, Indra Ganesan College of Engineering
Source
Signal, Image and Video Processing | 2024, Vol. 18
Keywords
Visual impairment; Deep learning; Outdoor object detection; Wearable system
Abstract
People with visual impairment or blindness need guidance to avoid collisions with outdoor obstacles. Technology has recently become present in all aspects of human life, and new devices assist humans on a daily basis. However, object detection still faces a reliability challenge owing to real-time dynamics and a lack of specialized knowledge. To overcome this challenge, YOLO Glass, a video-based smart object detection model, is proposed to help visually impaired people navigate effectively in indoor and outdoor environments. The captured video is first converted into key frames and pre-processed using a correlation fusion-based disparity approach. The pre-processed images are then augmented to prevent overfitting of the trained model. The proposed method uses an obstacle detection system based on a Squeeze and Attention Block YOLO network model (SAB-YOLO). The proposed system helps visually impaired users detect multiple objects and their locations relative to their line of sight, and alerts them with audio messages delivered via headphones, supporting them in managing daily tasks and navigating their surroundings. The experimental results show that the proposed system achieves an accuracy of 98.99%, demonstrating that it can identify objects accurately. The detection accuracy of the proposed method is 5.15%, 7.15% and 9.7% higher than that of the existing YOLO v6, YOLO v5 and YOLO v3, respectively.
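The abstract names a Squeeze and Attention Block but gives no layer-level detail. Below is a minimal PyTorch sketch of a generic squeeze-and-attention unit of the kind the model name suggests (a global-pooling "squeeze" producing a channel descriptor, followed by channel-wise attention re-weighting of the feature map). The class name `SqueezeAttentionBlock` and the `reduction` parameter are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SqueezeAttentionBlock(nn.Module):
    """Illustrative squeeze-and-attention unit (an assumption, not the
    paper's exact SAB layer): global average pooling 'squeezes' each
    channel to a scalar, a small bottleneck MLP produces per-channel
    attention weights, and the input is re-scaled by those weights."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)   # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.attend = nn.Sequential(             # attention weights in [0, 1]
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)           # channel descriptor (B, C)
        w = self.attend(w).view(b, c, 1, 1)      # per-channel weights (B, C, 1, 1)
        return x * w                             # re-weight the feature map

if __name__ == "__main__":
    feats = torch.randn(2, 64, 40, 40)           # dummy backbone feature map
    print(SqueezeAttentionBlock(64)(feats).shape)  # torch.Size([2, 64, 40, 40])
```

In a YOLO-style detector, such a block would typically be dropped between backbone or neck convolution stages so that detection heads receive channel-recalibrated features; where exactly SAB-YOLO places it is not stated in the abstract.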
Pages: 2105–2115 (10 pages)