Implementing Visual Assistant using YOLO and SSD for Visually-Impaired Persons

Authors

Keywords: YOLO, SSD, Object detection, R-CNN, COCO

Abstract

Artificial Intelligence has been touted as a transformative force capable of reshaping the technological landscape. Through Artificial Intelligence and Machine Learning, pioneering work has been undertaken in the area of visual and object detection. In this paper, we analyze a Visual Assistant application for guiding visually impaired individuals. Recent breakthroughs in computer vision and supervised learning have simplified the problem considerably, making new models easier to build and deploy than their predecessors. Several object detection models now provide object tracking and detection with high accuracy, and these techniques are widely used to automate detection tasks in different areas. Newer detection approaches, such as YOLO (You Only Look Once) and SSD (Single Shot Detector), have proved consistent and accurate at detecting objects in real time. This paper combines these state-of-the-art, real-time object detection techniques to develop a strong base model, and also implements a 'Visual Assistant' for visually impaired people. The results obtained are superior to those of existing algorithms.
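As an illustrative sketch (not the authors' implementation): single-shot detectors such as YOLO and SSD both emit many overlapping candidate boxes per object in one forward pass, which are then pruned with non-maximum suppression (NMS) based on intersection-over-union (IoU). The routine below shows that common post-processing step in plain Python; box coordinates, scores, and the 0.5 threshold are example values.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Discard remaining candidates that overlap the kept box too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two boxes cover the same object; a third covers a distinct object.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]: one box per detected object survives
```

In a real pipeline this step runs per class on the raw detector output, typically after filtering boxes below a confidence threshold.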


Published
07.03.2024

Section
Articles

How to Cite

Litoriya, R., Chandra Bandhu, K., Rajawat, I., Jagwani, H., & Yadav, C. (2024). Implementing Visual Assistant using YOLO and SSD for Visually-Impaired Persons. Journal of Automation, Mobile Robotics and Intelligent Systems, 17(4), 79-87. https://doi.org/10.14313/JAMRIS/4-2023/33