Vision loss in the elderly is a major health problem. In the past, the main aid available to visually impaired people was a pair of spectacles. Today's assistive technology includes not only better magnifying devices but also advanced software and applications that use facial recognition, artificial intelligence, and related technologies to visibly improve the lives of the visually impaired. Despite these developments, visually impaired people still face many challenges in performing their daily tasks, particularly when they find themselves in unfamiliar surroundings: most of the time they cannot distinguish or accurately perceive the objects around them. With advances in deep learning frameworks, computer vision techniques can now assist the visually impaired. In this paper, our goal is to provide an automated way for users to learn, in real time and through voice feedback, which objects are in front of them and where those objects are located. We adopt the MobileNet Single-Shot Detector (SSD), a deep learning multi-label detection model, to classify objects. The model also determines the position of each object in the frame and finally generates an audio signal as output to notify the visually impaired user. This system provides a novel way of assisting visually challenged people.
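The position-reporting step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the division of the frame into horizontal thirds, the function names, and the message wording are all our assumptions. In a full system, the bounding boxes would come from the MobileNet SSD detector and the message would be passed to a text-to-speech engine.

```python
def object_position(box, frame_width):
    """Map a bounding box (x_min, y_min, x_max, y_max) to a coarse
    horizontal position by splitting the frame into thirds.
    The thirds-based split is an illustrative assumption."""
    x_min, _, x_max, _ = box
    center_x = (x_min + x_max) / 2.0
    if center_x < frame_width / 3:
        return "left"
    if center_x < 2 * frame_width / 3:
        return "center"
    return "right"


def spoken_message(label, box, frame_width):
    """Compose the sentence that would be sent to a text-to-speech engine."""
    pos = object_position(box, frame_width)
    if pos == "center":
        return f"{label} ahead of you"
    return f"{label} on your {pos}"


# Example: a detected 'person' box in a 640-pixel-wide frame.
print(spoken_message("person", (400, 50, 600, 300), 640))
```

Running the example prints `person on your right`, since the box center (x = 500) falls in the rightmost third of a 640-pixel frame.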