— Today, many visually impaired people find it difficult to identify everyday consumer products on supermarket shelves. While touch and hearing can distinguish most products, some distinctions remain hard to make: apple juice and orange juice, for example, can only be told apart by reading the label on the bottle.
— The objective of this website is to use artificial intelligence to transcribe visual elements into vocal ones. The visually impaired person uses their phone to take a picture of the product they cannot distinguish; this picture is then analyzed by an algorithm that says the name of the object in the picture aloud.
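The "say aloud" step can be done directly in the browser with the Web Speech API, which most modern mobile browsers support. A minimal sketch (the function name speakLabel is an illustrative assumption, not part of the original page):

```javascript
// Minimal sketch: reading a label aloud with the browser's built-in
// Web Speech API. The function name "speakLabel" is illustrative.
function speakLabel(label) {
  const utterance = new SpeechSynthesisUtterance(label);
  window.speechSynthesis.speak(utterance);
}

// Usage example:
// speakLabel('orange juice');
```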
This is where artificial intelligence comes into play. We will use machine learning, in particular supervised learning, which makes predictions from a labelled dataset. For this purpose we will use the ml5.js library, which provides access to machine learning algorithms and models directly in the browser. Why ml5.js? Because it builds on supervised machine learning: large datasets with classifications given by humans are used to create models that the algorithms then use to classify future inputs. We are going to use MobileNet, one of a class of convolutional neural networks trained to recognize images.
JavaScript code of this webpage:
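The page's original code is not reproduced here; below is a minimal sketch of what its logic could look like, assuming ml5.js v0.x is loaded via a script tag and that the page contains a file input and an image element (the element ids photo and preview are illustrative assumptions):

```javascript
// Minimal sketch, assuming the page includes:
//   <script src="https://unpkg.com/ml5@0.12.2/dist/ml5.min.js"></script>
//   <input id="photo" type="file" accept="image/*" capture="environment">
//   <img id="preview" alt="photo of the product">
// The element ids are illustrative assumptions.

// Load the pre-trained MobileNet image classifier.
const classifier = ml5.imageClassifier('MobileNet', () => {
  console.log('MobileNet model loaded');
});

const photoInput = document.getElementById('photo');
const preview = document.getElementById('preview');

// When the user takes or selects a picture, display it, classify it,
// and read the most likely label aloud.
photoInput.addEventListener('change', () => {
  const file = photoInput.files[0];
  if (!file) return;
  preview.src = URL.createObjectURL(file);
  preview.onload = () => {
    classifier.classify(preview, (error, results) => {
      if (error) {
        console.error(error);
        return;
      }
      // results is sorted by confidence; speak the top label.
      const utterance = new SpeechSynthesisUtterance(results[0].label);
      window.speechSynthesis.speak(utterance);
    });
  };
});
```

Note that MobileNet is trained on the generic ImageNet categories, so a real deployment would likely need a model retrained on the specific products to be recognized.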
Similar works:
• The "Be My Eyes" application
• Using the Yuka application in an unintended way, through its barcode scanner (the relevant part of the video starts at 1:28)
• Seeing AI: this application identifies your environment, describes an object or a person (whom you can name so that the AI permanently recognizes them as a relative), reads product barcodes to give you information, reads text (in up to 5 recognized languages), and can even decipher handwriting.