Microsoft has announced a new feature in its Edge browser to improve the experience of visually impaired users. Edge will now automatically generate captions so that screen readers – assistive technology that reads the content of a page aloud – can convey the meaning or intent of an image on the web.
Specifically, Microsoft will use Azure Cognitive Services to analyze and describe images with missing “alt text”. When Edge detects an unlabeled image, it will send it to Microsoft’s servers for processing. Machine learning algorithms will work with the most common formats, such as JPEG, PNG, GIF, and WEBP, and will provide descriptions in five languages.
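As a rough illustration of the detection step only – this is a hypothetical sketch, not Edge’s actual code – the following Python snippet scans an HTML fragment for `<img>` tags whose `alt` attribute is missing or empty, i.e. the kind of unlabeled images that would be candidates for automatic captioning:

```python
from html.parser import HTMLParser

class UnlabeledImageFinder(HTMLParser):
    """Collect src values of <img> tags lacking usable alt text."""

    def __init__(self):
        super().__init__()
        self.unlabeled = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attr_map = dict(attrs)
        alt = (attr_map.get("alt") or "").strip()
        # Missing or empty alt text: candidate for auto-generated captioning
        if not alt:
            self.unlabeled.append(attr_map.get("src"))

html = (
    '<img src="cat.jpg" alt="A sleeping cat">'
    '<img src="chart.png" alt="">'
    '<img src="logo.webp">'
)
finder = UnlabeledImageFinder()
finder.feed(html)
print(finder.unlabeled)  # ['chart.png', 'logo.webp']
```

In the real feature, each flagged image would then be sent to an image-description service; here the point is simply that only images without author-supplied alt text are selected.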
Finally, Microsoft encourages all site authors to provide their own alternative text: “It is important to reiterate that the alternative text provided by the author of a site will always be preferable to automatically generated alternative text. Only the author of the site fully understands the context and intent of an image and its creative meaning, and can provide the most relevant description.”