AI-powered smart glasses assist the blind and visually impaired 

Features such as additional languages and improved recognition accuracy have been added to Envision’s AI-powered smart glasses for the blind and visually impaired.

New functionalities include more accurate optical character recognition (OCR), improved text reading with contextual intelligence, the addition of new languages and the creation of a third-party app ecosystem that allows easy integration of specialist services, such as indoor and outdoor navigation, into the Envision platform.

The glasses were originally developed on the Enterprise Edition of Google Glass and debuted at the 2020 CSUN Conference. This week, the upgrades were announced at CSUN 2022. The glasses use AI to extract information from images and read it aloud, so that blind and partially sighted users can read documents at work, recognise friends, find personal belongings at home, use public transport and enjoy other freedoms that sighted people take for granted.
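
For illustration only, the core pipeline the article describes (image to text to speech) can be sketched with open-source stand-ins; pytesseract and pyttsx3 below are assumptions made for the sketch, not Envision’s own models.

# A minimal sketch of an image-to-speech pipeline, assuming open-source
# stand-ins (Tesseract OCR and an offline TTS engine) rather than
# Envision's proprietary models.
import pytesseract            # OCR wrapper; requires the Tesseract binary
import pyttsx3                # offline text-to-speech engine
from PIL import Image

def read_aloud(image_path: str) -> None:
    # Extract whatever text the OCR engine can find in the image.
    text = pytesseract.image_to_string(Image.open(image_path))
    # Speak the result (or a fallback) through the system voice.
    engine = pyttsx3.init()
    engine.say(text.strip() or "No text found")
    engine.runAndWait()

read_aloud("document.jpg")    # hypothetical input image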

“Our mission is to improve the lives of the world’s two billion people who are blind or visually impaired by providing them with life-changing assistive technologies, products and services,” said Karthik Kannan, co-founder of Envision.

Envision is available as an iOS and Android app and as a Google Glass integration. It can read and translate any type of text (digital and handwritten) from any surface (e.g., food packaging, posters, timetables, computer display screens, barcodes) into over 60 languages. It can recognise faces, objects and colours, and even describe scenes. Through its Ally function, it can immediately connect users to trusted contacts for a completely private and secure video call built directly into the glasses.

Among the new features is document guidance for accurate capture. Instead of requiring multiple images to capture a document’s complete text, this feature provides verbal instructions that guide users to position the document optimally, allowing it to be captured in a single pass, as in the sketch that follows.
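
Envision has not published how this guidance works, so the following is an illustrative sketch only: cues of this kind can be derived from the detected bounding box of the document in the camera frame. The thresholds and cue wording are assumptions.

# Illustrative sketch: turn a detected document bounding box into a
# verbal positioning cue. All thresholds and phrasing are assumptions,
# not Envision's published behaviour.
def guidance_cue(doc_box, frame_w, frame_h, margin=20):
    """doc_box = (left, top, right, bottom) in pixel coordinates."""
    left, top, right, bottom = doc_box
    if left < margin:
        return "Move the document to the right"
    if right > frame_w - margin:
        return "Move the document to the left"
    if top < margin:
        return "Move the document down"
    if bottom > frame_h - margin:
        return "Move the document up"
    if (right - left) > 0.9 * frame_w or (bottom - top) > 0.9 * frame_h:
        return "Move back slightly"
    return "Hold still, capturing"

print(guidance_cue((5, 40, 600, 420), frame_w=640, frame_h=480))
# -> "Move the document to the right"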

Another new feature is layout detection. This provides a more realistic reading environment and puts a document into context for the reader. Whether faced with a column-based document such as a newspaper, a shop window poster, a road sign or a restaurant menu, Envision will decipher the document’s layout and provide clear verbal guidance to the user. It recognises elements such as headlines and photo captions, giving the audio read-back of the document a more natural flow.
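
To make the idea concrete, here is a minimal sketch of column-aware reading order, assuming OCR has already produced text blocks with x/y coordinates; real layout analysis, as in Envision’s feature, also classifies headlines and captions, which is omitted here.

# Group OCR text blocks into columns by x-position, then read each
# column top to bottom, approximating a natural reading order.
def reading_order(blocks, column_gap=50):
    """blocks: list of (x, y, text); returns texts in column-major order."""
    blocks = sorted(blocks, key=lambda b: b[0])      # left to right
    columns, current = [], [blocks[0]]
    for block in blocks[1:]:
        if block[0] - current[-1][0] > column_gap:   # a new column starts
            columns.append(current)
            current = [block]
        else:
            current.append(block)
    columns.append(current)
    ordered = []
    for col in columns:
        ordered.extend(text for _, _, text in sorted(col, key=lambda b: b[1]))
    return ordered

print(reading_order([(300, 10, "Column 2, para 1"),
                     (10, 10, "Column 1, para 1"),
                     (10, 120, "Column 1, para 2")]))
# -> ['Column 1, para 1', 'Column 1, para 2', 'Column 2, para 1']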

Enhanced offline language capabilities now include Hindi, Japanese, Chinese and Korean, which can be accurately captured and read offline. There are now 26 languages supported offline and more than 60 when connected.

Envision has added a third-party ecosystem that allows developers to build value-added services for its users. The initial partner is the Cash Reader app, which enables Envision to recognise banknotes in over 100 currencies.
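
Envision has not published its developer API, so the following is a purely hypothetical sketch of what such an integration point could look like: a partner service registers a handler that receives camera frames and returns text for the glasses to speak.

# Hypothetical sketch of a third-party integration point; every name
# here is invented for illustration, as Envision's actual API is not
# public.
from typing import Callable, Dict

class PluginRegistry:
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[bytes], str]] = {}

    def register(self, name: str, handler: Callable[[bytes], str]) -> None:
        self._handlers[name] = handler

    def run(self, name: str, frame: bytes) -> str:
        # Hand the camera frame to the named service and return the
        # text the glasses should read aloud.
        return self._handlers[name](frame)

registry = PluginRegistry()

def recognise_banknote(frame: bytes) -> str:
    # Stand-in for a currency-recognition service such as Cash Reader.
    return "20 euro note"  # placeholder result

registry.register("cash_reader", recognise_banknote)
print(registry.run("cash_reader", b"<frame bytes>"))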

The Ally function enables a user to ask for assistance or share experiences with trusted contacts (Allies) via completely private video calls made directly from the glasses. The improved version is optimised for both mobile networks and Wi-Fi hotspots.

https://www.letsenvision.com
