A Cornell University lab has developed eyeglasses that use AI and acoustic-sensing technology to recognize commands from the wearer without requiring them to speak. The EchoSpeech glasses can "understand" up to 31 commands based only on mouth and lip movements.

  • The commands do not need to be vocalized, which could give patients who can't speak "their voices back," according to Cornell doctoral student Ruidong Zhang, lead author of the research paper on the glasses.
  • EchoSpeech is a "silent-speech recognition interface" that uses microphones, small speakers, and an AI-powered sonar system.
  • The speakers emit inaudible sound waves that reflect off the wearer's moving mouth and lips; the returning echoes are picked up by the microphones and processed with a deep learning algorithm.
  • The algorithm "analyzes these echo profiles in real-time," with a reported accuracy of around 95% (a toy sketch of this kind of classifier appears after this list).
  • The system was created by researchers at Cornell's Smart Computer Interfaces for Future Interactions (SciFi) Lab, who are looking into commercializing EchoSpeech's technology.
  • "It's small, low-power, and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world," SciFi Lab director Cheng Zhang said.


