Google tests Project Relate, a voice recognition and synthesis app for people with speech impairments

Google is looking for help developing an Android app aimed at providing more communication options for people with speech impairments. Project Relate, as both the effort and the app are now called, will provide voice transcription and synthesis that could make it easier for users to be understood.

The project is descended from Project Euphonia, which we covered back in 2019 when it was first announced and later when the company published some of its research. The effort was spearheaded by Google research scientist Dimitri Kanevsky, who himself has impaired speech and brought firsthand knowledge to the AI-based solution. Now one of the project's main partners and users of the app is Aubrie Lee, who is on Google's marketing team (she named the app) and, due to muscular dystrophy, has trouble being understood by both other people and apps. (You can see her in the video below.)

The simple fact is that speech recognition engines need lots of recorded speech to learn how to interpret it correctly, and that data is biased in favor of common speech patterns. People with accents aren't as well represented in these data sets, so they aren't understood as well — and people with speech impairments are even less commonly included, making it practically impossible for them to use common voice-powered devices.

Startups and advances in the underlying tech are improving recognition of accented speech, but it takes a special effort to collect and analyze the highly individualized speech patterns of those with impairments and disabilities. Every voice is different, but uncommon and unique patterns, like those resulting from a stroke or injury, can be tough for a machine learning system to understand reliably.


Project Relate is, at its core, a better voice transcription tool for those with speech impairments. The "Listen" function turns the user's speech directly into text so it can be pasted elsewhere or read by others. "Repeat" listens first, then repeats what they've said in a synthesized voice that is hopefully clearer. "Assistant" forwards their transcribed speech directly to Google Assistant for common tasks like playing music or asking about the weather.
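To make that flow concrete, here is a minimal, hypothetical sketch of a Listen-then-Repeat style pipeline built on Google's general-purpose Cloud Speech-to-Text and Text-to-Speech APIs. This is not how Project Relate itself is implemented (its whole point is using models personalized to the individual, as described below); the file names and configuration here are assumptions for illustration only.

```python
# Illustrative sketch only: a generic transcribe-then-resynthesize pipeline,
# not Project Relate's actual implementation. Requires the google-cloud-speech
# and google-cloud-texttospeech packages and valid Cloud credentials.
from google.cloud import speech, texttospeech


def listen(audio_path: str) -> str:
    """Transcribe a short audio clip to text (the 'Listen' idea)."""
    client = speech.SpeechClient()
    with open(audio_path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)


def repeat(text: str, out_path: str = "repeat.wav") -> None:
    """Re-speak the transcribed text in a clear synthesized voice (the 'Repeat' idea)."""
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.LINEAR16
        ),
    )
    with open(out_path, "wb") as out:
        out.write(response.audio_content)


if __name__ == "__main__":
    transcript = listen("my_phrase.wav")  # hypothetical local recording
    print(transcript)
    repeat(transcript)
```

The missing piece in a generic pipeline like this is the recognition step itself, which is exactly where Relate differs: its models are tuned to the individual speaker.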

To enable these capabilities, the work at Google has been first to collect as much data as possible, and to that end the researchers note they've built a database of over a million speech samples recorded by volunteers. This was used to train what might be called the base level of intelligence for the speech-recognition AI. But as with any other ML system, the more data, and the more specific that data is to the individual use case, the better.
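As a rough illustration of that personalization step, the sketch below fine-tunes a publicly available pre-trained speech recognition model (Wav2Vec2, via Hugging Face Transformers) on a handful of phrases recorded by one user. This is not Google's training pipeline; the model choice, file names and hyperparameters are assumptions made purely to show the idea of adapting a general "base" model to a single person's voice.

```python
# Illustrative sketch: personalize a general-purpose ASR model on a few
# user-recorded phrases. Not Project Relate's actual training code.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL = "facebook/wav2vec2-base-960h"  # stand-in for a general "base" model
processor = Wav2Vec2Processor.from_pretrained(MODEL)
model = Wav2Vec2ForCTC.from_pretrained(MODEL)
model.train()

# Hypothetical personal samples: (audio file, what the user actually said).
samples = [
    ("phrase_01.wav", "TURN ON THE LIGHTS"),
    ("phrase_02.wav", "WHAT IS THE WEATHER TODAY"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for epoch in range(3):  # a tiny number of passes, just to sketch the loop
    for path, text in samples:
        waveform, sr = torchaudio.load(path)
        if sr != 16000:  # this checkpoint expects 16 kHz audio
            waveform = torchaudio.functional.resample(waveform, sr, 16000)
        inputs = processor(
            waveform.squeeze().numpy(), sampling_rate=16000, return_tensors="pt"
        )
        labels = processor.tokenizer(text, return_tensors="pt").input_ids
        loss = model(inputs.input_values, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("personalized_asr_model")  # hypothetical output directory
```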

Video: https://www.youtube.com/watch?v=EU2oCVlzEZk

"We know the community of people with speech impairments is incredibly diverse and how people will interact with Project Relate may be different," said Julie Cattiau, Google Research product manager in an email to TechCrunch. "We want to avoid assuming what our target audience needs and the best way to do that is to build our product hand in hand with the people who will be using it. By engaging in testing with an initial group of people, we can better understand how our application will work for people in their daily lives, how accurate it will be and what areas of improvement there might be, before we expand to a broader audience."

The company is recruiting a first round of real-world testers to use the app regularly. The first step will be to record a set of phrases, which will be used to tailor the speech model to their individual speech patterns. If you think this could be helpful in your everyday life, feel free to sign up as a potential volunteer, and you might help make the app better for everyone.