Sanas aims to convert one accent to another in real time for smoother customer service calls

In the customer service industry, your accent dictates many aspects of your job. It shouldn't be the case that there's a "better" or "worse" accent, but in today's global economy (though who knows about tomorrow's) it's valuable to sound American or British. While many undergo accent neutralization training, Sanas is a startup with another approach (and a $5.5 million seed round): using speech recognition and synthesis to change the speaker's accent in near real time.

The company has trained a machine learning algorithm to quickly and locally (that is, without using the cloud) recognize a person's speech on one end and, on the other, output the same words with an accent chosen from a list or automatically detected from the other person's speech.

Screenshot of the Sanas desktop application. Image Credits: Sanas.ai


It slots right into the OS's sound stack, so it works out of the box with pretty much any audio or video calling tool. Right now the company is operating a pilot program with thousands of people in locations from the U.S. and U.K. to the Philippines, India and Latin America. By the end of the year, supported accents will include American, Spanish, British, Indian, Filipino and Australian.

To tell the truth, the idea of Sanas kind of bothered me at first. It felt like a concession to intolerant people who consider their accent superior and others beneath them. Tech will fix it … by accommodating the intolerance. Great!

(Update: I want to clarify here, since some people have expressed frustration with it, that this above idea of the intolerant American consumer hanging up on someone with an accent was a gut reaction that I quickly rethought, as described below. But "bigoted" was perhaps too strong a word to employ, so I've swapped it out.)

But while I still have a little bit of that feeling, I can see there's more to it than this. Fundamentally speaking, it is easier to understand someone when they speak in an accent similar to your own. But customer service and tech support make up a huge industry, one primarily staffed by people outside the countries where the customers are. This basic communicative disconnect can be remedied in a way that puts the onus on the entry-level worker (accent reduction training) or in a way that puts it on technology (accent conversion software). Either way, the difficulty of making oneself understood remains and must be addressed — an automated system just lets it be handled more easily and allows more people to do their job.

Although the Sanas software accomplishes the fundamental concept, it's not magic — as you can tell in this clip, while the accent is removed, the character and cadence of the person's voice is only partly retained, and the result sounds considerably more artificial:

[youtube https://www.youtube.com/watch?v=ZTZ1T9VBa-Y]

But the technology is improving, and like any speech engine, the more it's used, the better it gets. For someone not used to the original speaker's accent, the American-accented version may very well be more easily understood. For the person in the support role, this likely means better outcomes for their calls — everyone wins. Sanas told me that the pilots are just starting, so there are no numbers available from this deployment yet, but testing has suggested a considerable reduction in error rates and an increase in call efficiency.

It's good enough at any rate to attract a $5.5 million seed round, with participation from Human Capital, General Catalyst, Quiet Capital and DN Capital.

"Sanas is striving to make communication easy and free from friction, so people can speak confidently and understand each other, wherever they are and whoever they are trying to communicate with," CEO Maxim Serebryakov said in the press release announcing the funding. It's hard to disagree with that mission.

While the cultural and ethical questions of accents and power differentials are unlikely to ever go away, Sanas is trying something new that may be a powerful tool for the many people who must communicate professionally and find their speech patterns are an obstacle to that. It's an approach worth exploring and discussing even if in a perfect world we would simply understand one another better.