AI & Technology in Public Speaking

Accents and AI: how speech recognition software could lead to new forms of discrimination

Anyone who has used a voice assistant such as Apple’s Siri or Amazon’s Alexa will have occasionally struggled to make themselves understood. Perhaps the device plays the wrong music, or puts unusual items on a shopping list, or emits a plaintive “didn’t quite catch that”. But for people who speak with an accent, these devices can be unusable.

The inability of speech recognition systems to understand accents found in Scotland, Turkey, the southern states of the US or any number of other places is widely documented on social media, and yet the problem persists. With the technology now spreading beyond domestic use, researchers and academics are warning that biased systems could lead to new forms of discrimination, purely because of someone’s accent.

“It’s one of the questions that you don’t see big tech responding to,” says Halcyon Lawrence, a professor of technical communication at Towson University in Maryland, who is from Trinidad and Tobago. “There’s never a statement put out. There’s never a plan that’s articulated. And that’s because it’s not a problem for big tech. But it’s a problem for me, and large groups of people like me.”

Speech recognition systems can only recognise accents they’ve been trained to understand. To learn how to interpret the accent of someone from Trinidad, Eswatini or the UAE, a system needs voice data, along with an accurate transcription of that data, which inevitably has to be done by a human being. It’s a painstaking and expensive process to demonstrate to a machine what a particular word sounds like when it’s spoken by a particular community, and perhaps inevitably, existing data is heavily skewed towards English as typically spoken by white, highly educated Americans.
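
To make the cost concrete, here is a minimal sketch of what a single labelled training example involves. The field names and schema are illustrative assumptions, not any vendor's actual format:

```python
# A hypothetical record pairing one audio clip with its human-made
# transcript, the painstaking and expensive part described above.
from dataclasses import dataclass

@dataclass
class LabelledUtterance:
    audio_path: str   # e.g. a 16kHz WAV recording of one speaker
    transcript: str   # written out by a human transcriber
    locale: str       # language/region tag for the speaker's variety

sample = LabelledUtterance(
    audio_path="clips/utterance_0001.wav",
    transcript="play the weather forecast",
    locale="en-TT",   # English as spoken in Trinidad and Tobago
)
```

Every accent a system supports needs hours upon hours of records like this, which is why coverage tends to track where the money is.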

A study called Racial Disparities in Automated Speech Recognition, published in 2020 by researchers at Stanford University, illustrates the stark nature of the problem. It analysed systems developed by Amazon, Apple, Google, IBM and Microsoft, and found that in every case the error rates for black speakers were nearly double those for white speakers. In addition, it found that the errors were caused not by grammar, but by “phonological, phonetic, or prosodic characteristics”; in other words, accent.
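
The “error rates” in such studies are typically word error rates (WER): the number of word substitutions, insertions and deletions needed to turn a system’s transcript into the reference, divided by the reference length. A minimal sketch of the computation, with an invented test sentence:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("turn the lights off", "turn delights of"))  # 0.75
```

A doubled WER means a reader relying on the transcript loses roughly twice as many words, which is the gap the study measured between black and white speakers.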

Allison Koenecke, who led the study, believes the remedy is two-fold. “It needs resources to ethically collect data and ensure that the people working on these products are also diverse,” she says. “While tech companies may have the funds, they may not have known that they needed to prioritise this issue before external researchers shone a light on it.”

Lawrence, however, believes that the failings are no accident.

“What, for me, shows big tech’s intention is when they decide to release a new accent to the market and where that is targeted,” she says. “If you plot it on a map, you can’t help but notice that the Global South is not a consideration, despite the numbers of English speakers there. So you begin to see that this is an economic decision.”

It’s not only accented English that scuppers speech recognition systems. Arabic poses a particular challenge – not simply because of its many sub-dialects, but also because of inherent difficulties such as the absence of capital letters, the recognition of proper nouns and the need to predict a word’s vowels from context. Substantial resources are being thrown at the problem, but the situation is the same as with English: large communities remain technologically disenfranchised.
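
The vowel problem is easy to see: written Arabic usually omits the short-vowel marks, so one undiacritised string can correspond to several different words. A small illustration, using three standard textbook readings:

```python
# One written form, several spoken words: a recogniser must pick the
# right reading from context alone.
readings = {
    "كَتَبَ": "kataba - he wrote",
    "كُتُب": "kutub - books",
    "كُتِبَ": "kutiba - it was written",
}
# All three collapse to the same undiacritised form: كتب
```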

Why is this of particular concern? Beyond the world of smart speakers lies a much bigger picture. “There are many higher-stakes applications with much worse consequences if the underlying technologies are biased,” says Koenecke. “One example is court transcriptions, where court reporters are starting to use automatic speech recognition technologies. If they aren’t accurate at transcribing cases, you have obvious repercussions.”

Lawrence is particularly concerned about the way people drop their accent in order to be understood, rather than the technology working harder to understand them. “Accent bias is already practised in our community,” she says. “There’s an expectation that we adapt our accent, and that’s what gets replicated in the device. It would not be an acceptable demand on somebody to change the colour of their skin, so why is it acceptable to demand we change our accents?”

Money, as ever, lies at the root of the problem. Lawrence believes strongly that the market can offer no solution, and that big tech has to be urged to look beyond its profit margin. “It’s one of the reasons why I believe that we’re going to see more and more smaller independent developers do this kind of work,” she says.

One of those developers, a British company called Speechmatics, is at the forefront, using what it calls “self-supervised learning” to introduce its speech recognition systems to a new world of voices.

“We’re training on over a million hours of unlabelled audio, and constructing systems that can learn interesting things autonomously,” says Will Williams, vice president of machine learning at Speechmatics.

The crucial point: this is voice data that hasn’t been transcribed. “If you have the right kind of diversity of data, it will learn to generalise across voices, latch on quickly and understand what’s going on.” Tested against the datasets from the Stanford study, Speechmatics has already reported a 45 per cent reduction in errors.
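
Self-supervised learning of this kind typically works by corrupting part of the raw input and training a network to reconstruct it, so the audio itself provides the training signal and no transcripts are needed. Below is a toy sketch in that spirit; nothing in it reflects Speechmatics’ actual system, and the tiny model, masking rate and reconstruction loss are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stand-in for unlabelled audio: 8 clips of 200 frames of 80-dim features.
features = torch.randn(8, 200, 80)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=80, nhead=8, batch_first=True),
    num_layers=2,
)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# Hide a random 15% of frames and train the encoder to reconstruct them.
mask = torch.rand(8, 200) < 0.15
corrupted = features.clone()
corrupted[mask] = 0.0

optimizer.zero_grad()
predicted = encoder(corrupted)
loss = ((predicted[mask] - features[mask]) ** 2).mean()  # masked frames only
loss.backward()
optimizer.step()
```

Because the training signal comes from the audio itself, any hour of recorded speech, in any accent, can contribute, which is what makes the approach attractive for widening coverage.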

An organisation called MLCommons, which counts Google and Microsoft among its more than 50 founding members, is now looking for new ways to create speech recognition systems that are accent-agnostic.

It’s a long road ahead, but Koenecke is optimistic. “Hopefully, as different speech-to-text companies decide to invest in more diverse data and more diverse teams of employees such as engineers and product managers, we will see something that reflects more closely what we see in real life.”
