I live in New York, a city I love because there is an endless variety of people to meet and places to explore. Before joining AppTek, I spent several years in management consulting at Deloitte, where I worked with clients from a range of industries, including tech. I was also particularly passionate about purpose-driven business—I helped stand up Deloitte’s Purpose Office in 2021, worked on internal diversity and equity initiatives, and served numerous nonprofits in New York and abroad. Outside of work, I love reading and writing fiction and poetry, so I’ve always been a bit of a language nerd, even before entering the HLT world.
While in consulting, I had observed the advances that tech was accelerating in every field—the exciting and the scary—and I was interested in being a positive contributor to that growth. I’m focused on AI’s ability to change people’s everyday lives for the better if we use it right. At AppTek, I enjoy helping to build technology that aids communication across different communities irrespective of language, accent, background, ability status, and more. As VP of Data Operations, I manage the data we use to train our HLT machine learning models to ensure they are unbiased and can effectively support a variety of use cases.
Just as humans develop biases by learning from the imperfect world around us, machine learning models are not inherently objective. Models are trained by feeding them a dataset of examples, and human involvement in the creation, provision, and curation of this data makes a model's predictions susceptible to bias.
HLT models are now often part of applying for jobs, navigating modern vehicles, translating in immigration proceedings, and grading students. Bias in these algorithms has widespread repercussions that must be addressed.
AI reflects the world we live in, which is a biased world. For example, most early speech corpora that major ASR systems are trained on, such as Switchboard, were skewed toward a certain demographic. TED Talks are often used in HLT training datasets too, but over half of these speeches are given by a similar demographic group. Even within a single ethnic group, region, or community, the way one part of the population pronounces words or frames their sentences can vary greatly. Without a diverse dataset, the technology cannot learn to understand different types of speakers.
There are additional factors besides diversity that lead to biased results. Take, for example, some text-generating systems that tended to make biased statements, most likely because of the discriminatory speech found in the vast amount of internet text they were trained on. Results show that after retraining with additional curated data, newer versions of text generators can be made less biased.
So outside of the science, another critical question our society needs to answer is, “How do we define fairness?” Most of us can agree it’s not the current state, but we must also decide what fair outcomes look like and how procedures in and around the AI should be changed accordingly.
For speech recognition, high-quality datasets that represent all users are key for producing equitable outcomes. Training on voices from different backgrounds prepares the AI to perform comparably across demographic groups. Characteristics like race, gender, age, where you grew up, ethnicity, accent, atypical speech—all of that makes a difference in the way people speak.
AppTek’s AI Bias Correction Datasets (ABCDs) methodology is designed specifically to enrich AI models with demographically diverse acoustic features. AppTek assesses representation of gender, accent/dialect, age, race, education, and region for each project to plan the right demographic mix. Using AppTek’s 4D for HLT approach (Language/Dialect, Demographics, Domain, and Channel), we cut this mix across different domains and channels so that the model learns from every type of situation.
Further, we manage large-scale projects to collect voices from a specific accent or region, utilizing our diverse, global, distributed workforce and investing in targeted campaigns for characteristics that are rarer or harder to find. AppTek’s scientists also test the models on a diversity of voices and continuously refine them to improve the quality of the outcome. For example, our data teams drive collections and test sets for dozens of English accents, including regional US dialects as well as UK, Irish, Welsh, Scottish, Australian, South African, Chinese, Indian, and other accents, some of which break down further into regional variations such as Boston vs. Philadelphia vs. New York. We compare the Word Error Rate (WER) for each accent to the general WER, and if it’s not up to par, we know we need to include more voices with that particular accent. With respect to gender, a recent evaluation of the model on a small English news test set resulted in comparable WERs for female (6.3%) and male (6.7%) speakers.
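The per-accent comparison described above can be sketched as a small script: compute WER for each demographic group and flag groups that lag the overall average. This is a minimal illustration, not AppTek's evaluation pipeline, and the group labels and transcripts in the test are placeholders.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic-programming table over word sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)


def wer_by_group(samples):
    """Average WER per group from (group, reference, hypothesis) triples."""
    scores = {}
    for group, ref, hyp in samples:
        scores.setdefault(group, []).append(wer(ref, hyp))
    return {g: sum(v) / len(v) for g, v in scores.items()}
```

Groups whose average WER sits well above the general WER would then be earmarked for additional voice collection.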
Speech recognition is trained with acoustic models and language models. Acoustic models capture the voices of individuals, and language models absorb lexicon and sentence structure. We train both models on a variety of diverse data sets that encompass a wide range of domains, demographics, dialects, etc.
Machine translation is trained using parallel corpora of source texts and their translations. One of the most common bias issues in MT is related to gender. For example, in Czech, the word “doctor” has both male and female forms, and a simple phrase such as “I’m happy” is translated into Greek differently depending on the gender of the speaker.
Some MT providers have overused male pronouns in translations, even when the text referred to a woman. The output also skewed masculine for words like “strong” or “doctor” and feminine for words like “nurse” or “beautiful.” The traditional workaround is to produce two translations, one female and one male, for the user to select from. AppTek’s MT solution instead works to remove the opportunity for bias and identify gender more accurately: the system uses the extended context of the document to determine the actual gender of the speaker or subject, and the model can also be fed speaker-gender metadata.
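AppTek has not published its exact mechanism, but a common published way to feed speaker-gender metadata to a neural MT model is to prepend a pseudo-token to the source sentence at both training and inference time, so the decoder can condition its word choices on it. A minimal sketch, where the `<female>`/`<male>` token names are illustrative assumptions:

```python
def tag_source(sentence, speaker_gender=None):
    """Prepend a gender pseudo-token so the MT model can condition on it.

    The tokens "<female>" and "<male>" are hypothetical; a real system
    would add them to the model's vocabulary and tag its parallel
    training corpus the same way.
    """
    if speaker_gender in ("female", "male"):
        return f"<{speaker_gender}> {sentence}"
    # Without metadata, the model must rely on document context alone.
    return sentence
```

So for the Greek example above, `tag_source("I'm happy", "female")` yields `<female> I'm happy`, steering the model toward the feminine translation without forcing the user to pick between two outputs.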
I joined AppTek because I am passionate about crafting a fairer, more connected world, and my belief that HLT is key to unlocking a better future for all has only grown. Bias might be hard to remove from algorithms, but it’s even harder to remove from humans. If we can detect bias, define how to fix it, and implement those fixes in a machine learning model, we can improve the decision that is being made at the end of the day. At AppTek, we frame our methodologies around expanding access to everyone. I believe that if we work together to harness AI fairly and effectively, we can build a society where everyone, regardless of language, ability, gender, race, etc., can benefit equitably.
AppTek is a global leader in artificial intelligence (AI) and machine learning (ML) technologies for automatic speech recognition (ASR), neural machine translation (NMT), natural language processing/understanding (NLP/U) and text-to-speech (TTS) technologies. The AppTek platform delivers industry-leading solutions for organizations across a breadth of global markets such as media and entertainment, call centers, government, enterprise business, and more. Built by scientists and research engineers who are recognized among the best in the world, AppTek’s solutions cover a wide array of languages/dialects, channels, domains and demographics.