Company Overview


About AppTek

AppTek has always been in the business of connecting people regardless of language. We are pioneers and leaders in automatic speech recognition (ASR), machine translation (MT), machine learning (ML), natural language understanding (NLU) and artificial intelligence (AI). Founded in 1990, AppTek employs one of the most agile, talented teams of ASR, MT and NLU PhD scientists and research engineers in the world. Through our advanced research in speech recognition, machine translation and artificial intelligence, we have solved many challenging problems in delivering human-quality transcription, language understanding and translation accuracy. Our people are among the language technology and machine learning industry’s premier experts. Our long-standing affiliations with the world’s leading human language technology universities are central to our continuous introduction of new theories and solutions for automating recognition, translation and communication. Our 30-year history of achieving performance goals with customers across government, global commerce, call centers and media comes from understanding their problems and applying the best technology solutions.

Company History and Timeline

1990
Company founded to enable global multilingual commerce applications and communications
2001
Began providing critical support to the U.S. Government with advanced text analytics and machine translation software
2002
Patent filed for Automatic Speech Recognition method
2009
U.S. Government awards AppTek the first multi-speaker, multilingual interview indexing and analysis program (Talk2Me)
2014
eBay acquires full rights to AppTek's Hybrid Machine Translation Platform for cross-border trade
2015
Patent for Keyword Speech Recognition
2015
Launch of new AI platform for ASR and NMT
2016
Two Patents for Deep Neural Network Model Advancements
2018
Patent for Audio Recognition of Keywords
2019
AppTek Wins Two 2019 SpeechTEK People’s Choice Awards
2019
Hermann Ney, Science Director, awarded the IEEE James L. Flanagan Award for pioneering, lifelong advancements in speech technology

Recent Academic Research and Publications

LSTM Language Models for LVCSR in First-Pass Decoding and Lattice-Rescoring

July 2019
Eugen Beck | Wei Zhou | Ralf Schlüter | Hermann Ney

LSTM based language models are an important part of modern LVCSR systems as they significantly improve performance over traditional backoff language models. Incorporating them efficiently into decoding has been notoriously difficult. In this paper we present an approach based on a combination of one-pass decoding and lattice rescoring. We perform d...

View Research
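
The paper above combines one-pass decoding with lattice rescoring. As a rough, simplified illustration of the rescoring idea, the sketch below scores n-best ASR hypotheses with a small LSTM language model and interpolates the LM score with each hypothesis's acoustic score; the n-best simplification, model sizes and interpolation weight are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: rescoring n-best ASR hypotheses with an LSTM language model.
# A simplification of lattice rescoring; vocabulary, sizes and weights are illustrative.
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def sequence_log_prob(self, token_ids: list) -> float:
        """Sum of log P(w_t | w_<t) over the sequence (teacher forcing)."""
        ids = torch.tensor(token_ids).unsqueeze(0)        # (1, T)
        hidden, _ = self.lstm(self.embed(ids[:, :-1]))    # predict each next token
        log_probs = torch.log_softmax(self.out(hidden), dim=-1)
        target = ids[:, 1:]
        return log_probs.gather(2, target.unsqueeze(-1)).sum().item()

def rescore(nbest, lm, lm_weight: float = 0.5):
    """nbest: list of (token_ids, acoustic_score) pairs.
    Returns the best hypothesis after interpolating acoustic and LM scores."""
    scored = [(acoustic + lm_weight * lm.sequence_log_prob(ids), ids)
              for ids, acoustic in nbest]
    return max(scored)

if __name__ == "__main__":
    lm = LSTMLanguageModel(vocab_size=1000)
    nbest = [([1, 42, 7, 2], -12.3), ([1, 42, 9, 2], -12.1)]  # toy hypotheses
    best_score, best_ids = rescore(nbest, lm)
    print(best_score, best_ids)
```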

Effective Cross-lingual Transfer of Neural Machine Translation Models without Shared Vocabularies

July 2019
Yunsu Kim | Yingbo Gao | Hermann Ney

Transfer learning or multilingual modeling is essential for low-resource neural machine translation (NMT), but applicability is limited to cognate languages that share vocabularies. This paper shows effective techniques to transfer a pre-trained NMT model to a new, unrelated language without shared vocabularies. We relieve the vocabulary mismatch by using cross-lingual word embeddings, train a more language-agnostic encoder by injecting artificial noise, and generate synthetic data easily from the pre-training data without back-translation...

View Research
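
One ingredient the abstract above mentions is training a more language-agnostic encoder by injecting artificial noise into the source side. The sketch below shows one common form of such noise (word dropout plus limited local reordering); the function name and parameters are illustrative assumptions rather than the paper's exact recipe.

```python
# Minimal sketch of source-side noise injection of the kind the abstract mentions
# (word dropout plus limited local reordering); parameters are illustrative only.
import random

def inject_noise(tokens, drop_prob=0.1, max_shuffle_distance=3, seed=None):
    """Return a noised copy of a source token sequence."""
    rng = random.Random(seed)
    # 1) randomly drop words (keep at least one token)
    kept = [t for t in tokens if rng.random() > drop_prob] or tokens[:1]
    # 2) lightly reorder: each token may move at most `max_shuffle_distance` positions
    keys = [i + rng.uniform(0, max_shuffle_distance) for i in range(len(kept))]
    return [tok for _, tok in sorted(zip(keys, kept), key=lambda p: p[0])]

if __name__ == "__main__":
    src = "the quick brown fox jumps over the lazy dog".split()
    print(inject_noise(src, seed=0))
```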

Learning Bilingual Sentence Embeddings via Autoencoding and Computing Similarities with a Multilayer Perceptron

June 2019
Yunsu Kim | Hendrik Rosendahl | Nick Rossenbach | Hermann Ney

We propose a novel model architecture and training algorithm to learn bilingual sentence embeddings from a combination of parallel and monolingual data. Our method connects autoencoding and neural machine translation to force the source and target sentence embeddings to share the same space without the help of a pivot language or an additional transformation....

View Research
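
As a rough sketch of the similarity-scoring half described above, the snippet below implements a small multilayer perceptron that takes a pair of (source, target) sentence embeddings and outputs a similarity score; the input feature construction and layer sizes are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch: an MLP that scores similarity of a pair of sentence embeddings.
# Input features (concatenation, elementwise product, absolute difference) and
# layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SimilarityMLP(nn.Module):
    def __init__(self, embed_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4 * embed_dim, hidden_dim),  # [src; tgt; src*tgt; |src-tgt|]
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, src_emb: torch.Tensor, tgt_emb: torch.Tensor) -> torch.Tensor:
        features = torch.cat(
            [src_emb, tgt_emb, src_emb * tgt_emb, (src_emb - tgt_emb).abs()], dim=-1
        )
        return torch.sigmoid(self.net(features)).squeeze(-1)  # similarity in (0, 1)

if __name__ == "__main__":
    model = SimilarityMLP()
    src, tgt = torch.randn(8, 512), torch.randn(8, 512)  # a batch of embedding pairs
    print(model(src, tgt).shape)  # torch.Size([8])
```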

Language Modeling with Deep Transformers

May 2019
Kazuki Irie | Albert Zeyer | Ralf Schlüter | Hermann Ney

We explore multi-layer autoregressive Transformer models in language modeling for speech recognition. We focus on two aspects. First, we revisit Transformer model configurations specifically for language modeling. We show that well configured Transformer models outperform our baseline models based on the shallow stack of LSTM recurrent neural network layers....

View Research
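
For readers who want a concrete starting point, the sketch below shows a minimal autoregressive Transformer language model of the kind the abstract discusses, built from PyTorch's standard encoder layers with a causal mask; the depth, widths and learned positional embedding are illustrative choices, not the paper's configuration.

```python
# Minimal sketch of an autoregressive Transformer language model using PyTorch's
# built-in encoder layers with a causal (subsequent-position) mask.
import torch
import torch.nn as nn

class TransformerLM(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 512, n_heads: int = 8,
                 n_layers: int = 12, max_len: int = 512):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=2048,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        batch, seq_len = token_ids.shape
        positions = torch.arange(seq_len, device=token_ids.device)
        x = self.token_embed(token_ids) + self.pos_embed(positions)
        # causal mask: position t may only attend to positions <= t
        causal = torch.triu(torch.full((seq_len, seq_len), float("-inf"),
                                       device=token_ids.device), diagonal=1)
        hidden = self.encoder(x, mask=causal)
        return self.out(hidden)  # (batch, seq_len, vocab) next-token logits

if __name__ == "__main__":
    lm = TransformerLM(vocab_size=10000)
    logits = lm(torch.randint(0, 10000, (2, 20)))
    print(logits.shape)  # torch.Size([2, 20, 10000])
```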

Analysis of Deep Clustering as Preprocessing for Automatic Speech Recognition of Sparsely Overlapping Speech

May 2019
Tobias Menne | Ralf Schlüter | Hermann Ney

Significant performance degradation of automatic speech recognition (ASR) systems is observed when the audio signal contains cross-talk. One of the recently proposed approaches to solve the problem of multi-speaker ASR is the deep clustering (DPCL) approach. Combining DPCL with a state-of-the-art hybrid acoustic model, we obtain a word...

View Research
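
As a rough illustration of how deep clustering (DPCL) can act as a preprocessing step, the sketch below takes per-time-frequency-bin embeddings (as produced by a separately trained DPCL network, not shown), clusters them with k-means, and builds one binary mask per speaker; the shapes and the choice of scikit-learn's KMeans are illustrative assumptions.

```python
# Minimal sketch of the mask-estimation step in deep clustering (DPCL): cluster
# per-time-frequency-bin embeddings and build one binary mask per speaker.
import numpy as np
from sklearn.cluster import KMeans

def masks_from_embeddings(embeddings: np.ndarray, n_speakers: int = 2) -> np.ndarray:
    """embeddings: (T, F, D) array of per-bin embeddings.
    Returns binary masks of shape (n_speakers, T, F)."""
    t, f, d = embeddings.shape
    labels = KMeans(n_clusters=n_speakers, n_init=10).fit_predict(
        embeddings.reshape(t * f, d)
    )
    labels = labels.reshape(t, f)
    return np.stack([(labels == k).astype(np.float32) for k in range(n_speakers)])

if __name__ == "__main__":
    dummy = np.random.randn(100, 257, 40)           # 100 frames, 257 freq bins, D=40
    masks = masks_from_embeddings(dummy)
    spectrogram = np.abs(np.random.randn(100, 257))  # placeholder magnitude spectrogram
    separated = masks * spectrogram                  # one masked spectrogram per speaker
    print(masks.shape, separated.shape)              # (2, 100, 257) (2, 100, 257)
```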

Weakly Supervised Learning with Multi-Stream CNN-LSTM-HMMs to Discover Sequential Parallelism in Sign Language Videos

April 2019
Oscar Koller | Necati Cihan Camgoz | Hermann Ney | Richard Bowden

In this work we present a new approach to the field of weakly supervised learning in the video domain. Our method is relevant to sequence learning problems which can be split up into sub-problems that occur in parallel. Here, we experiment with sign language data. The approach exploits sequence constraints within each independent stream and combines them ....

View Research
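
As a loose sketch of the multi-stream CNN-LSTM front end described above (without the HMM alignment stage), the snippet below gives each stream its own CNN and LSTM producing frame-level label posteriors and combines the streams by averaging their log-posteriors; all architecture choices are illustrative assumptions.

```python
# Minimal sketch of a multi-stream CNN-LSTM front end: each stream (e.g. full frame,
# hands, mouth) gets its own CNN + LSTM producing frame-level label posteriors,
# which are then averaged across streams. The HMM stage is not shown.
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """CNN over each video frame followed by an LSTM over time, for one stream."""
    def __init__(self, n_classes: int, hidden_dim: int = 256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(64, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, 64)
        hidden, _ = self.lstm(feats)
        return torch.log_softmax(self.out(hidden), dim=-1)  # (B, T, n_classes)

class MultiStreamModel(nn.Module):
    def __init__(self, n_streams: int = 3, n_classes: int = 1200):
        super().__init__()
        self.streams = nn.ModuleList([StreamEncoder(n_classes) for _ in range(n_streams)])

    def forward(self, stream_inputs):
        # combine streams by averaging frame-level log-posteriors
        return torch.stack([enc(x) for enc, x in zip(self.streams, stream_inputs)]).mean(0)

if __name__ == "__main__":
    model = MultiStreamModel()
    inputs = [torch.randn(2, 16, 3, 64, 64) for _ in range(3)]  # 3 streams, 16 frames each
    print(model(inputs).shape)  # torch.Size([2, 16, 1200])
```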
View More Academic Research
30-Year Leaders in Speech Technology
ABOUT APPTEK

AppTek provides an automatic speech recognition, machine translation and natural language understanding platform built on artificial intelligence and machine learning for organizations in a variety of markets, such as media and entertainment, call centers, government, enterprise business and others across the globe. Available via the cloud or on-premises, AppTek delivers the highest-quality real-time streaming and batch speech technology solutions in the industry. Featuring scientists and research engineers who are recognized among the best and most experienced in the world, the company’s solutions cover a wide array of languages, dialects and channels.
