Unveiling Big Breakthroughs at NAB 2019
We’re very excited for next week’s NAB Show in Las Vegas. AppTek will be showcasing some ground-breaking advances that will bring major benefits to the media and entertainment community.
New Levels of Accuracy in Live Automated Closed Captioning
By incorporating breakthrough advances in live-streaming Automatic Speech Recognition, and leveraging more than thirty years of speech recognition experience, our live closed captioning appliance achieved unmatched results in an independent industry evaluation. The evaluation scored multiple factors in an overall quality metric, including accuracy, punctuation, diarization and word omissions. Our solution ranked highest among the automated systems tested, in some cases even surpassing the output of human captioners.
It’s recently been adopted by CKSA-DT television of Lloydminster, Canada, which chose us for overall quality, speed and exceptional accuracy, with lower error rates and fewer word omissions. We achieved this result by leveraging our unified neural network platform to apply features for noise adaptation, vocabulary normalization, punctuation prediction and speaker separation. These enable functionally readable and understandable captions.
Beyond out-of-the-box performance, our solution will help broadcasters with real-time closed captioning by using a neural network architecture for continuous learning and improvement. Highly accurate closed captioning can now be available any time of day or night, should it be needed outside of normal news broadcast hours. And, using the AppTek platform enables the media and entertainment industry to achieve exponential cost benefits, which matter a lot in the current budget environment.
Subtitling Neural Machine Translation (NMT) for Increased Accuracy, Performance and Workflow Optimization
We’ll also be revealing significant advances in subtitling, showcasing domain customization, improved subtitle segmentation and document level translation capabilities that use our Neural Machine Translation (NMT) technology.
We’ve developed a model that overcomes subtitling segmentation challenges that until now have impeded broadcasters’ and content owners’ widespread adoption of automation to meet their growing subtitling requirements. Our model unifies neural network boundary predictions and handcrafted rules, so that subtitle segmentation performance, typically based solely on speaker pauses, can now be based on semantic units. We maintain these units largely intact in the same subtitle unit, thus reducing the post-editing effort required to bring ASR and MT outputs to the publishable quality level that media and entertainment clients require.
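The hybrid approach described above can be pictured with a minimal sketch. This is purely illustrative and not AppTek's actual model: it assumes a hypothetical neural component that assigns each word a boundary probability, and combines those predictions with a handcrafted character-limit rule so that breaks fall on semantic boundaries whenever possible.

```python
# Illustrative sketch of hybrid subtitle segmentation: neural boundary
# probabilities (hypothetical here) combined with a handcrafted rule.

MAX_CHARS = 42  # a common per-line subtitle limit (illustrative)

def segment(words, boundary_probs, threshold=0.5, max_chars=MAX_CHARS):
    """Split words into subtitle units, breaking where the model
    predicts a semantic boundary, but never exceeding the
    handcrafted character limit."""
    units, current, length = [], [], 0
    for word, prob in zip(words, boundary_probs):
        if current and length + 1 + len(word) > max_chars:
            # Hard rule: the character limit forces a break.
            units.append(" ".join(current))
            current, length = [], 0
        current.append(word)
        length += len(word) + (1 if length else 0)
        if prob >= threshold:
            # Soft rule: the model marks a semantic boundary here.
            units.append(" ".join(current))
            current, length = [], 0
    if current:
        units.append(" ".join(current))
    return units

words = ["The", "weather", "is", "nice", "today", "so", "we", "walked"]
probs = [0.1, 0.1, 0.1, 0.2, 0.9, 0.1, 0.1, 0.3]
print(segment(words, probs))
# The high-probability boundary after "today" keeps the two clauses
# intact as separate subtitle units.
```

In a pause-only system, a break could land mid-clause; weighting the neural boundary score keeps each semantic unit together, which is what reduces the post-editing effort.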
With the growth of OTT platforms, more audiovisual content than ever before is distributed around the globe. This requires new infrastructure capable of increasing subtitler productivity and improving project turnaround times in the localization workflow. As a result, we’re engaging in further cutting-edge machine translation research. Our scientists have developed the first commercial application of document-level translation, utilizing context from previous subtitles to generate the machine translation for a given sentence in a subtitle file. MT systems that translate individual sentences in isolation are known to make errors in word agreement and gender, which can be resolved automatically only with this longer context.
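One common way to supply that longer context, sketched below, is to prepend the preceding subtitles to the model input, separated by a marker token. This is a generic illustration, not AppTek's actual architecture; the `<sep>` token and `build_input` helper are hypothetical.

```python
# Illustrative sketch of context-aware input construction for
# document-level MT: the preceding subtitles are prepended so the
# model can resolve agreement and gender across sentence boundaries.

SEP = " <sep> "  # hypothetical context-separator token

def build_input(subtitles, index, context_size=2):
    """Return the model input for subtitle `index`: up to
    `context_size` preceding subtitles, then the sentence itself."""
    context = subtitles[max(0, index - context_size):index]
    return SEP.join(context + [subtitles[index]])

subs = [
    "The doctor arrived late.",
    "She apologized to everyone.",
    "Then the meeting began.",
]
# For the second subtitle, the model also sees the preceding sentence,
# so a pronoun like "She" can be translated with the correct agreement
# in languages that mark gender.
print(build_input(subs, 1))
```

Translated in isolation, "She apologized to everyone." gives the model no referent for "She"; with the previous subtitle attached, the antecedent is available at translation time.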
Both the live closed captioning and subtitling advances represent big leaps forward for the media and entertainment industry. We’re thrilled to be leading the development of this ground-breaking technology, and can’t wait to show it to you. Come visit us at the NAB Show 2019 in Las Vegas, BOOTH: South Hall SL14917.