Spotlight on MT: An interview with Dr. Evgeny Matusov

November 22, 2021

By Dr. Yota Georgakopoulou

With the use of machine translation (MT) now widespread in the localization industry and making headway in the creative industries as well, the question of whether machines will eventually replace translators has probably concerned every single translator at some point. As a translator myself, I hold this topic very close to my heart, so I jumped at the opportunity to interview AppTek’s Lead Science Architect for MT, Dr. Evgeny Matusov, to find out more about the current state of MT, the exciting new developments underway and what he thinks about the future of translation and translators.


Q: Tell me a bit about yourself – where did you grow up, what languages do you speak?

A: I grew up in Moscow, so I speak Russian, but my family moved to Germany after I finished school and that’s where I learned German. I also spent the last year of high school in the USA as part of a US Congress program called the “Freedom Support Act”, which provided funding to students who had passed comprehensive exams in English. Approximately 2,000 to 3,000 students benefited from this program every year and were hosted by volunteer families spread throughout the USA.

Dr Evgeny Matusov

I applied for this program to improve my English by practicing it with native speakers, to get acquainted with a different culture and way of life, and to meet new people. It was a great experience and very different to my life in Moscow, as I was hosted by a family that lived in a very rural area of Kentucky. The experience at school was also very different: on the one hand, I found all the science classes very easy compared to what I was used to, but on the other, I noticed a big contrast once I joined the academic team. Its students were highly competent in a lot of subjects despite attending a public school – I was fascinated by their motivation for self-study and self-improvement.

Q: It was not easy for me to decide what to study when I was at high school. Did you always want to study computer science?

A: While at school I was torn between the humanities and the sciences, as I always loved language – I had even tried my hand at writing poems from the age of ten. My school years were also the time when computers came into our lives, and I was fascinated by them; plus, this was a new, developing area with better employment opportunities. While still at high school, I had heard about the first MT products, which were rule-based back then, so I thought that perhaps I could combine language with computer science. I tried some of them out and saw immediately that their performance was not very good, but it seemed to me like an interesting problem to work on.

Nonetheless, I did not imagine at the time that this is what I would end up doing professionally, nor was I aware of this specialization when I was choosing universities in Germany. I applied for the computer science program at RWTH Aachen because of the university’s reputation, and it was only after I got there that I learned about Prof. Dr.-Ing. Hermann Ney’s chair in human language technologies. This piqued my interest, so during my master’s degree I focused on natural language processing and decided to stay on for a PhD in machine translation with him.

Q: What are your interests and hobbies?

A: Translation has been a hobby of mine ever since I was a teenager. Especially after I came back from the USA, I tried translating Russian rock songs into English in a way you could sing them, with the same rhyme and rhythm. Over the years I even compiled a short book of my translations of songs and poems by popular Russian poets into English, approximately forty different poems from the classics of the 19th century to modern poetry.

This is of course very different to what I do today with machine translation. Computers are still very far from translating such content; there have been attempts and some papers published on the MT of poetry, but I find that translating poems is a completely different art.

Q: This must mean you can relate to the issues translators complain about with respect to MT and the translation of humor, cultural references, etc. You say that computers are very far from translating poetry, but can you see it happening at some point in time?

A: This is a question people ask all the time: “Will machines replace translators at some point?” The answer is related to the more general question about the so-called ‘singularity’: “Will computers reach the same level of intelligence as humans, and is the singularity near?” I am personally skeptical about it, and I don’t see it happening in the observable future – during my lifetime. Having said this, the technology improves all the time, and it may indeed be the case that for some types of content the quality of MT becomes so good that it will not be easy to tell whether a text has been translated by a human or a machine. But I think there will still be plenty of other, more creative types of content where this will not be the case.

Q: So, you don’t believe in singularity?

A: I don’t believe in an abrupt switch where from one day to the next artificial intelligence overtakes human intelligence. Perhaps it will happen over a longer period of tens or even hundreds of years and we will not even notice it, but I don’t believe there is a single point in time when it will happen overnight.

Q: It sounds like older translators don’t need to worry about their jobs before they retire, then, but should younger translators, the ones in their 20s, worry about having a job in the future?

A: MT is a technology and, as with all technologies since the industrial revolution, repetitive, simple tasks are gradually taken over by machines and people need to adjust to that, while the more creative tasks, the ones that require the most intelligence, will be around for a while. In the case of translation, this means that texts like software manuals, which use standard language, will be translated fully by machines very soon. A good example is the UN documentation. It contains a very large vocabulary and covers a lot of different topics, but the language is very factual, not figurative at all. For such texts we expect that in the next decade or so, for at least the main languages, there will be no observable difference in quality whether you have them translated by a translator or by a machine.

Q: In your career you’ve had jobs both in the industry and on the research side. Why is research in MT important to you?

A: Research has always been part of my employment choices. I’ve been working in applied research, where there isn’t such a big leap from the models and algorithms we develop in the lab to applications in real life. Even while at university, we built prototypes that were directly available for use by the public or the project recipients. It is the same with my work in industry – it is still applied research. We build models which are fast enough to work in production.

Research in MT is important to me because it helps people overcome language barriers and can make new information sources available when they otherwise wouldn’t be. We see this all the time, even with YouTube videos and machine-translated subtitles. One cannot expect that all existing content that is of interest to someone will be translated or subtitled professionally. People’s interests have become very specialized, and there can be a lot more content of a certain type in some languages than in others, so this content can be made available to interested parties with the help of MT. This is what motivates scientists, people like me, to work on MT.

Machine translation is also a very hard problem, so it is quite challenging to come up with new ideas and new algorithms to solve it. It simply is a very interesting, mind-boggling thing to work on.

Q: Was there a turning point in your life with respect to your development in the MT field?

A: There were two defining points in my life, exactly a decade apart. The first was in 2005, during the second year of my PhD, when a lot of new ideas came to my mind, which resulted in multiple publications. At the same time, I started working on projects like TC-Star, where it was possible to realize some of these ideas, like the combination of different MT systems from different companies. This was an approach that was already working quite well in automatic speech recognition (ASR) by that point, but in MT it was problematic because of all the reordering issues. Offering a solution to this problem ended up forming the core of my PhD thesis, and I received a lot of citations as a result. That was a breakthrough year for me, after which I felt very confident that I was in the right place, doing the right thing.

Ten years later, in 2015, when neural MT emerged, I had to ask myself again if I was in the right place. We had all learned about neural networks at university, but training them required very large computational resources that nobody had access to until then. Not only did we have to learn new concepts, but the neural approach also requires a complete mindset shift in terms of how things work and what we should do for the neural system to perform better. Everything we were doing in the statistical phrase-based world was completely different. Even things like how to adapt a system, which seem very straightforward now, looked like hard problems to solve in the neural world. The approach of starting with the trained model and continuing to train it was not possible with phrase-based systems. In the past we had a model, then we trained an adapted model, and then we tried to mix them on the basis of weights and interpolations. In the neural world, the model continues to learn with new data, like a human, and although it is really simple, people did not realize it until it was actually done. There were other things like that. To manage this change, continue to support the old phrase-based systems that were still in production, and at the same time think about the future was a challenge and a life-turning moment for me.
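To make the “continue training” idea concrete, here is a minimal, self-contained sketch of domain adaptation by continued training. The tiny model and random “in-domain” data are placeholders invented purely for illustration; a real NMT system would be a Transformer trained on parallel text, and this is not AppTek’s actual pipeline.

```python
# Sketch of neural-MT domain adaptation by continued training ("fine-tuning").
# The toy model and random data below are stand-ins for illustration only.
import torch
import torch.nn as nn

VOCAB = 1000

# Stand-in for a generic NMT model that has already been trained on broad data.
model = nn.Sequential(nn.Embedding(VOCAB, 64), nn.Linear(64, VOCAB))

# Stand-in for a small in-domain parallel corpus (source ids, target ids).
src = torch.randint(0, VOCAB, (256, 20))
tgt = torch.randint(0, VOCAB, (256, 20))

# Continue training the *existing* model on the new data with a small learning
# rate, so it adapts to the domain without forgetting what it already knows.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(3):
    optimizer.zero_grad()
    logits = model(src)                              # (batch, length, vocab)
    loss = loss_fn(logits.reshape(-1, VOCAB), tgt.reshape(-1))
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "adapted_model.pt")   # the domain-adapted model
```

With phrase-based systems, by contrast, adaptation meant training a separate model and interpolating the two, exactly the older recipe described above.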

Q: It sounds like scientists had to reinvent themselves in this neural paradigm, almost like how translators need to reinvent themselves with this new piece of technology that has entered their working lives.

A: Yes, it was a reinvention indeed. Now I’m looking forward to 2025 to see what happens then!


The next part of our ‘Spotlight on MT’ article will delve deeper into recent advances in MT technology and what translators in the creative industries can expect from it in the short term.
