Despite humanity's growing interconnection, language misunderstandings remain numerous. Even if crucial economic and political meetings are staffed with interpreters, and most tourist areas default to English, other environments are still plagued by language barriers. Doctors' offices, for example, are sometimes places where misunderstandings can lead to tragic losses. Likewise, interactions between tourists and locals can be hampered by the difficulty of finding a common language. Over the last few years, however, many technology-based solutions have emerged to enable understanding across languages. But are we still light-years away from creating Star Trek's universal translator?

The Assistant

On the 8th of January, Google announced that Google Assistant, the American company's voice assistant, would soon be able to act as an interpreter. The "interpreter mode" function should be able to provide live translation across 27 languages, including English, French and Spanish, but also Thai, Romanian and Slovak.

In concrete terms, this should allow two people who don't speak the same language to communicate through Google Assistant. All they need to do is take turns speaking; after automatically detecting which languages are being spoken, Google Assistant renders each speaker's words in the other language, both aloud and in writing.
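To give a rough idea of the turn-taking logic described above, here is a minimal sketch in Python. It is not Google's code: transcribe(), detect_language(), translate() and speak() are hypothetical placeholders standing in for the speech and translation services involved.

```python
# A minimal sketch of the interpreter-mode loop described above.
# transcribe(), detect_language(), translate() and speak() are assumed
# helpers, not Google's actual API.

def interpreter_turn(audio, lang_a: str, lang_b: str) -> str:
    """Handle one spoken turn between two participants."""
    text = transcribe(audio)                   # speech -> text (assumed helper)
    source = detect_language(text)             # work out which language was spoken
    target = lang_b if source == lang_a else lang_a
    translated = translate(text, source=source, target=target)
    speak(translated, lang=target)             # read the translation aloud
    return translated                          # also displayed on screen as text
```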

Wired reports that in the coming months this function should be made available in select hotels in San Francisco, New York, and Las Vegas, and will then roll out on the Google Home Hub and Google Home speakers. The technology is not yet available in a portable form. In other words, a tourist or business traveller abroad can only hope to take advantage of it once they are in a place equipped with Google Assistant.

The function seems promising and, once perfected, could notably win over the tourism and hotel sectors. The American site The Verge sent a team to a hotel to test the interpreter's Mandarin-to-English skills. The video they made shows the limits of the feature, which sometimes detects the wrong language and guesses too much; the delay between each sentence is also quite long, which hinders smooth conversation.

However, Wired does remind us that this function is intended more as a tool for the service industry than as an assistant for conversations between several individuals.

In the earpiece

While Google and its Assistant may one day make travellers' and tourists' lives easier, other solutions could already facilitate conversation between people who don't share a language. In fact, several companies have developed earpieces or earphones equipped with a quasi-instantaneous translation function.

According to Les Echos, three companies dominate this new market: Google, of course, with its Pixel Buds earphones; Waverly Labs, creator of the Pilot earphones; and Mymanu, which designed the Clik earphones. All of these solutions require a smartphone and an internet connection, because they rely on apps that pair with their respective earphones. A major constraint, according to The Economist journalist Leo Mirani, is that each person in the conversation needs to own the earphones and a smartphone to communicate.

Sciences et Avenir explains how this "sci-fi fantasy" operates; for now it only works within the framework of a two-person conversation. First, the earphones, which have a built-in microphone, pick up the speaker's voice. Thanks to AI, notably in the fields of voice recognition and natural language processing, they then transmit the speech to a translation app (Google Translate for Pixel smartphones, or another dedicated system for the other earphones).

The translation is then converted into audio and played back through the earphones. The process generally takes a few seconds. A few errors are to be expected (notably with words that carry more than one meaning), but these tools are constantly being improved: they use deep learning to make translations progressively more reliable, helped along by user feedback.
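As an illustration of the chain these earphones rely on, here is a schematic sketch of the succession of steps the article describes. It is not any vendor's actual code: recognise_speech(), translate() and synthesise_speech() are assumed stand-ins for the cloud or on-device services involved.

```python
import time

def translate_utterance(mic_audio: bytes, source_lang: str, target_lang: str) -> bytes:
    """One pass through the earpiece pipeline: voice in, translated audio out."""
    start = time.monotonic()
    text = recognise_speech(mic_audio, lang=source_lang)          # built-in microphone input
    translated = translate(text, source=source_lang,              # phone app + internet connection
                           target=target_lang)
    audio_out = synthesise_speech(translated, lang=target_lang)   # audio sent back to the earphone
    print(f"round trip: {time.monotonic() - start:.1f}s")         # typically a few seconds
    return audio_out
```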

Sign-io

However, language barriers continue to affect a part of the population that the technologies developed by Google, Waverly Labs, and Mymanu cannot help: people who are deaf or hard of hearing. They use sign language, which is a non-spoken language and far from universal, and even though deafness is the most common disability at birth in France, sign language is rarely learned by hearing people. Beyond workshops on sign language around the world, how should we remedy a problem that excludes deaf and hard-of-hearing people?

Roy Allela, a 25-year-old Kenyan, has offered an innovative answer: smart gloves that turn sign language into speech. The Kenyan site Pulse explains how the technology works: via sensors placed along the fingers, the Sign-io gloves track finger position, which allows them to recognise the signs the wearer is making.

These signs are then transmitted via Bluetooth to an app that can be downloaded on any phone, which in turn converts them into speech. The translation is accurate in 93% of cases, according to the NairobiNews website, which explains that Roy Allela created the tool for his six-year-old niece. Born deaf, she found it hard to communicate with those around her. University of Washington students had already developed a similar technology in 2016, which used a computer to translate American Sign Language into spoken English.
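The recognition step can be pictured with a toy sketch: finger-flex readings are compared against calibrated templates, and the closest match is taken as the sign. This is not Sign-io's actual firmware; the sensor values, the templates and the speak() helper are invented for the example.

```python
# Toy illustration of the glove's principle: five flex-sensor readings
# (0 = finger straight, 1 = fully bent) are matched against calibrated
# templates; the phone app would then turn the recognised letters into speech.

TEMPLATES = {
    "A": (0.1, 0.9, 0.9, 0.9, 0.9),   # thumb out, other fingers curled
    "B": (0.9, 0.1, 0.1, 0.1, 0.1),   # fingers straight, thumb folded
    "L": (0.1, 0.1, 0.9, 0.9, 0.9),
}

def classify_sign(readings):
    """Return the sign whose sensor profile is closest to the current reading."""
    def distance(template):
        return sum((r - t) ** 2 for r, t in zip(readings, template))
    return min(TEMPLATES, key=lambda sign: distance(TEMPLATES[sign]))

# On the phone, recognised letters would be buffered into words and passed
# to a text-to-speech engine (speak() is an assumed helper):
# speak(classify_sign((0.12, 0.88, 0.91, 0.90, 0.87)))  # -> "A"
```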

Instantaneous

More and more tools can therefore facilitate conversations between people who don't speak the same language, whether orally or not. But high-performance technology also exists for text translation. Here we are no longer talking about instantaneous spoken translation but about written text translation, along the lines of what Google Translate offers. Beyond a dictionary function, many services now offer instant translation of sentences and online texts – Reverso, WordReference, Google Translate… These functions are also integrated into some social platforms, such as Facebook.

Long criticised for their mistakes, these services are nonetheless becoming more and more reliable. Facebook, for example, has been using AI since 2017 to produce its instant translations. Before that, the social network used a "phrase-based machine translation" system, described by Le Figaro as one that translated bits of sentences as individual segments, independently of the rest of the text. This in turn led to garbled translations that sometimes conveyed the wrong meaning.

Today, The Verge explains, Facebook and other translation tools are based on artificial neural networks, an advanced form of AI. These networks consider the sentence as a whole before offering a translation, which also varies depending on context. To do this, the translation tool relies on a machine-learning architecture called long short-term memory (LSTM), which allows words to be learned into a vocabulary through deep learning and then reused in the context of a sentence.
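For readers curious about what "considering the sentence as a whole" looks like in practice, here is a deliberately tiny encoder-decoder sketch using LSTMs in PyTorch. The vocabulary sizes and dimensions are made up and real systems are far larger, but the structure shows how the full source sentence is encoded before any target word is produced.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    def __init__(self, src_vocab: int, tgt_vocab: int, hidden: int = 256):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, hidden)
        self.tgt_embed = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the complete source sentence into a summary state.
        _, state = self.encoder(self.src_embed(src_ids))
        # Decode the target sentence conditioned on that summary state.
        dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), state)
        return self.out(dec_out)               # scores over the target vocabulary

model = TinySeq2Seq(src_vocab=8000, tgt_vocab=8000)
src = torch.randint(0, 8000, (1, 12))          # a 12-token source sentence
tgt = torch.randint(0, 8000, (1, 10))          # target tokens fed during training
logits = model(src, tgt)                       # shape: (1, 10, 8000)
```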

Towards a universal translator?

Many remain sceptical about using these standalone tools to translate technical or literary documents. Specific terms and turns of phrase are indeed central to the drafting of legal or technical documents, for example. In Les Echos, professional translator Philippe Servini estimates that in his field the machine cannot surpass humans, because "translation is not an exact science". But a new feat is reshuffling the cards: an AI created by the French company Quantmetry has translated an 800-page English book into French in the space of twelve hours. Had it been done by a professional translator, the work would have taken almost a year and cost close to 150,000 euros, Alexandre Stora, one of the project's leaders, explained to Futura Tech.

This feat was made possible by the company's partnership with DeepL, considered one of the best translation tools available. Quantmetry also configured specific tools to handle graphs and scientific papers, as well as an additional dictionary holding the translations of 200 technical terms. According to 20 Minutes, this vocabulary base was then integrated into an AI on which several deep-learning specialists worked.
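The article does not detail how the 200-term dictionary was wired into the system, but a common way to combine a glossary with a generic translation engine is to protect known terms before translation and swap in the approved renderings afterwards. The sketch below illustrates that idea only; translate() stands in for the underlying engine and the glossary entries are invented.

```python
# Illustrative glossary enforcement around a generic MT engine (not
# Quantmetry's actual pipeline). translate() is an assumed helper.

GLOSSARY = {  # example entries; the real project used about 200 terms
    "deep learning": "apprentissage profond",
    "neural network": "réseau de neurones",
}

def translate_with_glossary(text: str, source: str = "en", target: str = "fr") -> str:
    placeholders = {}
    # Replace each known term with a neutral token the engine will leave alone.
    for i, (term, rendering) in enumerate(GLOSSARY.items()):
        token = f"__TERM{i}__"
        if term in text:
            text = text.replace(term, token)
            placeholders[token] = rendering
    translated = translate(text, source=source, target=target)
    # Swap the approved technical renderings back in.
    for token, rendering in placeholders.items():
        translated = translated.replace(token, rendering)
    return translated
```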

Even after this process, the text was edited – for the most part – by humans. Jérémy Harroch, Quantmetry's CEO, was also careful to mention to France Info that "it was not possible to obtain an automatic translation from the text."

This technology is now on the cusp of abolishing language barriers. Many tools, most of them powered by AI, can translate text as readily as speech, and with increasing efficiency. Though we still have a long way to go before building the universal translator imagined by Murray Leinster in 1945, we can be sure he would have been quite surprised that we would be halfway there before the 22nd century.

Author: Côme Allard de Grandmaison