Automatic language translation has come a long way, thanks to neural networks—computer algorithms that take inspiration from the human brain. But training such networks requires an enormous amount of data: millions of sentence-by-sentence translations to demonstrate how a human would do it. Now, two new papers show that neural networks can learn to translate with no parallel texts—a surprising advance that could make documents in many languages more accessible.

“Imagine that you give one person lots of Chinese books and lots of Arabic books—none of
them overlapping—and the person has to learn to translate Chinese to Arabic. That seems
impossible, right?” says the first author of one study, Mikel Artetxe, a computer scientist at
the University of the Basque Country (UPV) in San Sebastián, Spain. “But we show that a
computer can do that.”

Most machine learning—in which neural networks and other computer algorithms learn from
experience—is “supervised.” A computer makes a guess, receives the right answer, and
adjusts its process accordingly. That works well when teaching a computer to translate
between, say, English and French, because many documents exist in both languages. It
doesn’t work so well for rare languages, or for popular ones without many parallel texts.

The two new papers, both of which have been submitted to next year’s International
Conference on Learning Representations but have not been peer reviewed, focus on another
method: unsupervised machine learning. To start, each constructs bilingual dictionaries
without the aid of a human teacher telling them when their guesses are right. That’s possible
because languages have strong similarities in the ways words cluster around one another. The
words for table and chair, for example, are frequently used together in all languages. So if a
computer maps out these co-occurrences like a giant road atlas with words for cities, the
maps for different languages will resemble each other, just with different names. A computer
can then figure out the best way to overlay one atlas on another. Voilà! You have a bilingual
dictionary.
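
To make the atlas metaphor concrete, here is a minimal, hypothetical Python sketch of the overlay step: two toy word-vector spaces are aligned with a single rotation found by the orthogonal Procrustes solution, and a word is then "translated" by looking up its nearest neighbour in the other space. The actual papers learn this mapping without any seed dictionary (using techniques such as adversarial training and iterative self-learning); the tiny seed dictionary below, and all the words and vectors, are made up purely to keep the example short.

```python
# Minimal sketch (not the papers' exact method): align two toy word-embedding
# spaces with a single rotation, then "translate" by nearest neighbour.
# The seed dictionary is a simplification; the papers need no seed pairs.
import numpy as np

rng = np.random.default_rng(0)

# Toy "source language" embeddings: 5 words in 4 dimensions.
src = {w: rng.normal(size=4) for w in ["table", "chair", "book", "cat", "dog"]}

# Toy "target language" embeddings: the same geometry, rotated and lightly
# perturbed, standing in for a language whose word map has a similar shape.
true_rot = np.linalg.qr(rng.normal(size=(4, 4)))[0]
tgt = {w + "_x": v @ true_rot + 0.01 * rng.normal(size=4) for w, v in src.items()}

# Hypothetical seed dictionary of known word pairs.
seed = [("table", "table_x"), ("chair", "chair_x"),
        ("book", "book_x"), ("dog", "dog_x")]
X = np.stack([src[s] for s, _ in seed])
Y = np.stack([tgt[t] for _, t in seed])

# Orthogonal Procrustes: the rotation W minimising ||XW - Y|| is U @ Vt,
# where U, S, Vt is the singular value decomposition of X.T @ Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

def translate(word):
    """Rotate a source word vector and return the closest target word."""
    q = src[word] @ W
    return max(tgt, key=lambda t: q @ tgt[t] /
               (np.linalg.norm(q) * np.linalg.norm(tgt[t])))

print("cat ->", translate("cat"))  # should recover "cat_x"
```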

The new papers, which use remarkably similar methods, can also translate at the sentence
level. They both use two training strategies, called back translation and denoising. In back
translation, a sentence in one language is roughly translated into the other, then translated
back into the original language. If the back-translated sentence is not identical to the original,
the neural networks are adjusted so that next time they’ll be closer. Denoising is similar to
back translation, but instead of going from one language to another and back, it adds noise to
a sentence (by rearranging or removing words) and tries to translate that back into the
original. Together, these methods teach the networks the deeper structure of language.
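
As a concrete picture of what the denoising step's "noise" can look like, here is a minimal, hypothetical Python sketch that corrupts a sentence by randomly dropping words and shuffling the remainder within a small window, one plausible reading of "rearranging or removing words". The function name and parameter values are illustrative rather than taken from either paper; in training, the network would be asked to reconstruct the clean sentence from output like this.

```python
# Minimal, illustrative noise model for denoising training: corrupt a sentence
# by dropping some words and shuffling the rest within a limited window, then
# (elsewhere) train the network to reconstruct the original sentence from it.
# Parameter values are illustrative, not those used in either paper.
import random

def add_noise(sentence, drop_prob=0.1, shuffle_window=3, seed=None):
    rng = random.Random(seed)
    words = sentence.split()

    # Randomly remove words, keeping at least one so the input is never empty.
    kept = [w for w in words if rng.random() > drop_prob] or words[:1]

    # Locally rearrange: jitter each position by up to `shuffle_window`, then
    # sort by the jittered positions, so no word drifts far from its slot.
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    noisy = [w for _, w in sorted(zip(keys, kept), key=lambda pair: pair[0])]
    return " ".join(noisy)

print(add_noise("the cat sat on the mat next to the door", seed=0))
```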

There are slight differences between the techniques. The UPV system back translates more
frequently during training. The other system, created by Facebook computer scientist
Guillaume Lample, based in Paris, and collaborators, adds an extra step during translation.
Both systems encode a sentence from one language into a more abstract representation before
decoding it into the other language, but the Facebook system verifies that the intermediate
“language” is truly abstract. Artetxe and Lample both say they could improve their results by
applying techniques from the other’s paper.

In the only directly comparable results between the two papers—translating between English
and French text drawn from the same set of about 30 million sentences—both achieved a
bilingual evaluation understudy score (used to measure the accuracy of translations) of
about 15 in both directions. That’s not as high as Google Translate, a supervised method that
scores about 40, or humans, who can score more than 50, but it’s better than word-for-word
translation. The authors say the systems could easily be improved by becoming semisupervised, with a few thousand parallel sentences added to their training.
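
For readers unfamiliar with the metric, BLEU compares the n-grams of a candidate translation with those of one or more reference translations and is often reported on a scale of 0 to 100. The snippet below is a toy, hypothetical example using NLTK's implementation on a single made-up sentence pair; the figures quoted above are corpus-level scores over a full test set, not per-sentence scores like this one.

```python
# Toy illustration of how a BLEU score is computed: compare a candidate
# translation with a reference by n-gram overlap. Requires NLTK. The sentences
# are made up; the article's scores are corpus-level, not per-sentence.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the cat is on the mat".split()
candidate = "the cat sat on the mat".split()

score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {100 * score:.1f}")  # scaled to the 0-to-100 range used above
```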

In addition to translating between languages without many parallel texts, both Artetxe and
Lample say their systems could help with common pairings like English and French if the
parallel texts are all of the same kind, such as newspaper reporting, but you want to translate into a
new domain, like street slang or medical jargon. But, “This is in infancy,” Artetxe’s co-author
Eneko Agirre cautions. “We just opened a new research avenue, so we don’t know where it’s
heading.”

“It’s a shock that the computer could learn to translate even without human supervision,”
says Di He, a computer scientist at Microsoft in Beijing whose work influenced both
papers. Artetxe says the fact that his method and Lample’s—uploaded to arXiv within a day
of each other—are so similar is surprising. “But at the same time, it’s great. It means the
approach is really in the right direction.”

More people learn English through technology than by any other means. Out of 1.5 billion
English language learners across the globe, only a fraction have the resources or access to
learn the language through formal teaching. Just as the global reach of English has been
accelerated by online services, so has its effect on learning. Most of this is informal learning,
which in practice is how most of us learn most things.

The explosion in young people's access to YouTube, Vimeo, Netflix, Amazon Prime, Google, Wikipedia, social media and an endless array of other services has given them unprecedented exposure to English content. Not the dry, didactic content of the course and classroom, but content they crave and find compelling – movies, TV, music, sport, news, clips … This informal acquisition of language is the new norm.

AI is the new User Interface

A new kid on the online block, one that promises to revolutionise the online teaching of English, is Artificial Intelligence (AI). Advances in Natural Language Processing (NLP) mean that learners can have a frictionless interface with language content through voice. Amazon Alexa and Google Home are consumer devices that you can speak to and that speak back. For language learning, I have switched my Amazon Alexa to respond in German. There are two leaps here: first, that she understands what I say (it is interesting, and useful, that one has to pronounce words carefully to be understood), and second, that she will respond in German. So natural, dialogue-driven interfaces are now here in our homes.

We can expect a lot more of this speech-driven, consumer-device language learning. Virtual Reality (VR) and Augmented Reality (AR) will also deliver the democratisation of experience – the ability to experience travel, immersion and dialogue with others in a multiplayer environment – giving us powerful, immersive language learning.

Personalised learning

Beyond this, AI offers personalised learning. It knows who you are and can track your
progress, as well as adapt delivery to your needs. Like a satellite navigation system in your
car, it can use aggregated data from many learners, combined with data from your own
learning journey, to deliver exactly what you need at the moment you need it.

One of the first massively adopted, adaptive, online language learning services was Duolingo. Many educational experts question the quality of the learning, but an estimated 30 million users are currently trying it – and here's the punch: it's free. If this is what the first scaled consumer service can achieve, imagine what is yet to come.

Chatbots

Chatbots, interfaces that allow you to talk to an application online via text or speech, are
another godsend in language learning, as they bring dialogue to teaching. Duolingo has
dabbled with chatbots and is likely to find that they will bring the scalable, personalised
dialogue and immersion that language learning requires. We’ve already seen a chatbot anonymously standing in for a teaching assistant at Georgia Tech and being put up for a teaching award by learners. Chatbots bring naturalistic learning, engagement and personalised dialogue.

Scaling up

One word really matters here – scale. AI is many things and can be used in many ways to improve learning: creation of learning content, curation of content, control of feedback (adaptive learning), dialogue, immersion, student engagement and assessment. In the same way that the translation of languages has been revolutionised by AI, so will their teaching and learning. AI loves scale, because widespread use, and the data it generates, are what allow it to improve quality: the more you use it, the better it gets. The demand for English language learning far outstrips supply. The same force that has fuelled the thirst for English skills will deliver the means of easy and cheap learning – online AI.

Way out there

Elon Musk (CEO of Tesla) and Mark Zuckerberg (CEO of Facebook) have invested in the
frictionless interfaces of tomorrow. Musk’s Neuralink wants to interface directly with the
brain through a ‘neural lace’ and Zuckerberg through mind reading (optically via lasers). We
can already read minds (what words you’re thinking) through scanners, but these are huge
and cost tens of millions of dollars. Zuckerberg wants to tap into the part of the brain that
results in speech to allow you to think words that will then be typed. Musk is far more
ambitious in that he wants to extend cognition. He argues that this has already happened in
the sense that we have cognitive extension through the pocket-size, personal and powerful
technology that is the smartphone. His aim is to allow us to acquire a new skill or language
with little effort.

Conclusion

Agriculture was mechanised and we moved from the fields to the factories; when robots mechanised the factories, we moved to offices; now that office jobs are being automated by AI, we have nowhere else to go. AI can create as it destroys, however. We may be able to learn faster, more efficiently and at scale, because the one component of learning that does not scale is the warm-bodied human teacher. Technology has already provided the media and context for language learning without teachers. AI is also likely to provide teaching technology that is always on and finely tuned to the needs of every individual learner, on a massive and unlimited scale. Technology may, at some point, even obviate the need to ‘learn’ a new language: it may simply and quickly be acquired as a skill. Resistance, as they say, is futile.
