History of Machine Translation

Last updated Jan 22, 2024
By Gwenydd Jones

From rule-based to neural machine translation. This article will tell you about the development of machine translation technology, from the 1950s to the present day.

Machine translation is the automated translation of a source-language text into a target-language text. Human translators may be involved at pre-editing or post-editing stages, i.e. at the beginning or the end, but they are not typically involved in the translation process.

Although concepts of machine translation can be traced back to the seventeenth century, it was in the 1950s that US-government-funded research stimulated international interest in the investigation and production of machine translation systems.

The original intention was to produce a fully automatic high-quality machine translation system (FAHQMT), but by 1952 it was “already clear that objectives of fully automated systems were unrealistic and that human intervention would be essential” (Hutchins, 2006, p. 376). Many researchers were scientists rather than linguists and were unaware of the need for real-world knowledge in the translation process. Many complex elements of language, such as homonyms and metaphors, could not be easily programmed into a computer.

The first public demonstration of an automated translation system, which used a 250-word vocabulary to translate Russian into English, was held in the US in 1954. It employed the direct translation approach. This first-generation architecture is dictionary-based and attempts to match the source language to the target language word for word, i.e. translating directly. “This approach was simple and cheap but the output results were poor and mimic…the syntactic structures of the source language” (Quah, 2006, p. 70). It was therefore better suited to source- and target-language pairs that were structurally related. Despite the poor translation quality, the project was well received and stimulated further research funding in the US and the Soviet Union.
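To give you a feel for how basic this was, here is a toy sketch in Python of the word-for-word lookup at the heart of the direct approach. The four-entry Spanish-English dictionary is invented purely for illustration; real first-generation systems added morphological rules and some local reordering on top of this kind of lookup.

```python
# Toy illustration of the first-generation "direct" approach:
# look each source word up in a bilingual dictionary and output
# the matches in the original order. The dictionary below is
# invented for the example.

ES_EN = {
    "el": "the",
    "gato": "cat",
    "negro": "black",
    "duerme": "sleeps",
}

def direct_translate(sentence: str, dictionary: dict[str, str]) -> str:
    words = sentence.lower().split()
    # Unknown words are passed through untranslated, a common fallback.
    return " ".join(dictionary.get(w, w) for w in words)

print(direct_translate("El gato negro duerme", ES_EN))
# -> "the cat black sleeps": correct words, but source-language word order,
# which is exactly the weakness described above.
```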

Second-generation machine translation systems

By the mid-1960s, research groups existed in many countries. The direct translation approach was still in use, and new research into the rule-based transfer and interlingua approaches marked the beginnings of second-generation machine translation systems. In 1964, the US government commissioned the Automatic Language Processing Advisory Committee (ALPAC) to report on the progress of machine translation research. The ALPAC report highlighted the slowness, inaccuracy and costliness of machine translation compared to human translators and predicted a bleak future for machine translation development. Most US funding ceased and worldwide machine translation research declined.

While automated translation systems had proven unsuitable for replacing human translators on a general level, it was observed that they were quite accurate when the language input was limited or very specific. Projects within specific language domains, such as the Météo system developed in Canada in 1976 to translate weather forecasts between French and English, were successful.

By the late 1970s, research into the second-generation interlingua approach had declined. This approach analyses the source text and converts it into a special “interlingual” representation; the target text is then generated from this intermediary form. The problem was an inability to create “a truly language-neutral representation that represents ‘all’ possible aspects of syntax and semantics for ‘all’ known languages” (Quah, 2006, p. 73). This task remains unaccomplished, and interlingua systems are only available as prototypes.

Transfer approach to machine translation

In the late 1970s and early 1980s, research focused more on the transfer approach. In this architecture, the source text is analysed using a source-language dictionary and converted into an abstract form. This form is translated into an abstract form of the target text via a bilingual dictionary and then converted into the target text using a target-language dictionary. This rule-based approach was less complicated than interlingua and better suited to working with multiple languages than direct translation. Problems arose where the dictionaries contained insufficient knowledge to deal with ambiguities. Uses included online translation and the Japanese IT translation market.
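To picture the three stages (analysis, transfer and generation), here is a deliberately simplified Python sketch. Everything in it is invented for illustration; real transfer systems used full source-language grammars, much richer abstract representations and proper target-language generation rules.

```python
# Rough sketch of the transfer architecture: analyse the source sentence
# into an abstract representation, map that representation into the target
# language with a bilingual (transfer) dictionary, then generate surface text.

def analyse(source: str) -> dict:
    # A real analyser would use a source-language dictionary and grammar
    # to build a syntactic representation; here we fake a two-word parse.
    subject, verb = source.lower().split()
    return {"subject": subject, "verb": verb}

TRANSFER = {"gato": "cat", "duerme": "sleep"}  # invented bilingual dictionary

def transfer(abstract: dict) -> dict:
    # The bilingual dictionary maps abstract source-language entries
    # to abstract target-language entries, not surface words.
    return {role: TRANSFER.get(word, word) for role, word in abstract.items()}

def generate(abstract: dict) -> str:
    # A real generator applies target-language grammar rules
    # (articles, agreement, word order); this one is hard-coded.
    return f"The {abstract['subject']} {abstract['verb']}s."

print(generate(transfer(analyse("gato duerme"))))  # -> "The cat sleeps."
```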

Programming and updating dictionaries for machine translation is a time-consuming and expensive process. They need to contain huge amounts of information to deal with issues such as lexical ambiguity, complex syntactic structures, idiomatic language and anaphora across numerous languages. Austermühl (2001, p. 173) highlights how “world knowledge is particularly difficult to implement in machine translation systems”; a computer cannot make the same knowledge-based decisions as a human can. If a dictionary is too small, it will have insufficient information; if it is too large, the computer will have less chance of selecting the correct translation option.

Rise of statistical machine translation

In the 1990s, research led to a third generation of machine translation systems: corpus-based architectures, namely the statistical and example-based approaches. The statistical approach breaks the source text down into segments and compares them to an aligned bilingual corpus, using statistical evidence and distortion probabilities to choose the most appropriate translation. The example-based approach imitates combinations of examples of pre-translated data in its database. For this approach to be successful, the database must contain close matches to the source text. This approach forms the basis of translation memory tools.
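As a rough illustration of how the statistical selection works: systems of this era typically followed the noisy-channel formulation, picking the candidate translation that maximises a translation-model score (how well it accounts for the source segment) multiplied by a language-model score (how fluent it reads in the target language). The candidates and probabilities in this sketch are invented purely for the example.

```python
# Minimal illustration of statistical selection between candidate translations.

candidates = {
    # candidate translation: (translation-model score, language-model score)
    "the black cat sleeps": (0.20, 0.60),
    "the cat black sleeps": (0.25, 0.05),  # mirrors source order, poor fluency
    "a dark cat is asleep": (0.05, 0.40),
}

def best_translation(cands: dict[str, tuple[float, float]]) -> str:
    # Pick the candidate maximising P(f | e) * P(e), as in the
    # classic noisy-channel formulation of statistical MT.
    return max(cands, key=lambda e: cands[e][0] * cands[e][1])

print(best_translation(candidates))  # -> "the black cat sleeps"
```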

Effectiveness of machine translation

All of these machine translation architectures work best on technical texts with a limited or repetitive vocabulary. Gross (1992, p. 103) demonstrates how general translations requiring real-world knowledge suit human translators better, while mathematical and abstract concepts are more suited to machine translation systems. Human translators lack the speed and terminological consistency of machine translation, and can get bored with repetitions and technical language.

The 1980s saw a huge movement towards the use of controlled language, which is still key to successful machine translation today. In the pre-editing stage, a writer simplifies the source text according to specific rules to make it easier for the computer to translate. The machine then carries out the translation quickly. Finally, a human translator post-edits the document to publishable quality. The European Commission (which has been researching and using machine translation since the 1960s) has found that “as long as translation can be restricted in subject matter or by document type…improvements in quality can be achieved” (Hutchins, 2005).

With the rise of international communications and the growth of the localization industry, it has become clear that human translators are unable to meet the massive demand for cheap, fast (even instant), often large-scale information exchange across languages. Huge investments have been made in the development of machine translation systems for private and public use, primarily in mainstream languages. Hybrid systems combining rule- and corpus-based architectures have been introduced, along with systems that improve accuracy by allowing human input at the translation stage.

The mass communication era has changed the importance companies place on having “full-dress” (Gross, 1992, p. 99) translations, since their goal is often simple information exchange. For instance, EU workers often need only an idea of the contents of a document to see if it is worth translating for publication, while home users may be satisfied with free Internet-based machine translation systems to get the gist of what a website says. When a text needs to be assimilated, such as an instruction manual for a technician, rather than disseminated as a translation of publishable quality, machine translation has often proved to be a much faster and more cost-efficient solution than human translators.

Recent developments in machine translation

Recent developments in machine translation have seen the incorporation of deep learning and neural networks to improve accuracy. Language service providers now offer customised machine translation engines where, beyond incorporating terminology from a specific domain, such as life sciences, the travel industry or IT, the user can also upload their own translation data to try to improve the accuracy, style and quality of the machine translation output.

On 15 November 2016, Google announced that it was putting neural machine translation into action in its Google Translate tool, rolling it out with a total of eight language pairs: English to and from French, German, Spanish, Portuguese, Chinese, Japanese, Korean and Turkish (Turovsky, 2016).

Listen to my talk on machine translation, available on ProZ.com, or comment below. Do you work with machine translation? What do you think of it?

Bibliography

Austermühl, F. (2001). Electronic Tools for Translators. Manchester: St Jerome.

Betts, R. (2005). Wycliffe Associates’ EasyEnglish. In: Communicator, Spring 2005.

Chesterman, A. (2004). Norms of the Future. In: Kemble, I. (Ed.) Translation Norms, what is ‘normal’ in the translation profession? Proceedings of the 2004 Portsmouth Translation Conference.

Dodd, C. (2005). Taming the English Language. In: Communicator, Spring 2005.

Finderer, R. (2009). The Rise of the Machines. In: ITI Bulletin, January-February 2009.

Google. (2010). Google Translate Help. Retrieved January 23, 2010.

Gross, A. (1992). Limitations of Computers as Translation Tools. In: Newton, J. (Ed.) Computers and Translation. London: Routledge.

Guerra, A.F. (2000). METEO system. In: Fernández, F. (Ed.) Machine Translation Capabilities and Limitations [Electronic Version] (p. 72). Valencia: Artes Gráficas Soler. Retrieved January 12, 2010. URL no longer available.

Hutchins, J. (1998). The Origins of the Translator’s Workstation. Machine Translation, vol. 13(4), pp. 287-307. Retrieved January 7, 2010.

Hutchins, J. (2005). Current commercial machine translation systems and computer-based translation tools: system types and their uses. Retrieved January 7, 2010.

Hutchins, J. (2006). Machine Translation: History. In: Brown, K. (Ed.) Encyclopedia of Language and Linguistics [Electronic Version] (pp. 375-383). Oxford: Elsevier. Retrieved January 7, 2010.

Melby, A. (1992). The Translator Workstation. In: Newton, J. (Ed.) Computers and Translation. London: Routledge.

Newton, J. (1992). The Perkins Experience. In: Newton, J. (Ed.) Computers and Translation. London: Routledge.

Pym, P.J. (1990). Pre-editing and the use of simplified writing for machine translation: an engineer’s experience of operating a machine translation system. Translating and the Computer, vol. 10. Retrieved January 9, 2010.

Quah, C.K. (2006). Translation and Technology. Basingstoke: Palgrave.

Turovsky, B. (2016). Found in translation: More accurate, fluent sentences in Google Translate.

Van der Meer, J. (2009). Let a thousand machine translation systems bloom. Retrieved January 23, 2010.

Veritas, L.S. (2009). Statistical Machine Translation and Example-based Machine Translation. Retrieved January 23, 2010.

Written by Gwenydd Jones

Gwenydd Jones is a Spanish- and French-to-English translator, an SEO blogger and a course creator. She is the founder of The Translator's Studio and the lead teacher on its courses. Connect with Gwenydd on LinkedIn or contact her through this website.

