Third Workshop on Post-Editing
Technology and Practice
The 11th Conference of the Association for Machine Translation in the Americas
October 22–26, 2014, Vancouver, BC, Canada
Proceedings of the
Third Workshop on Post-Editing Technology and Practice (WPTP-3)
Sharon O'Brien, Michel Simard and Lucia Specia (Eds.)
Association for Machine Translation in the Americas
http://www.amtaweb.org

Table of Contents

Papers
5    MT Post-editing into the mother tongue or into a foreign language? Spanish-to-English MT translation output post-edited by translation trainees
     Pilar Sánchez-Gijón and Olga Torres-Hostench
20   Comparison of post-editing productivity between professional translators and lay users
     Nora Aranberri, Gorka Labaka, Arantza Diaz de Ilarraza and Kepa Sarasola
34   Monolingual Post-Editing by a Domain Expert is Highly Effective for Translation Triage
     Lane Schwartz
45   Perceived vs. measured performance in the post-editing of suggestions from machine translation and translation memories
     Carlos S. C. Teixeira
60   Perception vs Reality: Measuring Machine Translation Post-Editing Productivity
     Federico Gaspari, Antonio Toral, Sudip Kumar Naskar, Declan Groves and Andy Way
73   Cognitive Demand and Cognitive Effort in Post-Editing
     Isabel Lacruz, Michael Denkowski and Alon Lavie
85   Vocabulary Accuracy of Statistical Machine Translation in the Legal Context
     Jeffrey Killman
99   Towards desktop-based CAT tool instrumentation
     John Moran, Christian Saam and Dave Lewis
113  Translation Quality in Post-Edited versus Human-Translated Segments: A Case Study
     Elaine O'Curran

Demos
119  TAUS Post-Editing course
     Attila Görög
120  TAUS Post-editing Productivity Tool
     Attila Görög
121  QuEst: a Framework for Translation Quality Estimation
     Lucia Specia and Kashif Shah
122  An Open Source Desktop Post-Editing Tool
     Lane Schwartz
123  Real Time Adaptive Machine Translation: cdec and TransCenter
     Michael Denkowski, Alon Lavie, Isabel Lacruz and Chris Dyer
124  Post-editing User Interface Using Visualization of a Sentence Structure
     Yudai Kishimoto, Toshiaki Nakazawa, Daisuke Kawahara and Sadao Kurohashi
125  Kanjingo: A Mobile App for Post-Editing
     Sharon O'Brien, Joss Moorkens and Joris Vreeke

AMTA Workshop, Vancouver, Canada, October 26, 2014

Committees

Organizing Committee
Sharon O'Brien - Dublin City University
Michel Simard - National Research Council Canada
Lucia Specia - University of Sheffield
Joss Moorkens - Dublin City University

Program Committee
Nora Aranberri -- University of the Basque Country
Diego Bartolome -- tauyou language technology
Michael Carl -- Copenhagen Business School
Francisco Casacuberta -- Universitat Politècnica de València
Stephen Doherty -- University of Western Sydney
Andreas Eisele -- European Commission
Marcello Federico -- FBK-IRST
Mikel L. Forcada -- Universitat d'Alacant
Philipp Koehn -- University of Edinburgh
Roland Kuhn -- National Research Council Canada
Isabel Lacruz -- Kent State University
Alon Lavie -- Carnegie Mellon University
Daniel Marcu -- University of Southern California
John Moran -- Transpiral Translation Services
Kristen Parton -- Columbia University
Maja Popović -- DFKI
Johann Roturier -- Symantec
Midori Tatsumi -- Independent Researcher/Lecturer
Andy Way -- CNGL / Dublin City University

Programme

9:00 - 10:30 Session 1
9:00  MT Post-Editing into the Mother Tongue or into a Foreign Language? Spanish-English MT Output Post-Edited by Translation Trainees
      Pilar Sánchez-Gijón and Olga Torres-Hostench
9:30  Comparison of Post-Editing Productivity between Professional Translators and Lay Users
      Nora Aranberri, Gorka Labaka, Arantza Diaz de Ilarraza and Kepa Sarasola
10:00 Monolingual Post-Editing by a Domain Expert is Highly Effective for Translation Triage
      Lane Schwartz
10:30 - 11:00 Coffee Break

11:00 - 12:30 Session 2
11:00 Perceived vs. Measured Performance in the Post-Editing of Suggestions from Machine Translation and Translation Memories
      Carlos S. C. Teixeira
11:30 Perception vs Reality: Measuring Machine Translation Post-Editing Productivity
      Federico Gaspari, Antonio Toral, Sudip Kumar Naskar and Declan Groves
12:00 Cognitive Demand and Cognitive Effort in Post-Editing
      Isabel Lacruz, Michael Denkowski and Alon Lavie

12:30 - 14:00 Lunch Break

14:00 - 16:00 Posters and Demos
Vocabulary Accuracy of Statistical Machine Translation in the Legal Context
      Jeffrey Killman
Towards Desktop-Based CAT Tool Instrumentation -- iOmegaT
      John Moran, David Lewis and Christian Saam
Translation Quality in Post-Edited versus Human-Translated Segments: A Case Study
      Elaine O'Curran
The TAUS Post-Editing Course & The TAUS Post-editing Productivity Tools
      Attila Görög
QuEst for Estimating Post-Editing Effort
      Lucia Specia
An Open-Source Desktop Post-Editing Tool
      Lane Schwartz
Real Time Adaptive Machine Translation: cdec and TransCenter
      Michael Denkowski
Post-Editing User Interface Using Visualization of a Sentence Structure
      Yudai Kishimoto, Toshiaki Nakazawa, Daisuke Kawahara and Sadao Kurohashi
Kanjingo: A Mobile App for Post-Editing
      Sharon O'Brien

16:00 - 17:30 Panel: What Lies Ahead for Post-editing?
Moderator: Mike Dillinger (AMTA President)
Panelists:
Olga Beregovaya (Welocalize)
John Moran (CNGL, TCD)
David Rumsey (President-Elect, American Translators Association)
Lori Thicke (Translators Without Borders)
Chris Wendt (Microsoft)
MT Post-editing into the mother tongue or into a foreign language? Spanish-to-English MT translation output post-edited by translation trainees

Pilar Sánchez-Gijón firstname.lastname@example.org
Olga Torres-Hostench email@example.com
Tradumàtica Research Group, Department of Translation, Interpreting and Eastern Studies, Universitat Autònoma de Barcelona, Bellaterra, 08193, Spain

Abstract

The aim of this study is to analyse whether translation trainees who are not native speakers of the target language are able to perform as well as those who are, and whether they achieve the expected quality in a "good enough" post-editing (PE) job. In particular, the study focuses on the performance of two groups of students post-editing from Spanish into English: native English speakers and native Spanish speakers. A pilot study was set up to collect evidence with which to compare and contrast the two groups' performances. Trainees from both groups had been given the same training in PE and were asked to post-edit 30 sentences translated from Spanish to English. The PE output was analysed taking into account accuracy errors (mistranslations and omissions) as well as language errors (grammatical errors and syntax errors). The results show that some native Spanish speakers corrected just as many errors as the native English speakers. Furthermore, the Spanish-speaking trainees outperformed their English-speaking counterparts in identifying mistranslations and omissions. Moreover, the performances of the best English-speaking and Spanish-speaking trainees at identifying grammar and syntax errors were very similar.
1. Introduction

Since UNESCO issued its recommendation, more and more translation companies and translation faculties have been adopting this "mother-tongue principle", with excellent results.
However, various authors have questioned this principle. Campbell (1998:212) argues that the "dynamics of immigration, international commerce and the postcolonial world make it inevitable that much translation is done into a second language, despite the prevailing wisdom that translators should only work into their mother tongue." Kelly (2003) defends similar arguments of necessity, and Pokorn (2005:X) is perhaps the most critical, arguing that the traditional view "according to which translators should translate only into their mother tongue in order to create linguistically and culturally-acceptable translations (…) stems from an aprioristic conviction unsupported by any scientific proof that translation into a mother tongue is ipso facto superior to translation into a non-mother tongue." Is the same principle applicable to post-editing (PE)? Should PE also adhere blindly to this principle? Marcel Thelen (2005:250), a supporter of non-native translators, argues that the principle is too rigid and questions the UNESCO recommendation. The following extract from Thelen's book unintentionally became the starting point for the research presented in this paper:
"Applying the mother tongue principle seems to have become a sort of quality assurance, part of a guarantee of specialisation. Sticking to the native speaker rule is, however, not necessary in many cases, especially since clients do not all require the same quality of translations depending on the envisaged purpose. (…) In addition, with the implementation of technology and different kinds of translation tools, it becomes increasingly 'easy' for non-native speakers to produce good English through post-editing." Is this true? Is it really so easy for non-native speakers to produce "good English"? In what sense would PE quality be affected if it were carried out by non-native speakers? This study attempts to discover whether non-native translation trainees can provide PE that is as good (in terms of accuracy and language) as that of native translation trainees. We conducted an empirical
study in which a PE task from Spanish into English was carried out by two groups of subjects:
non-native translation trainees and native translation trainees. The two groups were asked to post-edit several sentences from the user interface and help file of the OpenOffice software package. The aim of this study was to compare the results of the PE carried out by the two groups in terms of accuracy and language, and thus determine whether non-native translation trainees are able to meet the expected quality standards.
Traditionally, two levels of PE have been distinguished (light PE and full PE), although TAUS prefers to talk about "good enough quality" and "quality similar to that of a human translator". In our study, we expected non-native translation trainees to achieve "good enough quality" (TAUS 2010). Under this definition, expectations of the quality of the language used are low, whereas accuracy is very important: accuracy is non-negotiable in both light and full PE. So, if non-native speakers are able to provide accurate PE, then the mother-tongue principle for PE would need to be brought into question.
2. Related work
Native translation professionals seem to be the best suited for any PE job. Guerberof (2008, 2009, 2012) analysed the productivity and quality of PE of translation memory (TM) and machine translation (MT) output by native professional translators; Plitt and Masselot (2010) tested productivity by comparing MT+PE with traditional translation by native professional translators; Almeida and O'Brien (2010) compared PE performance with professional translation experience; and Temizoz (2013) compared the PE performance of engineers with that of professional translators. Other interesting studies of different post-editor profiles are Koehn's (2010), on PE by monolingual users, and that of Mitchell, Roturier and O'Brien (2013), who compare PE by monolingual vs. bilingual users.
Many companies and organizations also rely on native-speaking professional translators: the Commission of the European Communities worked with professionals on Systran PE (Wagner 1985); Sybase worked with professionals on PangeaMT PE (Bier and Herranz, 2011); and Continental Airlines worked with professionals on SDL PE (Beaton and Contreras, 2010).
Some organizations, however, are exploring other post-editor profiles. Computer Associates, for instance, is developing a PE crowdsourcing platform where any person who knows two languages could become a post-editor and quality would be assessed by ranking the PE output (Muntés-Mulero and Paladini, 2012; Muntés-Mulero et al., 2012).
In an academic context, some researchers have carried out studies on PE using native translation students. Sutter and Depraetere (2012) analysed the relationship between PE, distance and fluency using translation trainees, and O'Brien (2005) observed the correlation between PE effort and MT translatability. Especially relevant for our study is Garcia (2010), who examined PE quality and the time taken for the task using Chinese translation trainees who were not native speakers of English, comparing their MT+PE output with a translation made using a TM.
In light of the related literature, our contribution aims to explore a factor in the post-editors' profile that has so far been largely unexplored (except in the cases mentioned above):
their mother tongue.
In this paper, we will test the following hypothesis: "PE jobs performed by native translation trainees will be more accurate and linguistically correct than those performed by non-native translation trainees". To investigate whether this hypothesis is valid, we will try
to answer the following research questions:
1. To what extent is PE performed by non-native translation trainees accurate?
2. To what extent is PE performed by non-native translation trainees linguistically correct?
As stated in the introduction, our main focus was to establish what level of PE accuracy and linguistic correctness non-native translation trainees can produce, taking into account their presumed poorer command of the foreign language compared with that of native-speaking translation trainees. Accuracy was analysed by evaluating post-edits of mistranslations and omissions; language was analysed by evaluating post-edits of grammatical and syntax errors.
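The two-way split described above (accuracy errors vs. language errors, each scored by whether the trainee corrected them) can be illustrated with a small tallying sketch. This is not the authors' actual analysis code; the function name, data layout, and error labels are hypothetical, chosen only to mirror the categories named in the paper.

```python
# Hypothetical sketch: per-group correction rates over annotated MT errors.
# Each annotation is (error_type, corrected), where error_type is one of
# the four categories used in the paper and corrected says whether the
# trainee's post-edit fixed it.
from collections import Counter

ACCURACY = {"mistranslation", "omission"}   # accuracy errors
LANGUAGE = {"grammar", "syntax"}            # language errors

def correction_rates(annotations):
    """Return the fraction of errors corrected, per error group."""
    seen, fixed = Counter(), Counter()
    for err_type, corrected in annotations:
        group = "accuracy" if err_type in ACCURACY else "language"
        seen[group] += 1
        if corrected:
            fixed[group] += 1
    return {group: fixed[group] / seen[group] for group in seen}
```

For example, a trainee who fixed one of two accuracy errors and both language errors in a sample would score 0.5 on accuracy and 1.0 on language, which is the kind of per-category comparison the study makes between the two groups.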
The results of this study may be useful when making decisions about PE training programs.
3.1. Preparation of the corpus

The sentences to be post-edited were taken from the English-Spanish bitext of OpenOffice (Tiedemann 2009). We downloaded the TMX file for the en_GB and es languages (50.6k).
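Extracting aligned en_GB–es sentence pairs from such a TMX file can be sketched as below. This is a minimal illustration, not the authors' pipeline; the function name is invented, and it assumes a standard TMX layout (`<tu>` units containing `<tuv>` variants with `xml:lang` attributes and `<seg>` text), which may differ in detail from the actual OPUS download.

```python
# Sketch: read aligned segment pairs from a TMX bitext file.
import xml.etree.ElementTree as ET

# ElementTree expands the xml: prefix to the full XML namespace URI.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def read_tmx_pairs(path, src="en_GB", tgt="es"):
    """Return (source, target) segment pairs found in a TMX file."""
    pairs = []
    root = ET.parse(path).getroot()
    for tu in root.iter("tu"):          # one translation unit per pair
        segs = {}
        for tuv in tu.iter("tuv"):      # one variant per language
            lang = tuv.get(XML_LANG) or tuv.get("lang")  # TMX 1.4 vs 1.1
            seg = tuv.find("seg")
            if lang and seg is not None and seg.text:
                segs[lang] = seg.text.strip()
        if src in segs and tgt in segs:  # keep only complete pairs
            pairs.append((segs[src], segs[tgt]))
    return pairs
```

Units missing either language are skipped, so the output is a clean list of bilingual sentence pairs from which items for post-editing can be sampled.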
The characteristics of this corpus made it a good choice for our study: