Involving Language Professionals in the Evaluation of Machine Translation

Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Significant breakthroughs in machine translation seem possible only if human translators are brought into the loop. While automatic
evaluation and scoring mechanisms such as BLEU have enabled the fast development of systems, it is not clear how systems can meet
real-world (quality) requirements in industrial translation scenarios today. The TARAXÜ project paves the way for the wide use of hybrid
machine translation output through various feedback loops in system development. In a consortium of research and industry partners,
the project integrates human translators into the development process for rating and post-editing of machine translation output, thus
collecting feedback for possible improvements.