The role of automated evaluation techniques in online professional translator training.

Author information

Munkova Dasa, Munk Michal, Benko Ľubomír, Hajek Petr

Affiliations

Department of Translation Studies, Constantine the Philosopher University in Nitra, Nitra, Slovakia.

Department of Computer Science, Constantine the Philosopher University in Nitra, Nitra, Slovakia.

Publication information

PeerJ Comput Sci. 2021 Oct 4;7:e706. doi: 10.7717/peerj-cs.706. eCollection 2021.

Abstract

The rapid technologisation of translation has pushed the translation industry towards machine translation, post-editing, subtitling services and video content translation. In addition, the COVID-19 pandemic has rapidly accelerated the transfer of business and education to the virtual world. This situation has motivated us not only to look for new approaches to online translator training, which requires a different method than foreign-language learning, but above all to look for new approaches to assessing translator performance within online educational environments. Translation quality assessment is a key task, as the concept of quality is closely linked to the concept of optimisation. Automatic metrics are good indicators of quality, but they do not provide sufficient or detailed linguistic information about translations or post-edited machine translations. However, using their residuals, we can identify the segments with the largest distances between the post-edited machine translations and the machine translations, which allows us to focus a more detailed textual analysis on suspicious segments. We introduce a unique online teaching and learning system "tailored" specifically for online translator training, and subsequently focus on a new approach to assessing translators' competences using evaluation techniques: the metrics of automatic evaluation and their residuals. We show that the residuals of the accuracy metrics (BLEU_n) and the error-rate metrics (PER, WER, TER, CDER, and HTER) for machine translation post-editing are valid for translator assessment. Using these residuals, we can identify errors in post-editing (critical, major, and minor) and subsequently use them in a more detailed linguistic analysis.
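The residual-based flagging described in the abstract can be illustrated with a minimal sketch: compute a per-segment error-rate metric (here WER, word error rate) between machine translation output and its post-edited version, then flag segments whose score deviates most from the mean. This is only an illustration of the general idea, not the authors' implementation; the segment inputs, the use of WER alone, and the standard-deviation threshold are assumptions.

```python
import statistics

def wer(hypothesis: str, reference: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    h, r = hypothesis.split(), reference.split()
    # Levenshtein distance over word tokens
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(h)][len(r)] / max(len(r), 1)

def flag_suspicious(mt_segments, pe_segments, z_threshold=2.0):
    """Flag segments whose residual (score minus the mean score) exceeds
    z_threshold standard deviations: candidates for detailed textual analysis."""
    scores = [wer(mt, pe) for mt, pe in zip(mt_segments, pe_segments)]
    mean = statistics.fmean(scores)
    sd = statistics.pstdev(scores) or 1.0  # guard against zero variance
    return [i for i, s in enumerate(scores) if abs(s - mean) / sd > z_threshold]
```

Segments returned by `flag_suspicious` are those with unusually large distances between machine translation and post-edited output, which is where the abstract suggests concentrating manual linguistic review.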


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/68ac/8507487/9d2b2e5019dc/peerj-cs-07-706-g001.jpg
