Text intelligent correction in English translation: A study on integrating models with dependency attention mechanism.

Author Information

Liu Yutong, Zhang Shile

Affiliations

School of Humanities and Social Sciences, Xi'an Polytechnic University, Xi'an, China.

Shaanxi Contemporary Red Culture Training and Education Center, Xi'an, China.

Publication Information

PLoS One. 2025 Jun 24;20(6):e0319690. doi: 10.1371/journal.pone.0319690. eCollection 2025.

Abstract

Improving translation quality and efficiency is one of the key challenges in the field of Natural Language Processing (NLP). This study proposes an enhanced model based on Bidirectional Encoder Representations from Transformers (BERT), combined with a dependency self-attention mechanism, to automatically detect and correct textual errors in the translation process. The model aims to strengthen the understanding of sentence structure, thereby improving both the accuracy and efficiency of error correction. The research uses the CoNLL-2014 (Conference on Natural Language Learning) shared-task dataset as an experimental benchmark; it contains a rich collection of grammatical error samples and is a standard resource in grammatical error correction research. During training, the Adam optimization algorithm is employed, and performance is further enhanced by incorporating a customized dependency self-attention mechanism into parameter optimization. To validate the model's effectiveness, the baseline model and the improved model are compared on multiple evaluation metrics: accuracy, recall, F1 score, edit distance, Bilingual Evaluation Understudy (BLEU) score, and average processing time. The results show that the proposed model significantly outperforms the baseline in accuracy (0.78 to 0.85), recall (0.81 to 0.87), and F1 score (0.79 to 0.86). The average edit distance decreases from 3.2 to 2.5, the BLEU score increases from 0.65 to 0.72, and the average processing time drops from 2.3 seconds to 1.8 seconds. This study provides an innovative approach to intelligent text correction, expands the application scenarios of the BERT model, and offers practical support for deploying NLP technologies. The findings not only highlight the advantages of the improved model but also suggest new ideas and directions for future research.
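The abstract does not spell out how the dependency self-attention mechanism is implemented. The following is a minimal sketch of one plausible reading, in which attention scores receive a learned per-head bias for token pairs linked by a dependency arc; the class name DependencySelfAttention, the dep_mask input, and the additive-bias formulation are illustrative assumptions, not the authors' published code.

```python
# A minimal sketch of dependency-biased self-attention (PyTorch).
# Assumption: a per-head scalar bias is added to attention logits for
# token pairs connected by a dependency arc; this is not the paper's code.
import math
import torch
import torch.nn as nn

class DependencySelfAttention(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int = 8):
        super().__init__()
        assert hidden_size % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        self.qkv = nn.Linear(hidden_size, 3 * hidden_size)
        self.out = nn.Linear(hidden_size, hidden_size)
        # One learned scalar bias per head, applied to dependency-linked pairs.
        self.dep_bias = nn.Parameter(torch.zeros(num_heads))

    def forward(self, x: torch.Tensor, dep_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, hidden); dep_mask: (batch, seq, seq), 1.0 where
        # tokens i and j share a dependency arc, else 0.0.
        b, s, h = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(t):  # (b, s, h) -> (b, heads, s, head_dim)
            return t.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)
        # Boost (or suppress) attention between syntactically linked positions.
        scores = scores + self.dep_bias.view(1, -1, 1, 1) * dep_mask.unsqueeze(1)
        attn = scores.softmax(dim=-1)
        ctx = (attn @ v).transpose(1, 2).reshape(b, s, h)
        return self.out(ctx)
```

In practice dep_mask would be built offline from a dependency parser (e.g. the arcs produced by a parser such as spaCy), aligned to BERT's subword tokenization; the abstract does not say which parser the authors used.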
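Of the reported metrics, edit distance is the least standardized. A token-level Levenshtein distance of the following form is a common choice; the abstract does not state the exact tokenization or averaging scheme, so this sketch is an assumption.

```python
# Token-level Levenshtein edit distance, a common way to score how many
# edits separate a system correction from a reference correction.
def edit_distance(hyp: list[str], ref: list[str]) -> int:
    m, n = len(hyp), len(ref)
    prev = list(range(n + 1))  # distance from empty hypothesis prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution / match
        prev = curr
    return prev[n]

# Example: one substitution (go -> goes) and one insertion (the) -> distance 2.
print(edit_distance("he go to school".split(), "he goes to the school".split()))
```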

Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/df69/12186988/a743d0109be1/pone.0319690.g001.jpg
