Zheng Xiaofeng, Zhang Jian
School of International Exchange, Changchun Guanghua University, Changchun, 130000, China.
Sci Rep. 2025 Jun 2;15(1):19268. doi: 10.1038/s41598-025-05026-9.
The need for personalized and real-time feedback in English writing instruction is increasing rapidly. Traditional systems, which depend on rule-based engines and shallow machine learning models, struggle to meet this demand. They often fall short in addressing key aspects such as grammar correction, sentence variety, and logical coherence. This study introduces a multidimensional feedback system based on the Transformer architecture. The system combines self-attention mechanisms with a dynamic parameter adjustment module to deliver feedback at multiple levels, from individual words to entire paragraphs. A BERT model is fine-tuned on a large, diverse corpus that includes academic papers, blog posts, and student essays. As a result, the system can provide real-time suggestions that address grammar, vocabulary, sentence structure, and logic. Experimental results show that the system improves the writing quality of non-native learners while maintaining a feedback delay of just 1.8 s. Its modular design allows for the customization of learning paths, and user privacy is protected through differential privacy mechanisms. This approach offers a technically sound and educationally practical solution for developing AI-assisted writing tools across disciplines.
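The abstract gives no implementation details, so the following is only a minimal sketch of one plausible realization of the described pipeline: fine-tuning a BERT checkpoint for token-level grammar feedback with the Hugging Face transformers library. The checkpoint name, label set, learning rate, and toy training pair are illustrative assumptions, not the authors' configuration.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "bert-base-uncased"   # assumed base checkpoint
LABELS = ["OK", "GRAMMAR_ERROR"]   # hypothetical token-level tag set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # assumed learning rate

# Toy training pair: a sentence with a subject-verb agreement error.
words = "She go to school every day .".split()
tags = [0, 1, 0, 0, 0, 0, 0]  # mark "go" with the GRAMMAR_ERROR tag

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level tags to subword tokens; -100 masks special tokens from the loss.
word_ids = enc.word_ids(0)
labels = torch.tensor([[-100 if w is None else tags[w] for w in word_ids]])

model.train()
optimizer.zero_grad()
loss = model(**enc, labels=labels).loss  # cross-entropy over per-token tags
loss.backward()
optimizer.step()

At inference time, the same model's per-token predictions would drive the word-level suggestions the abstract describes; sentence- and paragraph-level feedback would require additional heads or models not sketched here.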
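Similarly, the abstract names differential privacy as the protection mechanism without specifying it further. The sketch below shows the Laplace mechanism, one standard way to privatize an aggregate statistic (for example, an error count exported from user essays); the sensitivity and epsilon values are illustrative assumptions.

import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    # Add Laplace noise with scale sensitivity/epsilon, the classic
    # epsilon-differentially-private release of a numeric query result.
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privatize a per-cohort grammar-error count before logging it.
private_count = laplace_mechanism(value=42.0, sensitivity=1.0, epsilon=0.5)

Whether the reported system applies noise to released statistics, to gradients during fine-tuning (as in DP-SGD), or elsewhere is not stated in the abstract.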