

Improving neural machine translation for low resource languages through non-parallel corpora: a case study of Egyptian dialect to modern standard Arabic translation.

Authors

Faheem Mohamed Atta, Wassif Khaled Tawfik, Bayomi Hanaa, Abdou Sherif Mahdy

Affiliations

Department of Computer Science, Faculty of Computers and Artificial Intelligence, Cairo University, Cairo, Egypt.

Department of Information Technology, Faculty of Computers and Artificial Intelligence, Cairo University, Cairo, Egypt.

Publication

Sci Rep. 2024 Jan 27;14(1):2265. doi: 10.1038/s41598-023-51090-4.

DOI:10.1038/s41598-023-51090-4
PMID:38280911
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10821946/
Abstract

Machine translation for low-resource languages poses significant challenges, primarily due to the limited availability of data. In recent years, unsupervised learning has emerged as a promising approach to overcome this issue by aiming to learn translations between languages without depending on parallel data. A wide range of methods have been proposed in the literature to address this complex problem. This paper presents an in-depth investigation of semi-supervised neural machine translation specifically focusing on translating Arabic dialects, particularly Egyptian, to Modern Standard Arabic. The study employs two distinct datasets: one parallel dataset containing aligned sentences in both dialects, and a monolingual dataset where the source dialect is not directly connected to the target language in the training data. Three different translation systems are explored in this study. The first is an attention-based sequence-to-sequence model that benefits from the shared vocabulary between the Egyptian dialect and Modern Arabic to learn word embeddings. The second is an unsupervised transformer model that depends solely on monolingual data, without any parallel data. The third system starts with the parallel dataset for an initial supervised learning phase and then incorporates the monolingual data during the training process.

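The first system in the abstract is an attention-based sequence-to-sequence model. As a rough illustration only (a minimal numpy sketch with made-up toy embeddings, not the authors' implementation), the scaled dot-product attention step that mixes source-side information into each decoding step looks like this:

```python
import numpy as np

def attend(query, keys, values):
    """Scaled dot-product attention: weight each source position by its
    similarity to the decoder query, then return the weighted mix of values."""
    scores = keys @ query / np.sqrt(len(query))   # one score per source position
    weights = np.exp(scores - scores.max())       # numerically stable softmax
    weights /= weights.sum()
    return weights @ values, weights

# Toy "source sentence" of 3 positions with 4-dim embeddings (illustrative values).
keys = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 1., 0.]])
values = np.array([[1.,  2.,  3.,  4.],
                   [5.,  6.,  7.,  8.],
                   [9., 10., 11., 12.]])
query = keys[1]  # a decoder state most similar to source position 1

context, weights = attend(query, keys, values)
# weights peaks at position 1, so context is dominated by values[1]
```

Because Egyptian Arabic and Modern Standard Arabic share much of their vocabulary, the paper's first system can tie the source and target embedding tables; the attention weights then effectively learn soft alignments between dialectal and standard word forms.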

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f77/10821946/2d416f6cdbb4/41598_2023_51090_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f77/10821946/3034a97d5c84/41598_2023_51090_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f77/10821946/e81677ebbe33/41598_2023_51090_Fig3_HTML.jpg

Similar Articles

1. Improving neural machine translation for low resource languages through non-parallel corpora: a case study of Egyptian dialect to modern standard Arabic translation.
Sci Rep. 2024 Jan 27;14(1):2265. doi: 10.1038/s41598-023-51090-4.
2. A Transformer-Based Neural Machine Translation Model for Arabic Dialects That Utilizes Subword Units.
Sensors (Basel). 2021 Sep 29;21(19):6509. doi: 10.3390/s21196509.
3. Semantic textual similarity for modern standard and dialectal Arabic using transfer learning.
PLoS One. 2022 Aug 11;17(8):e0272991. doi: 10.1371/journal.pone.0272991. eCollection 2022.
4. The neural machine translation models for the low-resource Kazakh-English language pair.
PeerJ Comput Sci. 2023 Feb 8;9:e1224. doi: 10.7717/peerj-cs.1224. eCollection 2023.
5. Improving Neural Machine Translation for Low Resource Algerian Dialect by Transductive Transfer Learning Strategy.
Arab J Sci Eng. 2022;47(8):10411-10418. doi: 10.1007/s13369-022-06588-w. Epub 2022 Feb 8.
6. ArzEn-MultiGenre: An aligned parallel dataset of Egyptian Arabic song lyrics, novels, and subtitles, with English translations.
Data Brief. 2024 Feb 29;54:110271. doi: 10.1016/j.dib.2024.110271. eCollection 2024 Jun.
7. A Neural Machine Translation Model for Arabic Dialects That Utilises Multitask Learning (MTL).
Comput Intell Neurosci. 2018 Dec 10;2018:7534712. doi: 10.1155/2018/7534712. eCollection 2018.
8. A7׳ta: Data on a monolingual Arabic parallel corpus for grammar checking.
Data Brief. 2018 Dec 4;22:237-240. doi: 10.1016/j.dib.2018.11.146. eCollection 2019 Feb.
9. Arabic punctuation dataset.
Data Brief. 2024 Feb 1;53:110118. doi: 10.1016/j.dib.2024.110118. eCollection 2024 Apr.
10. Translation of questionnaires into Arabic in cross-cultural research: techniques and equivalence issues.
J Transcult Nurs. 2013 Oct;24(4):363-70. doi: 10.1177/1043659613493440. Epub 2013 Jul 8.

Cited By

1. Research on optimal deep learning modeling in HaiNan dialect recognition.
Sci Rep. 2025 Aug 28;15(1):31735. doi: 10.1038/s41598-025-17569-y.
2. VERA-ARAB: unveiling the Arabic tweets credibility by constructing balanced news dataset for veracity analysis.
PeerJ Comput Sci. 2024 Oct 30;10:e2432. doi: 10.7717/peerj-cs.2432. eCollection 2024.

References

1. A Neural Machine Translation Model for Arabic Dialects That Utilises Multitask Learning (MTL).
Comput Intell Neurosci. 2018 Dec 10;2018:7534712. doi: 10.1155/2018/7534712. eCollection 2018.
2. Long short-term memory.
Neural Comput. 1997 Nov 15;9(8):1735-80. doi: 10.1162/neco.1997.9.8.1735.