Suppr 超能文献


Joint Extraction of Entities and Relations Using Reinforcement Learning and Deep Learning

Authors

Feng Yuntian, Zhang Hongjun, Hao Wenning, Chen Gang

Affiliation

Institute of Command Information System, PLA University of Science and Technology, Nanjing, Jiangsu 210007, China.

Publication

Comput Intell Neurosci. 2017;2017:7643065. doi: 10.1155/2017/7643065. Epub 2017 Aug 14.

DOI: 10.1155/2017/7643065
PMID: 28894463
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5574273/
Abstract

We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured texts. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from unstructured texts, which represents the state in the decision process. By designing the reward function for each step, our proposed method can pass the information of entity extraction to relation extraction and obtain feedback, so that entities and relations are extracted simultaneously. Firstly, we use a bidirectional LSTM to model the context information, which realizes preliminary entity extraction. On the basis of the extraction results, an attention-based method represents the sentences that include the target entity pair to generate the initial state in the decision process. Then we use a Tree-LSTM to represent relation mentions to generate the transition state in the decision process. Finally, we employ the Q-Learning algorithm to obtain the control policy in the two-step decision process. Experiments on ACE2005 demonstrate that our method attains better performance than the state-of-the-art method and achieves a 2.4% increase in recall.
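The two-step decision process the abstract describes can be sketched with a minimal tabular Q-Learning loop: step 1 makes an entity-level decision, step 2 a relation-level decision, and a per-step reward drives the value update. Everything below (state names, the keep/discard actions, the reward values) is a toy stand-in for illustration, not the paper's actual state representation or reward design.

```python
import random

ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
ACTIONS = ["keep", "discard"]   # hypothetical per-step choices
Q = {}                          # Q[(state, action)] -> estimated value

def q(state, action):
    return Q.get((state, action), 0.0)

def choose(state):
    # epsilon-greedy action selection
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))

def update(state, action, reward, next_state):
    # standard Q-Learning update toward reward + discounted best next value
    best_next = max(q(next_state, a) for a in ACTIONS)
    Q[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action))

def episode():
    # step 1: entity decision feeds into step 2: relation decision
    s1, s2 = "entity_state", "relation_state"
    a1 = choose(s1)
    r1 = 1.0 if a1 == "keep" else 0.0   # toy reward: "keep" is correct
    update(s1, a1, r1, s2)
    a2 = choose(s2)
    r2 = 1.0 if a2 == "keep" else 0.0
    update(s2, a2, r2, "terminal")

random.seed(0)
for _ in range(200):
    episode()

# after training, the greedy policy prefers the rewarded action
print(max(ACTIONS, key=lambda a: q("entity_state", a)))
```

Note how the step-1 value includes the discounted step-2 value, which is the mechanism by which feedback from relation extraction flows back to the entity decision.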

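The attention-based sentence representation mentioned in the abstract can likewise be sketched in a few lines: score each token against a query, softmax the scores, and take the weighted sum as a fixed-size vector. The token vectors and the entity-pair query here are made-up 2-dimensional stand-ins, not the paper's BiLSTM hidden states.

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(token_vecs, query):
    # score each token by dot product with the query vector
    scores = [sum(t * q for t, q in zip(vec, query)) for vec in token_vecs]
    weights = softmax(scores)
    dim = len(token_vecs[0])
    # weighted sum of token vectors -> fixed-size sentence representation
    return [sum(w * vec[d] for w, vec in zip(weights, token_vecs))
            for d in range(dim)]

sentence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy token vectors
query = [1.0, 0.0]                               # hypothetical entity-pair query
rep = attention_pool(sentence, query)
print(rep)  # 2-dimensional sentence vector
```

Tokens that align with the query receive higher weights, so the pooled vector is dominated by the tokens most relevant to the target entity pair.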

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3e5/5574273/25f730f2215b/CIN2017-7643065.alg.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3e5/5574273/98a209e48887/CIN2017-7643065.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3e5/5574273/bcd35ad60633/CIN2017-7643065.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3e5/5574273/b32e8976e84e/CIN2017-7643065.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3e5/5574273/29270d033774/CIN2017-7643065.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3e5/5574273/f4c1ab4f4f4c/CIN2017-7643065.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3e5/5574273/5e79e5372460/CIN2017-7643065.006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3e5/5574273/a1ef53bc97cf/CIN2017-7643065.007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3e5/5574273/732a9841ea04/CIN2017-7643065.008.jpg

Similar Articles

1. Joint Extraction of Entities and Relations Using Reinforcement Learning and Deep Learning.
   Comput Intell Neurosci. 2017;2017:7643065. doi: 10.1155/2017/7643065. Epub 2017 Aug 14.
2. Entity recognition from clinical texts via recurrent neural network.
   BMC Med Inform Decis Mak. 2017 Jul 5;17(Suppl 2):67. doi: 10.1186/s12911-017-0468-7.
3. Feedback for reinforcement learning based brain-machine interfaces using confidence metrics.
   J Neural Eng. 2017 Jun;14(3):036016. doi: 10.1088/1741-2552/aa6317. Epub 2017 Feb 27.
4. Extraction of Information Related to Drug Safety Surveillance From Electronic Health Record Notes: Joint Modeling of Entities and Relations Using Knowledge-Aware Neural Attentive Models.
   JMIR Med Inform. 2020 Jul 10;8(7):e18417. doi: 10.2196/18417.
5. A Relation-Oriented Model With Global Context Information for Joint Extraction of Overlapping Relations and Entities.
   Front Neurorobot. 2022 Jul 4;16:914705. doi: 10.3389/fnbot.2022.914705. eCollection 2022.
6. Improving the Named Entity Recognition of Chinese Electronic Medical Records by Combining Domain Dictionary and Rules.
   Int J Environ Res Public Health. 2020 Apr 14;17(8):2687. doi: 10.3390/ijerph17082687.
7. An attentive joint model with transformer-based weighted graph convolutional network for extracting adverse drug event relation.
   J Biomed Inform. 2022 Jan;125:103968. doi: 10.1016/j.jbi.2021.103968. Epub 2021 Dec 4.
8. Model-based reinforcement learning with dimension reduction.
   Neural Netw. 2016 Dec;84:1-16. doi: 10.1016/j.neunet.2016.08.005. Epub 2016 Aug 24.
9. Leveraging a Joint Learning Model to Extract Mixture Symptom Mentions from Traditional Chinese Medicine Clinical Notes.
   Biomed Res Int. 2022 Mar 8;2022:2146236. doi: 10.1155/2022/2146236. eCollection 2022.
10. A Sentence-Level Joint Relation Classification Model Based on Reinforcement Learning.
   Comput Intell Neurosci. 2021 May 26;2021:5557184. doi: 10.1155/2021/5557184. eCollection 2021.

Cited By

1. Protein-Protein Interaction Network Extraction Using Text Mining Methods Adds Insight into Autism Spectrum Disorder.
   Biology (Basel). 2023 Oct 18;12(10):1344. doi: 10.3390/biology12101344.
2. Deep Multi-Scale Residual Connected Neural Network Model for Intelligent Athlete Balance Control Ability Evaluation.
   Comput Intell Neurosci. 2022 May 26;2022:9012709. doi: 10.1155/2022/9012709. eCollection 2022.
3. Neuroimaging-ITM: A Text Mining Pipeline Combining Deep Adversarial Learning with Interaction Based Topic Modeling for Enabling the FAIR Neuroimaging Study.
   Neuroinformatics. 2022 Jul;20(3):701-726. doi: 10.1007/s12021-022-09571-w. Epub 2022 Mar 2.

References

1. Context transfer in reinforcement learning using action-value functions.
   Comput Intell Neurosci. 2014;2014:428567. doi: 10.1155/2014/428567. Epub 2014 Dec 31.
2. A reinforcement learning framework for spiking networks with dynamic synapses.
   Comput Intell Neurosci. 2011;2011:869348. doi: 10.1155/2011/869348. Epub 2011 Oct 23.
3. Long short-term memory.
   Neural Comput. 1997 Nov 15;9(8):1735-80. doi: 10.1162/neco.1997.9.8.1735.