
MTQA: Text-Based Multitype Question and Answer Reading Comprehension Model.

Authors

Chen Deguang, Ma Ziping, Wei Lin, Ma Jinlin, Zhu Yanbin

Affiliations

School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China.

School of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China.

Publication

Comput Intell Neurosci. 2021 Feb 18;2021:8810366. doi: 10.1155/2021/8810366. eCollection 2021.

DOI: 10.1155/2021/8810366
PMID: 33679967
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7910065/
Abstract

Text-based multitype question answering is one of the research hotspots in the field of reading comprehension models. Multitype reading comprehension models have the characteristics of shorter time to propose, complex components of relevant corpus, and greater difficulty in model construction. There are relatively few research works in this field. Therefore, it is urgent to improve the model performance. In this paper, a text-based multitype question and answer reading comprehension model (MTQA) is proposed. The model is based on a multilayer transformer encoding and decoding structure. In the decoding structure, the headers of the answer type prediction decoding, fragment decoding, arithmetic decoding, counting decoding, and negation are added for the characteristics of multiple types of corpora. Meanwhile, high-performance ELECTRA checkpoints are employed, and secondary pretraining based on these checkpoints and an absolute loss function are designed to improve the model performance. The experimental results show that the performance of the proposed model on the DROP and QUOREF corpora is better than the best results of the current existing models, which proves that the proposed MTQA model has high feature extraction and relatively strong generalization capabilities.
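The multi-head decoding design the abstract describes (an answer-type prediction head that routes to span/fragment, arithmetic, counting, and negation heads) can be sketched roughly as follows. This is a toy illustration only, not the authors' implementation: the random weights, mean pooling, and the two heads shown (type prediction and span pointers) are stand-ins for trained parameters, and the real MTQA model operates on multilayer transformer/ELECTRA encodings.

```python
import math
import random

random.seed(0)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy "encoder output": one hidden vector per token, standing in for
# the multilayer transformer encodings the model is built on.
seq_len, hidden = 8, 16
H = [[random.gauss(0, 1) for _ in range(hidden)] for _ in range(seq_len)]

# Pooled sequence representation for the answer-type prediction head.
pooled = [sum(col) / seq_len for col in zip(*H)]

# Answer-type head: decides which decoder head produces the answer.
TYPES = ["span", "arithmetic", "count", "negation"]
W_type = [[random.gauss(0, 1) for _ in range(hidden)] for _ in TYPES]
type_logits = [sum(w * p for w, p in zip(row, pooled)) for row in W_type]
type_probs = softmax(type_logits)
answer_type = TYPES[type_probs.index(max(type_probs))]

# Span (fragment) head: start/end pointer distributions over tokens.
w_start = [random.gauss(0, 1) for _ in range(hidden)]
w_end = [random.gauss(0, 1) for _ in range(hidden)]
start_probs = softmax([sum(h * w for h, w in zip(tok, w_start)) for tok in H])
end_probs = softmax([sum(h * w for h, w in zip(tok, w_end)) for tok in H])
start = start_probs.index(max(start_probs))
end = end_probs.index(max(end_probs))
```

At inference time the predicted `answer_type` selects which head's output is returned; the arithmetic, counting, and negation heads would be classifiers or sign predictors over the same encodings, trained jointly with the type head.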


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/941d/7910065/44147aa43810/CIN2021-8810366.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/941d/7910065/6426c50b1f33/CIN2021-8810366.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/941d/7910065/677ab13e2593/CIN2021-8810366.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/941d/7910065/b707d992d01a/CIN2021-8810366.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/941d/7910065/5a976a11db64/CIN2021-8810366.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/941d/7910065/ecfecfe1412d/CIN2021-8810366.006.jpg

Similar articles

1
MTQA: Text-Based Multitype Question and Answer Reading Comprehension Model.
Comput Intell Neurosci. 2021 Feb 18;2021:8810366. doi: 10.1155/2021/8810366. eCollection 2021.
2
HCT: Chinese Medical Machine Reading Comprehension Question-Answering via Hierarchically Collaborative Transformer.
IEEE J Biomed Health Inform. 2024 May;28(5):3055-3066. doi: 10.1109/JBHI.2024.3368288. Epub 2024 May 6.
3
Efficient Machine Reading Comprehension for Health Care Applications: Algorithm Development and Validation of a Context Extraction Approach.
JMIR Form Res. 2024 Mar 25;8:e52482. doi: 10.2196/52482.
4
Reading comprehension based question answering system in Bangla language with transformer-based learning.
Heliyon. 2022 Oct 12;8(10):e11052. doi: 10.1016/j.heliyon.2022.e11052. eCollection 2022 Oct.
5
Analysis of English Multitext Reading Comprehension Model Based on Deep Belief Neural Network.
Comput Intell Neurosci. 2021 Sep 15;2021:5100809. doi: 10.1155/2021/5100809. eCollection 2021.
6
Examining reading comprehension text and question answering time differences in university students with and without a history of reading difficulties.
Ann Dyslexia. 2018 Apr;68(1):15-24. doi: 10.1007/s11881-017-0153-7. Epub 2017 Nov 17.
7
The role of speech prosody and text reading prosody in children's reading comprehension.
Br J Educ Psychol. 2014 Dec;84(Pt 4):521-36. doi: 10.1111/bjep.12036. Epub 2014 May 2.
8
The Contribution of Text Characteristics to Reading Comprehension: Investigating the Influence of Text Emotionality.
Read Res Q. 2022 Apr-Jun;57(2):649-667. doi: 10.1002/rrq.431. Epub 2021 Jun 28.
9
Assessment of inference-making in children using comprehension questions and story retelling: Effect of text modality and a story presentation format.
Int J Lang Commun Disord. 2021 May;56(3):637-652. doi: 10.1111/1460-6984.12620.
10
Brain basis of cognitive resilience: Prefrontal cortex predicts better reading comprehension in relation to decoding.
PLoS One. 2018 Jun 14;13(6):e0198791. doi: 10.1371/journal.pone.0198791. eCollection 2018.

Cited by

1
On solving textual ambiguities and semantic vagueness in MRC based question answering using generative pre-trained transformers.
PeerJ Comput Sci. 2023 Jul 24;9:e1422. doi: 10.7717/peerj-cs.1422. eCollection 2023.
2
A Comprehensive Survey of Abstractive Text Summarization Based on Deep Learning.
Comput Intell Neurosci. 2022 Aug 1;2022:7132226. doi: 10.1155/2022/7132226. eCollection 2022.

References

1
Explainable Personality Prediction Using Answers to Open-Ended Interview Questions.
Front Psychol. 2022 Nov 18;13:865841. doi: 10.3389/fpsyg.2022.865841. eCollection 2022.