Chen Deguang, Ma Ziping, Wei Lin, Ma Jinlin, Zhu Yanbin
School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China.
School of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China.
Comput Intell Neurosci. 2021 Feb 18;2021:8810366. doi: 10.1155/2021/8810366. eCollection 2021.
Text-based multitype question answering is one of the research hotspots in the field of reading comprehension models. Multitype reading comprehension is a relatively recent task: its corpora mix heterogeneous answer types, model construction is difficult, and comparatively little research exists in this area, so improving model performance is an urgent need. In this paper, a text-based multitype question-answering reading comprehension model (MTQA) is proposed. The model is built on a multilayer Transformer encoder-decoder structure. In the decoder, heads for answer-type prediction, span extraction, arithmetic, counting, and negation are added to match the characteristics of multitype corpora. In addition, high-performance ELECTRA checkpoints are employed, and both secondary pretraining based on these checkpoints and an absolute loss function are designed to improve model performance. The experimental results show that the proposed model outperforms the best results of existing models on the DROP and QUOREF corpora, which demonstrates that the proposed MTQA model has strong feature-extraction and relatively strong generalization capabilities.
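The abstract describes a decoder that attaches several prediction heads (answer-type, span, arithmetic, counting, negation) on top of a shared Transformer encoder. The sketch below illustrates that general idea only; it is not the authors' implementation. All names, dimensions, and the random toy weights are illustrative assumptions standing in for the learned ELECTRA-based layers.

```python
import numpy as np

rng = np.random.default_rng(0)
SEQ_LEN, HIDDEN = 16, 32  # toy sizes; a real model would use ELECTRA-sized dims


def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


class MultiTypeHeads:
    """Toy illustration of the five decoding heads named in the abstract.

    Weights are random placeholders; in the actual model they would be
    trained jointly with the encoder.
    """

    def __init__(self, hidden=HIDDEN):
        self.W_type = rng.normal(size=(hidden, 4))    # span/arithmetic/count/negation
        self.W_span = rng.normal(size=(hidden, 2))    # per-token start/end logits
        self.W_sign = rng.normal(size=(hidden, 3))    # per-token sign {+, -, 0}
        self.W_count = rng.normal(size=(hidden, 10))  # count as a 10-way classification
        self.W_neg = rng.normal(size=(hidden, 2))     # negation yes/no

    def forward(self, enc):
        """enc: (seq_len, hidden) encoder output; returns one distribution per head."""
        pooled = enc.mean(axis=0)  # simple mean pooling over tokens
        return {
            "answer_type": softmax(pooled @ self.W_type),
            "span": softmax(enc @ self.W_span, axis=0),  # start/end over tokens
            "signs": softmax(enc @ self.W_sign),
            "count": softmax(pooled @ self.W_count),
            "negation": softmax(pooled @ self.W_neg),
        }


heads = MultiTypeHeads()
enc = rng.normal(size=(SEQ_LEN, HIDDEN))  # stands in for the ELECTRA encoder output
out = heads.forward(enc)
```

At inference time, the answer-type head would first pick which answer family applies, and only the corresponding head's output would be decoded into the final answer.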