
Automatic computer science domain multiple-choice questions generation based on informative sentences.

Authors

Maheen Farah, Asif Muhammad, Ahmad Haseeb, Ahmad Shahbaz, Alturise Fahad, Asiry Othman, Ghadi Yazeed Yasin

Affiliations

Department of Computer Science, National Textile University, Faisalabad, Pakistan.

Department of Computer, College of Science and Arts in Ar Rass, Qassim University, Ar Rass, Qassim, Saudi Arabia.

Publication

PeerJ Comput Sci. 2022 Aug 16;8:e1010. doi: 10.7717/peerj-cs.1010. eCollection 2022.

DOI: 10.7717/peerj-cs.1010
PMID: 36091982
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9454961/
Abstract

Students require continuous feedback for effective learning. Among the many assessment methods, multiple-choice questions (MCQs) are widely used to provide such feedback. However, manual MCQ generation is a tedious task that requires significant effort, time, and domain knowledge, so a system that can automatically generate MCQs from a given text is needed. Automatic MCQ generation proceeds in three sequential steps: extracting informative sentences from the textual data, identifying the key (the correct answer), and determining distractors. This work uses a dataset of various topics drawn from 9th- and 11th-grade computer science course books, and applies TF-IDF, Jaccard similarity, quality phrase mining, K-means, and bidirectional encoder representations from transformers (BERT) for automatic MCQ generation. Domain experts validated the output, rating question generation, key generation, and distractor generation at 83%, 77%, and 80% accuracy, respectively; overall MCQ generation was judged 80% accurate. Finally, a desktop app was developed that takes textual content as input, processes it at the backend, and visualizes the generated MCQs in its interface. The presented solution may help teachers, students, and other stakeholders with automatic MCQ generation.
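The first pipeline step (informative sentence extraction with TF-IDF and Jaccard similarity) can be sketched as follows. This is an illustrative toy, not the authors' implementation: it scores each sentence by its mean TF-IDF weight (treating each sentence as a document) and uses Jaccard similarity to drop near-duplicate picks; the paper's actual selection details may differ.

```python
import math
from collections import Counter

def tokenize(text):
    return [w.strip(".,;:!?()").lower() for w in text.split()]

def jaccard(a, b):
    """Jaccard similarity of the word sets of two sentences."""
    sa, sb = set(tokenize(a)), set(tokenize(b))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def tfidf_scores(sentences):
    """Mean TF-IDF weight per sentence, treating each sentence as a document."""
    docs = [tokenize(s) for s in sentences]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))  # document frequency
    scores = []
    for d in docs:
        tf = Counter(d)
        total = sum((tf[w] / len(d)) * math.log(n / df[w]) for w in tf)
        scores.append(total / max(1, len(tf)))
    return scores

def informative_sentences(sentences, top_k=3, max_overlap=0.6):
    """Pick the top-k highest-scoring sentences, skipping near-duplicates."""
    ranked = sorted(zip(tfidf_scores(sentences), sentences), reverse=True)
    picked = []
    for _, s in ranked:
        if all(jaccard(s, p) < max_overlap for p in picked):
            picked.append(s)
        if len(picked) == top_k:
            break
    return picked
```

Scoring and de-duplication are deliberately separate here, so either heuristic can be swapped out (e.g. for a quality-phrase-based score) without touching the other.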

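For the distractor step, the abstract pairs K-means with BERT embeddings. A minimal sketch of that idea, with hand-made 2-D vectors standing in for real embeddings (the function names and the cluster-based selection rule are illustrative assumptions, not the paper's exact method): cluster candidate terms, then offer as distractors the terms falling in the same cluster as the key, since semantically close terms make plausible wrong answers.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vectors, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; returns a cluster label per vector."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)
    labels = [0] * len(vectors)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(v, centers[c])) for v in vectors]
        for c in range(k):
            members = [v for v, l in zip(vectors, labels) if l == c]
            if members:
                centers[c] = [sum(x) / len(members) for x in zip(*members)]
    return labels

def distractors(key, candidates, embeddings, k=2, n=3):
    """Pick up to n candidates from the key's cluster as plausible distractors.

    `embeddings` maps each term to a vector (in practice, e.g. a BERT embedding;
    here any fixed-length list of floats works)."""
    terms = [key] + [c for c in candidates if c != key]
    vecs = [embeddings[t] for t in terms]
    labels = kmeans(vecs, k)
    key_cluster = labels[0]
    return [t for t, l in zip(terms[1:], labels[1:]) if l == key_cluster][:n]
```

With well-separated embeddings, terms near the key (e.g. other data structures for the key "stack") end up in its cluster, while unrelated terms (e.g. hardware names) land elsewhere and are excluded.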

Figures (PMC image links, g001–g013):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/5557447b03d0/peerj-cs-08-1010-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/393a64006537/peerj-cs-08-1010-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/60a9253041b8/peerj-cs-08-1010-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/9cc48626a318/peerj-cs-08-1010-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/82737c0e5171/peerj-cs-08-1010-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/383fda5a0ef0/peerj-cs-08-1010-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/a29d6c7da454/peerj-cs-08-1010-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/2e617254ee1b/peerj-cs-08-1010-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/507b6ed71b6a/peerj-cs-08-1010-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/bf5c7eb4c61d/peerj-cs-08-1010-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/68e38589f507/peerj-cs-08-1010-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/dda506f78f4a/peerj-cs-08-1010-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8e7/9454961/d7c669f885c0/peerj-cs-08-1010-g013.jpg

Similar Articles

1. Automatic computer science domain multiple-choice questions generation based on informative sentences.
   PeerJ Comput Sci. 2022 Aug 16;8:e1010. doi: 10.7717/peerj-cs.1010. eCollection 2022.
2. A novel student-led approach to multiple-choice question generation and online database creation, with targeted clinician input.
   Teach Learn Med. 2015;27(2):182-8. doi: 10.1080/10401334.2015.1011651.
3. Medical students create multiple-choice questions for learning in pathology education: a pilot study.
   BMC Med Educ. 2018 Aug 22;18(1):201. doi: 10.1186/s12909-018-1312-1.
4. PeerWise and Pathology: Discontinuing a teaching innovation that did not achieve its potential.
   MedEdPublish (2016). 2020 Oct 14;9:27. doi: 10.15694/mep.2020.000027.2. eCollection 2020.
5. Adapting Bidirectional Encoder Representations from Transformers (BERT) to Assess Clinical Semantic Textual Similarity: Algorithm Development and Validation Study.
   JMIR Med Inform. 2021 Feb 3;9(2):e22795. doi: 10.2196/22795.
6. Using Automatic Item Generation to Improve the Quality of MCQ Distractors.
   Teach Learn Med. 2016;28(2):166-73. doi: 10.1080/10401334.2016.1146608.
7. The Effect of a One-Day Workshop on the Quality of Framing Multiple Choice Questions in Physiology in a Medical College in India.
   Cureus. 2023 Aug 24;15(8):e44049. doi: 10.7759/cureus.44049. eCollection 2023 Aug.
8. Item analysis of multiple choice questions: A quality assurance test for an assessment tool.
   Med J Armed Forces India. 2021 Feb;77(Suppl 1):S85-S89. doi: 10.1016/j.mjafi.2020.11.007. Epub 2021 Feb 2.
9. Identifying the Perceived Severity of Patient-Generated Telemedical Queries Regarding COVID: Developing and Evaluating a Transfer Learning-Based Solution.
   JMIR Med Inform. 2022 Sep 2;10(9):e37770. doi: 10.2196/37770.
10. Case-based MCQ generator: A custom ChatGPT based on published prompts in the literature for automatic item generation.
    Med Teach. 2024 Aug;46(8):1018-1020. doi: 10.1080/0142159X.2024.2314723. Epub 2024 Feb 10.

Cited By

1. Automatic distractor generation in multiple-choice questions: a systematic literature review.
   PeerJ Comput Sci. 2024 Nov 13;10:e2441. doi: 10.7717/peerj-cs.2441. eCollection 2024.
2. Beyond top-k: knowledge reasoning for multi-answer temporal questions based on revalidation framework.
   PeerJ Comput Sci. 2023 Dec 8;9:e1725. doi: 10.7717/peerj-cs.1725. eCollection 2023.
3. ChatGPT 3.5 fails to write appropriate multiple choice practice exam questions.
   Acad Pathol. 2023 Dec 19;11(1):100099. doi: 10.1016/j.acpath.2023.100099. eCollection 2024 Jan-Mar.

References Cited by This Article

1. Developing and evaluating cybersecurity competencies for students in computing programs.
   PeerJ Comput Sci. 2022 Jan 17;8:e827. doi: 10.7717/peerj-cs.827. eCollection 2022.
2. FNG-IE: an improved graph-based method for keyword extraction from scholarly big-data.
   PeerJ Comput Sci. 2021 Mar 11;7:e389. doi: 10.7717/peerj-cs.389. eCollection 2021.
3. Evaluation of automatically generated English vocabulary questions.
   Res Pract Technol Enhanc Learn. 2017;12(1):11. doi: 10.1186/s41039-017-0051-y. Epub 2017 Mar 7.
4. Mining Quality Phrases from Massive Text Corpora.
   Proc ACM SIGMOD Int Conf Manag Data. 2015 May-Jun;2015:1729-1744. doi: 10.1145/2723372.2751523.