Guidolin Keegan, Lin Justin, Zorigtbaatar Anudari, Nadeem Minahil, Ibrahim Tarek, Neilson Zdenka, Kim Kyung Young Peter, Rajendran Luckshi, Chadi Sami, Quereshy Fayez
Department of Surgery, University of Toronto, Toronto, ON, Canada.
Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada.
Ann Surg. 2022 Nov 1;276(5):e275-e283. doi: 10.1097/SLA.0000000000005521. Epub 2022 Jul 8.
The objective of this study was to assess the quality and accuracy of visual abstracts published in academic surgical journals.
Visual abstracts are commonly used to disseminate medical research findings. They distill the key messages of a research article, presenting them graphically in an engaging manner so that potential readers can decide whether to read the complete manuscript.
We developed the Visual Abstract Assessment Tool based upon published guidelines. Seven reviewers underwent iterative training to apply the tool. We collected visual abstracts published by 25 surgical journals from January 2017 to April 2021; those corresponding to systematic reviews without meta-analysis, conference abstracts, narrative reviews, video abstracts, or nonclinical research were excluded. Included visual abstracts were scored on accuracy (as compared with written abstracts) and design, and were given a "first impression" score.
Across 25 surgical journals, 1325 visual abstracts were scored. We found accuracy deficits in the reporting of study design (35.8%), appropriate icon use (49.0%), and sample size reporting (69.2%), and design deficits in element alignment (54.8%) and symmetry (36.1%). Overall scores ranged from 9 to 14 (out of 15), accuracy scores from 4 to 8 (out of 8), and design scores from 3 to 7 (out of 7). No predictors of visual abstract score were identified.
Visual abstracts vary widely in quality. As visual abstracts become integrated with the traditional components of scientific publication, they must be held to similarly high standards. We propose a checklist for use by authors and journals to standardize the quality of visual abstracts.