Mijatović Antonija, Žuljević Marija Franka, Ursić Luka, Bralić Nensi, Vuković Miro, Roguljić Marija, Marušić Ana
Department of Research in Biomedicine and Health, Center for Evidence-Based Medicine, School of Medicine, University of Split, Split, Croatia.
Department of Medical Humanities, Center for Evidence-Based Medicine, School of Medicine, University of Split, Split, Croatia.
Res Integr Peer Rev. 2025 Aug 8;10(1):14. doi: 10.1186/s41073-025-00172-0.
Inappropriate manipulations of digital images pose significant risks to research integrity. Here we assessed the capability of students and researchers to detect image duplications in biomedical images.
We conducted a pen-and-paper survey involving medical students who had been exposed to research paper images during their studies, as well as active researchers. We asked them to identify duplications in images of Western blots, cell cultures, and histological sections and evaluated their performance based on the number of correctly and incorrectly detected duplications.
A total of 831 students and 26 researchers completed the survey during the 2023/2024 academic year. Out of 34 duplications of 21 unique image parts, the students correctly identified a median of 10 duplications (interquartile range [IQR] = 8-13) and made 2 mistakes (IQR = 1-4), whereas the researchers identified a median of 11 duplications (IQR = 8-14) and made 1 mistake (IQR = 1-3). There were no significant differences between the two groups in either the number of correctly detected duplications (p = .271, Cliff's δ = 0.126) or the number of mistakes (p = .731, Cliff's δ = 0.039). Both students and researchers identified a higher percentage of duplications in the Western blot images than in the cell or tissue images (p < .005, Cohen's d = 0.72; p < .005, Cohen's d = 1.01, respectively). For students, gender was a weak predictor of performance, with female participants finding slightly more duplications (p < .005, Cliff's δ = 0.158) but also making more mistakes (p < .005, Cliff's δ = 0.239). The study year had no significant impact on student performance (p = .209; Cliff's δ = 0.085).
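The results above report two effect-size measures: Cliff's δ (a nonparametric measure of how often values in one group exceed those in the other) and Cohen's d (a standardized mean difference). As a reference for readers, here is a minimal sketch of both statistics; this is illustrative only and is not the authors' analysis code:

```python
import numpy as np

def cliffs_delta(x, y):
    # Cliff's delta: proportion of (x, y) pairs with x > y minus the
    # proportion with x < y; ranges from -1 to 1, where 0 means full overlap.
    x = np.asarray(x)[:, None]
    y = np.asarray(y)[None, :]
    return float((np.sum(x > y) - np.sum(x < y)) / (x.size * y.size))

def cohens_d(x, y):
    # Cohen's d: difference in means divided by the pooled standard deviation.
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                     / (nx + ny - 2))
    return float((x.mean() - y.mean()) / pooled)
```

Because the per-participant scores are counts with skewed distributions, the rank-based Cliff's δ is the more robust choice for the group comparisons, while Cohen's d suits the paired comparison of detection percentages across image types.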
Despite differences in expertise, both students and researchers demonstrated limited proficiency in detecting duplications in digital images. Digital image manipulation may be better detected by automated screening tools, and researchers should have clear guidance on how to prepare digital images in scientific publications.