Department of Dermatology, Duke University School of Medicine, Durham, North Carolina.
Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, North Carolina.
JAMA Dermatol. 2022 Oct 1;158(10):1183-1186. doi: 10.1001/jamadermatol.2022.2815.
Patient-submitted images vary considerably in quality and usefulness. Studies that characterize patient-submitted images in a real-life setting are lacking.
To evaluate the quality and perceived usefulness of patient-submitted images as determined by dermatologists and characterize agreement of their responses.
DESIGN, SETTING, AND PARTICIPANTS: This survey study included patient images submitted to the Department of Dermatology at Duke University (Durham, North Carolina) between August 1, 2018, and December 31, 2019. From a total pool of 1200 images, 10 dermatologists evaluated 200 or 400 images each, with each image evaluated by 3 dermatologists. Data analysis was completed in the year preceding manuscript preparation.
The primary outcomes were the responses to 2 questions assessing image quality and perceived usefulness; they were analyzed using frequency counts and interrater agreement (Fleiss κ). We fit a random-effects logistic regression model to investigate factors associated with evaluators' comfort with medical decision-making. We hypothesized that most images would be of low quality and low perceived usefulness, and that interrater agreement would be poor.
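Fleiss κ, the agreement statistic used here, compares observed per-subject agreement against agreement expected by chance from the marginal category proportions. A minimal sketch of the calculation follows; the rating data are hypothetical and are not the study's data, though the 3-raters-per-image design mirrors the one described above.

```python
# Sketch of Fleiss' kappa for a fixed number of raters per subject.
# Hypothetical data: 6 images, each rated by 3 dermatologists as
# "sufficient quality" (column 0) or "insufficient" (column 1).

def fleiss_kappa(counts):
    """counts: one row per subject of per-category rating counts;
    every row must sum to the same number of raters."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    total = n_subjects * n_raters

    # Mean observed per-subject agreement, P_bar
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects

    # Chance agreement P_e from marginal category proportions
    p_e = sum(
        (sum(row[j] for row in counts) / total) ** 2
        for j in range(len(counts[0]))
    )
    return (p_bar - p_e) / (1 - p_e)

ratings = [
    [3, 0],
    [2, 1],
    [3, 0],
    [1, 2],
    [0, 3],
    [2, 1],
]
print(round(fleiss_kappa(ratings), 3))  # prints 0.299
```

A value near 0.3, as in this toy example, falls in the "fair" band of the commonly used Landis-Koch interpretation, comparable to the perceived-usefulness agreement reported below.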
A total of 259 of 2915 patient-submitted images (8.9%) did not depict a skin condition at all. The final analysis comprised 3600 unique image evaluations. Dermatologist evaluators indicated that 1985 images (55.1%) were useful for medical decision-making and 2239 (62.2%) were of sufficient quality. Interrater agreement on a given image's diagnostic categorization was fair to substantial (κ range, 0.36-0.64), while agreement on image quality (κ range, 0.35-0.47) and perceived usefulness (κ range, 0.29-0.38) was fair to moderate. Senior faculty had higher odds of feeling comfortable with medical decision-making than junior faculty (odds ratio [OR], 3.68; 95% CI, 2.90-4.66; P < .001) and residents (OR, 5.55; 95% CI, 4.38-7.04; P < .001). Images depicting wounds (vs inflammatory skin conditions; OR, 1.75; 95% CI, 1.18-2.58; P = .01) and images that were in focus (OR, 5.56; 95% CI, 4.63-6.67; P < .001) had higher odds of being considered useful for decision-making.
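The odds ratios above come from logistic regression, where a reported OR is the exponentiated coefficient, OR = exp(β), and a 95% CI is exp(β ± 1.96·SE). The sketch below back-calculates the implied coefficient and standard error from one reported result (senior vs junior faculty: OR 3.68, 95% CI 2.90-4.66); it illustrates the standard OR-coefficient relationship only and is not a refit of the study's random-effects model.

```python
import math

# Reported result: OR 3.68, 95% CI 2.90-4.66 (senior vs junior faculty).
or_point, ci_lo, ci_hi = 3.68, 2.90, 4.66

beta = math.log(or_point)                               # log-odds coefficient
se = (math.log(ci_hi) - math.log(ci_lo)) / (2 * 1.96)   # implied standard error

# Reconstruct the CI from beta and se as a consistency check; small
# discrepancies from the published interval reflect rounding.
lo = math.exp(beta - 1.96 * se)
hi = math.exp(beta + 1.96 * se)
print(f"beta={beta:.3f}, SE={se:.3f}, CI=({lo:.2f}, {hi:.2f})")
```

Because the CI is symmetric on the log-odds scale, the reconstructed interval is centered geometrically on the OR, which is why published ORs sit closer to the lower CI bound than the upper one.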
In this survey study including 10 dermatologists, a slight majority of patient-submitted images were judged to be of adequate quality and perceived usefulness. Agreement between dermatologists on image quality and perceived usefulness was only fair, suggesting that store-and-forward teledermatology initiatives should account for a physician's individual experience and comfort level. The results suggest that images are most likely to be useful when they are in focus and reviewed by experienced attending physicians for wound surveillance, but that dermatologists may be burdened by irrelevant or unsuitable images.