Chandasir Abdullah B, Skariah Justin T, Abes Justin D, Patel Akshar, Lomis Mitchell J, Chandasir Noora S, Owens Brett D, Parada Stephen A
Orthopedic Surgery, Augusta University Medical College of Georgia, Augusta, USA.
Orthopedic Surgery, Philadelphia College of Osteopathic Medicine, Suwanee, USA.
Cureus. 2025 Feb 4;17(2):e78518. doi: 10.7759/cureus.78518. eCollection 2025 Feb.
This study evaluated differences in video quality, reliability, actionability, and understandability based on video length, popularity, and source credentials (physician versus non-physician). It was hypothesized that currently available videos are of low quality and of limited usefulness to patients, with significant disparities based on the credentials of the video source.
The phrase "acromioclavicular joint separation" was searched on YouTube. The first 100 videos that populated were selected. Of those 100, 45 were excluded based on pre-existing criteria. Two reviewers watched and graded the included videos using four established, additive algorithmic grading scales. Grades for all included videos were analyzed using R software version 4.2.3.
The mean Journal of the American Medical Association (JAMA) score was 2.32 (standard deviation (SD) = 0.74), with patient-made videos having a significantly lower reliability score (p = 0.008). The mean Patient Education Materials Assessment Tool (PEMAT) understandability and actionability scores were 59.78% (SD = 15.28%) and 67.55% (SD = 15.28%), respectively. PEMAT actionability scores were positively correlated with views (p = 0.002). The mean DISCERN score was 2.51 (SD = 0.70); longer videos were associated with higher DISCERN scores (p = 0.047).
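As a minimal sketch only, the reported group comparisons and score-to-views associations could be reproduced in R along the following lines; the data frame, column names, file name, and choice of tests (Kruskal-Wallis, Spearman) are assumptions for illustration and are not stated in the abstract.

```r
# Hypothetical data frame: one row per included video, with graded scores,
# source type, view count, and length. Column names are illustrative.
videos <- read.csv("acj_separation_videos.csv")

# Reliability (JAMA score) compared across video source types
kruskal.test(jama_score ~ source_type, data = videos)

# Association between PEMAT actionability and view count
cor.test(videos$pemat_actionability, videos$views, method = "spearman")

# Association between video length and DISCERN score
cor.test(videos$length_seconds, videos$discern_score, method = "spearman")
```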
Analysis indicated significant differences in reliability and understandability between video source types. Additionally, there was no correlation between views and either quality or reliability, indicating that the YouTube algorithm is not an effective indicator of video quality.