

Readability analysis of breast cancer resources shared on X-implications for patient education and the potential of AI.

Authors

Wang Melanie J, Rastegar Aref, Kung Theodore A

Affiliation

Department of Surgery, Section of Plastic Surgery, University of Michigan, Ann Arbor, MI, USA.

Publication

Breast Cancer Res Treat. 2025 Aug 6. doi: 10.1007/s10549-025-07799-z.

DOI: 10.1007/s10549-025-07799-z
PMID: 40767984
Abstract

PURPOSE

Breast cancer remains a global public health burden. This study aimed to evaluate the readability of breast cancer articles shared on X (formerly Twitter) during Breast Cancer Awareness Month (October 2024) and to explore the potential of artificial intelligence (AI) to improve readability.

METHODS

We identified the top articles (n = 377) from posts containing #breastcancer on X during October 2024. Each article was analyzed using nine established readability tests: Automated Readability Index (ARI), Coleman-Liau, Flesch-Kincaid, Flesch Reading Ease, FORCAST Readability Formula, Fry Graph, Gunning Fog Index, Raygor Readability Estimate, and Simple Measure of Gobbledygook (SMOG). Sharing entities were categorized into five groups: academic medical centers, healthcare providers, government institutions, scientific journals, and all others. This approach evaluated the readability of breast cancer articles across sources during a period of peak public engagement. In parallel, a pilot study used AI to improve readability. Statistical analysis was performed using SPSS.
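Several of the tests above are simple formulas over sentence, word, and syllable counts. As an illustration only (not the study's implementation), the Flesch Reading Ease and Flesch-Kincaid grade level can be sketched in Python using their published formulas and a naive vowel-group syllable heuristic:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels; at least 1 per word.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_scores(text: str):
    """Return (reading_ease, grade_level) for a passage of English text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

As the abstract notes, the two scales run in opposite directions: short, simple sentences score high on Reading Ease and low on grade level, while dense technical prose does the reverse. Production readability tools use dictionary-based syllable counts rather than this heuristic.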

RESULTS

A total of 377 articles were analyzed, shared by the following entities: academic medical centers (35, 9.3%), healthcare providers (57, 15.2%), government institutions (21, 5.6%), scientific journals (63, 16.8%), and all others (199, 53.1%). Government institutions shared articles with the lowest average readability grade level (12.71 ± 0.79); scientific journals (16.57 ± 0.09), healthcare providers (15.49 ± 0.32), all others (14.89 ± 0.17), and academic medical centers (13.56 ± 0.39) were higher. Articles were also split by type: patient education (222, 58.9%), open-access journal (119, 31.5%), and full journal (37, 9.6%). Patient education articles had the lowest average readability grade level (15.19 ± 0.17); open-access and full journals averaged 16.65 ± 0.06 and 16.53 ± 0.10, respectively. Mean Flesch Reading Ease scores were 38.14 ± 1.2 for patient education, 16.14 ± 0.89 for open-access journals, and 17.69 ± 2.14 for full journals. Of note, lower readability grade levels indicate easier-to-read text, while higher Flesch Reading Ease scores indicate easier reading. In a demonstration on 5 sample articles, AI lowered the average readability grade level from 12.58 ± 0.83 to 6.56 ± 0.28 (p < 0.001).
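The AI demonstration compares each article's grade level before and after rewriting, which is the setting for a paired t-test. A minimal sketch of the statistic, using made-up numbers (the study's own analysis was done in SPSS, and these are not its data):

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic for matched samples (df = n - 1)."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical grade levels for 5 articles before and after AI rewriting
# (illustrative values only, chosen near the reported group means).
before = [12.9, 11.4, 13.6, 12.1, 12.9]
after = [6.8, 6.2, 6.9, 6.3, 6.6]
t = paired_t(before, after)  # large positive t -> consistent reduction
```

A large t with 4 degrees of freedom corresponds to a very small p-value, matching the pattern of result reported (p < 0.001).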

CONCLUSIONS

Our findings highlight a critical gap between the recommended and actual readability levels of breast cancer information shared on a popular social media platform. While some institutions are producing more accessible content, there is a pressing need for standardization and improvement across all sources. To address this issue, sources may consider leveraging AI technology as a potential tool for creating patient resources with appropriate readability levels.


Similar Articles

1. Readability analysis of breast cancer resources shared on X-implications for patient education and the potential of AI. Breast Cancer Res Treat. 2025 Aug 6. doi: 10.1007/s10549-025-07799-z.
2. Can Artificial Intelligence Improve the Readability of Patient Education Materials? Clin Orthop Relat Res. 2023 Nov 1;481(11):2260-2267. doi: 10.1097/CORR.0000000000002668. Epub 2023 Apr 28.
3. Readability analysis as a tool for evaluating English proficiency in first-year medical students. BMC Med Educ. 2025 Jul 1;25(1):945. doi: 10.1186/s12909-025-07348-8.
4. American Academy of Orthopaedic Surgeons OrthoInfo provides more readable information regarding rotator cuff injury than ChatGPT. J ISAKOS. 2025 Feb 12;12:100841. doi: 10.1016/j.jisako.2025.100841.
5. Evaluating the Readability and Quality of Online Patient Education Materials for Pediatric ACL Tears. J Pediatr Orthop. 2023 Oct 1;43(9):549-554. doi: 10.1097/BPO.0000000000002490. Epub 2023 Aug 7.
6. Enhancing the Readability of Online Patient Education Materials Using Large Language Models: Cross-Sectional Study. J Med Internet Res. 2025 Jun 4;27:e69955. doi: 10.2196/69955.
7. Readability of AI-Generated Patient Information Leaflets on Alzheimer's, Vascular Dementia, and Delirium. Cureus. 2025 Jun 6;17(6):e85463. doi: 10.7759/cureus.85463. eCollection 2025 Jun.
8. Readability and Quality of Online Information on Osteochondral Knee Injuries: An Objective Assessment. Cureus. 2025 May 29;17(5):e85014. doi: 10.7759/cureus.85014. eCollection 2025 May.
9. Eyes on the Text: Assessing Readability of Artificial Intelligence and Ophthalmologist Responses to Patient Surgery Queries. Ophthalmologica. 2025;248(3):149-159. doi: 10.1159/000544917. Epub 2025 Mar 10.
10. Readability of patient education materials for bariatric surgery. Surg Endosc. 2023 Aug;37(8):6519-6525. doi: 10.1007/s00464-023-10153-3. Epub 2023 Jun 5.
