
A Machine Learning Method for Automated Description and Workflow Analysis of First Trimester Ultrasound Scans.

Publication

IEEE Trans Med Imaging. 2023 May;42(5):1301-1313. doi: 10.1109/TMI.2022.3226274. Epub 2023 May 2.

Abstract

Obstetric ultrasound assessment of fetal anatomy in the first trimester of pregnancy is one of the less explored fields in obstetric sonography because of the paucity of guidelines on anatomical screening and the limited availability of data. This paper, for the first time, examines imaging proficiency and practices of first trimester ultrasound scanning through analysis of full-length ultrasound video scans. Findings from this study provide insights to inform the development of more effective user-machine interfaces and targeted assistive technologies, as well as improvements in workflow protocols for first trimester scanning. Specifically, this paper presents an automated framework to model operator clinical workflow from full-length routine first-trimester fetal ultrasound scan videos. The 2D+t convolutional neural network-based architecture proposed for video annotation incorporates transfer learning and spatio-temporal (2D+t) modelling to automatically partition an ultrasound video into semantically meaningful temporal segments based on the fetal anatomy detected in the video. The model achieves a cross-validation A1 accuracy of 96.10%, F1 = 0.95, precision = 0.94, and recall = 0.95. Automated semantic partitioning of unlabelled video scans (n = 250) achieves a high correlation with expert annotations (ρ = 0.95, p = 0.06). Clinical workflow patterns, operator skill, and its variability can be derived from the resulting representation using the detected anatomy labels, their order, and their distribution. It is shown that nuchal translucency (NT) is the toughest standard plane to acquire, and most operators struggle to localize high-quality frames. Furthermore, it is found that newly qualified operators spend 25.56% more time on key biometry tasks than experienced operators.

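The semantic partitioning step described in the abstract maps per-frame anatomy predictions to temporal segments. The paper does not publish its code here, so the following is only a minimal sketch of that idea: given a sequence of per-frame anatomy labels (the output of the hypothetical frame classifier), merge consecutive frames with the same label into `(label, start_frame, end_frame)` segments, from which per-task durations can then be derived. The label names are illustrative, not taken from the paper's label set.

```python
from itertools import groupby

def partition_segments(frame_labels):
    """Merge runs of identical per-frame anatomy labels into
    (label, start_frame, end_frame) segments, frames 0-indexed."""
    segments = []
    idx = 0
    for label, run in groupby(frame_labels):
        n = sum(1 for _ in run)  # length of this run of identical labels
        segments.append((label, idx, idx + n - 1))
        idx += n
    return segments

# Illustrative per-frame predictions for a short clip.
labels = ["background", "background", "NT", "NT", "NT", "CRL", "CRL", "background"]
print(partition_segments(labels))
# → [('background', 0, 1), ('NT', 2, 4), ('CRL', 5, 6), ('background', 7, 7)]
```

Segment durations (end − start + 1, divided by the frame rate) would then support the kind of workflow analysis reported in the abstract, such as comparing time spent on biometry tasks between newly qualified and experienced operators.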
