Suppr 超能文献



Automated location of orofacial landmarks to characterize airway morphology in anaesthesia via deep convolutional neural networks.

Affiliations

Basque Center for Applied Mathematics (BCAM) - Bilbao, Basque Country, Spain.

Basque Center for Applied Mathematics (BCAM) - Bilbao, Basque Country, Spain; IE University, School of Science and Technology - Madrid, Madrid, Spain.

Publication

Comput Methods Programs Biomed. 2023 Apr;232:107428. doi: 10.1016/j.cmpb.2023.107428. Epub 2023 Feb 25.

DOI: 10.1016/j.cmpb.2023.107428
PMID: 36870169
Abstract

BACKGROUND

A reliable anticipation of a difficult airway may notably enhance safety during anaesthesia. In current practice, clinicians use bedside screenings by manual measurements of patients' morphology.

OBJECTIVE

To develop and evaluate algorithms for the automated extraction of orofacial landmarks, which characterize airway morphology.

METHODS

We defined 27 frontal + 13 lateral landmarks. We collected n=317 pairs of pre-surgery photos from patients undergoing general anaesthesia (140 females, 177 males). As ground truth reference for supervised learning, landmarks were independently annotated by two anaesthesiologists. We trained two ad-hoc deep convolutional neural network architectures based on InceptionResNetV2 (IRNet) and MobileNetV2 (MNet), to predict simultaneously: (a) whether each landmark is visible or not (occluded, out of frame), (b) its 2D-coordinates (x,y). We implemented successive stages of transfer learning, combined with data augmentation. We added custom top layers on top of these networks, whose weights were fully tuned for our application. Performance in landmark extraction was evaluated by 10-fold cross-validation (CV) and compared against 5 state-of-the-art deformable models.
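
The abstract describes networks that predict, for each landmark, a visibility flag plus 2D coordinates. The authors' exact objective function is not given; the following is a minimal sketch of one plausible masked loss over such dual targets, where occluded landmarks contribute only a visibility term. All names and numbers here are illustrative assumptions, not the study's implementation or data.

```python
import numpy as np

def landmark_loss(vis_true, vis_pred, xy_true, xy_pred):
    """Sketch of a dual-target landmark loss.

    vis_*: shape (n_landmarks,), visibility in [0, 1].
    xy_*:  shape (n_landmarks, 2), normalized (x, y) coordinates.
    """
    eps = 1e-7
    vis_pred = np.clip(vis_pred, eps, 1 - eps)
    # Binary cross-entropy on the visibility flags.
    bce = -(vis_true * np.log(vis_pred) + (1 - vis_true) * np.log(1 - vis_pred))
    # Squared coordinate error, masked so occluded landmarks are ignored.
    sq = ((xy_true - xy_pred) ** 2).sum(axis=1) * vis_true
    return bce.mean() + sq.mean()

# Toy example with 3 landmarks; landmark 2 is occluded, so its (wrong)
# predicted coordinates do not contribute to the loss.
vis_true = np.array([1.0, 0.0, 1.0])
vis_pred = np.array([0.9, 0.1, 0.8])
xy_true = np.array([[0.2, 0.3], [0.0, 0.0], [0.5, 0.6]])
xy_pred = np.array([[0.25, 0.3], [0.4, 0.4], [0.5, 0.55]])
loss = landmark_loss(vis_true, vis_pred, xy_true, xy_pred)
```

In the study this kind of dual output sits on custom top layers added over InceptionResNetV2 or MobileNetV2 backbones, with the coordinate head trained only on annotated visible landmarks.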

RESULTS

With annotators' consensus as the 'gold standard', our IRNet-based network performed comparably to humans in the frontal view: median CV loss L=1.277·10, inter-quartile range (IQR) [1.001, 1.660]; versus median 1.360, IQR [1.172, 1.651], and median 1.352, IQR [1.172, 1.619], for each annotator against consensus, respectively. MNet yielded slightly worse results: median 1.471, IQR [1.139, 1.982]. In the lateral view, both networks attained performances statistically poorer than humans: median CV loss L=2.141·10, IQR [1.676, 2.915], and median 2.611, IQR [1.898, 3.535], respectively; versus median 1.507, IQR [1.188, 1.988], and median 1.442, IQR [1.147, 2.010] for both annotators. However, standardized effect sizes in CV loss were small: 0.0322 and 0.0235 (non-significant) for IRNet, 0.1431 and 0.1518 (p<0.05) for MNet; therefore quantitatively similar to humans. The best performing state-of-the-art model (a deformable regularized Supervised Descent Method, SDM) behaved comparably to our DCNNs in the frontal scenario, but markedly worse in the lateral view.
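
The results are summarized as medians with IQRs plus small standardized effect sizes, but the abstract does not state which effect-size estimator was used. As an illustration only, the sketch below computes a median, an IQR, and Cohen's d (one common standardized effect size) between two synthetic loss samples; the distributions and numbers are hypothetical, not the study's data.

```python
import numpy as np

def median_iqr(x):
    """Median and inter-quartile range, as reported in the abstract."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return med, (q1, q3)

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation (illustrative estimator)."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1)
                      + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

# Synthetic per-image CV losses for a model and a human annotator.
rng = np.random.default_rng(0)
model_loss = rng.normal(2.1, 0.6, 300)
human_loss = rng.normal(1.5, 0.5, 300)

med, iqr = median_iqr(model_loss)
d = cohens_d(model_loss, human_loss)
```

A small d (as the paper reports for IRNet) means the model's loss distribution largely overlaps the annotators', even when a rank test flags a statistically significant difference.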

CONCLUSIONS

We successfully trained two DCNN models for the recognition of 27 + 13 orofacial landmarks pertaining to the airway. Using transfer learning and data augmentation, they were able to generalize without overfitting, reaching expert-like performance in CV. Our IRNet-based methodology achieved satisfactory identification and location of landmarks, particularly in the frontal view, at the level of anaesthesiologists. In the lateral view its performance declined, although with a non-significant effect size. Independent authors have also reported lower lateral performance, as certain landmarks may not be clearly salient points, even for a trained human eye.


Similar Articles

1. Automated location of orofacial landmarks to characterize airway morphology in anaesthesia via deep convolutional neural networks.
Comput Methods Programs Biomed. 2023 Apr;232:107428. doi: 10.1016/j.cmpb.2023.107428. Epub 2023 Feb 25.
2. Reliable prediction of difficult airway for tracheal intubation from patient preoperative photographs by machine learning methods.
Comput Methods Programs Biomed. 2024 May;248:108118. doi: 10.1016/j.cmpb.2024.108118. Epub 2024 Mar 12.
3. Automated pectoral muscle identification on MLO-view mammograms: Comparison of deep neural network to conventional computer vision.
Med Phys. 2019 May;46(5):2103-2114. doi: 10.1002/mp.13451. Epub 2019 Mar 12.
4. [Automated cephalometric landmark identification and location based on convolutional neural network].
Zhonghua Kou Qiang Yi Xue Za Zhi. 2023 Dec 9;58(12):1249-1256. doi: 10.3760/cma.j.cn112144-20230829-00118.
5. Application of Deep Convolutional Neural Networks in the Diagnosis of Osteoporosis.
Sensors (Basel). 2022 Oct 26;22(21):8189. doi: 10.3390/s22218189.
6. Fully automated identification of cephalometric landmarks for upper airway assessment using cascaded convolutional neural networks.
Eur J Orthod. 2022 Jan 25;44(1):66-77. doi: 10.1093/ejo/cjab054.
7. Automated landmark identification for diagnosis of the deformity using a cascade convolutional neural network (FlatNet) on weight-bearing lateral radiographs of the foot.
Comput Biol Med. 2022 Sep;148:105914. doi: 10.1016/j.compbiomed.2022.105914. Epub 2022 Aug 7.
8. Accuracy of automated identification of lateral cephalometric landmarks using cascade convolutional neural networks on lateral cephalograms from nationwide multi-centres.
Orthod Craniofac Res. 2021 Dec;24 Suppl 2:59-67. doi: 10.1111/ocr.12493. Epub 2021 Jun 27.
9. [Deep learning-assisted construction of three-dimensional facial midsagittal plane].
Beijing Da Xue Xue Bao Yi Xue Ban. 2022 Feb 18;54(1):134-139. doi: 10.19723/j.issn.1671-167X.2022.01.021.
10. Deep learning prediction of sex on chest radiographs: a potential contributor to biased algorithms.
Emerg Radiol. 2022 Apr;29(2):365-370. doi: 10.1007/s10140-022-02019-3. Epub 2022 Jan 10.

Cited By

1. A Survey Study of the 3D Facial Landmark Detection Techniques Used as a Screening Tool for Diagnosis of the Obstructive Sleep Apnea Syndrome.
Adv Respir Med. 2024 Aug 14;92(4):318-328. doi: 10.3390/arm92040030.