Deep Neural Network-Based Visual Feedback System for Nasopharyngeal Swab Sampling.

Affiliations

Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea.

School of Mechanical Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea.

Publication Information

Sensors (Basel). 2023 Oct 13;23(20):8443. doi: 10.3390/s23208443.

Abstract

During the coronavirus disease 2019 (COVID-19) pandemic, robot-based systems for swab sampling were developed to reduce the burden on healthcare workers and their risk of infection. Teleoperated sampling systems are especially valued because they fundamentally prevent contact with suspected COVID-19 patients. However, the limited field of view of the installed cameras prevents the operator from recognizing the position and deformation of the swab inserted into the nasal cavity, which severely degrades operating performance. To overcome this limitation, this study proposes a visual feedback system that monitors and reconstructs the shape of a nasopharyngeal (NP) swab using augmented reality (AR). The sampling device contained three load cells that measured the interaction force applied to the swab, while the shape information was captured using a motion-tracking program. These datasets were used to train a one-dimensional convolutional neural network (1DCNN) model, which estimated the coordinates of three feature points of the swab in the 2D X-Y plane. Based on these points, a virtual shape of the swab, reflecting the curvature of the actual one, was reconstructed and overlaid on the visual display. The accuracy of the 1DCNN model was evaluated on a 2D plane under ten different bending conditions. The results demonstrate that the x-value of P0 shows an error under 0.590 mm, while those of P1 and P2 show a biased error of about -1.5 mm with constant standard deviations. For the y-values, the error of all feature points under positive bending remains uniformly under 1 mm, whereas the error under negative bending increases with the amount of deformation. Finally, experiments using a collaborative robot validated the system's ability to visualize the actual swab's position and deformation on camera images of 2D and 3D phantoms.
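The abstract describes a pipeline in which the 1DCNN predicts three feature points (P0, P1, P2) of the swab, and a smooth virtual curve through those points is then overlaid on the camera image. The paper does not publish its reconstruction code; as a minimal sketch, one plausible approach is a quadratic Lagrange interpolation through the three predicted points, sampled densely along the swab's axis. The function name and the example point coordinates below are assumptions for illustration only.

```python
def reconstruct_swab_shape(p0, p1, p2, n_samples=50):
    """Fit a quadratic curve through three feature points (x, y) in mm
    and sample it densely, approximating the bent swab's centerline
    for an AR overlay. Assumes x0 < x1 < x2 (distinct x-values)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2

    def quad(x):
        # Lagrange basis polynomials: the unique quadratic that
        # passes exactly through all three feature points.
        l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
        l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
        l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
        return y0 * l0 + y1 * l1 + y2 * l2

    # Evenly spaced samples from P0 to P2 along the x-axis.
    xs = [x0 + (x2 - x0) * i / (n_samples - 1) for i in range(n_samples)]
    return [(x, quad(x)) for x in xs]

# Hypothetical predicted points for a positively bent swab (mm).
curve = reconstruct_swab_shape((0.0, 0.0), (40.0, 3.0), (80.0, 10.0))
```

The resulting polyline can then be projected into the camera frame and drawn over the video feed; a quadratic is the simplest curve that captures a single-direction bend, which matches the positive/negative bending conditions evaluated in the paper.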

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ee9/10610820/b7c3cfb31dec/sensors-23-08443-g001.jpg
