Ruiz Austin J, Hernández Torres Sofia I, Snider Eric J
Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA.
J Imaging. 2025 Jul 5;11(7):222. doi: 10.3390/jimaging11070222.
Thoracic injuries account for a high percentage of combat casualty mortalities, with 80% of preventable deaths resulting from abdominal or thoracic hemorrhage. An effective method for detecting and triaging thoracic injuries is point-of-care ultrasound (POCUS), as it is an inexpensive, portable, noninvasive imaging method. POCUS image interpretation of pneumothorax (PTX) or hemothorax (HTX) injuries requires a skilled radiologist, who will likely not be available in austere situations where injury detection and triage are most critical. With the recent growth of artificial intelligence (AI) in healthcare, the hypothesis for this study is that deep learning (DL) models that classify images as showing HTX injury, PTX injury, or no injury can be developed to lower the skill threshold for POCUS diagnostics on the future battlefield. Three-class DL classification models were developed using a motion-mode ultrasound dataset captured in animal study experiments from more than 25 swine subjects. Cluster analysis was used to define the "population" based on brightness, contrast, and kurtosis properties. A MobileNetV3 DL model architecture was tuned across a variety of hyperparameters, and the resulting models were ultimately evaluated on images captured in real time. Different hyperparameter configurations were blind-tested: models trained on filtered data achieved a real-time accuracy of 89% to 96%, compared with 78% to 95% for models trained without filtering and optimization. The best model achieved a blind accuracy of 85% when inferencing on data collected in real time, surpassing previous YOLOv8 models by 17%. AI models suitable for high-performance, real-time thoracic injury determination can be developed, potentially addressing challenges in responding to emergency casualty situations and reducing the skill threshold for using and interpreting POCUS.