Artificial Intelligence in Medical Imaging, German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
Institute for Signal Processing, University of Lübeck, Schleswig-Holstein, Germany.
Methods Inf Med. 2024 May;63(1-02):11-20. doi: 10.1055/s-0044-1778694. Epub 2024 Jan 23.
In this paper, an artificial intelligence-based algorithm for predicting the optimal contrast medium dose for computed tomography (CT) angiography of the aorta is presented and evaluated in a clinical study. The prediction of contrast dose reduction is modeled as a classification problem, with image contrast as the main feature.
This classification is performed by random decision forests (RDF) and the k-nearest-neighbor (KNN) method. To select optimal parameter subsets, all possible combinations of the 22 clinical parameters (age, blood pressure, etc.) are considered, using the classification accuracy and precision of the KNN classifier and the RDF as quality criteria. Subsequently, the results are further optimized by feature transformation using regression neural networks (RNN). These are used both for a direct classification based on regressed Hounsfield units and as preprocessing for a subsequent KNN classification.
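The subset-selection procedure described above can be sketched as an exhaustive search over feature combinations, scoring each subset with both classifier types. This is a minimal illustration using scikit-learn stand-ins for the paper's RDF and KNN models; the cohort, the four feature names, and the binary dose-reduction label are synthetic assumptions, not the study's clinical data.

```python
# Exhaustive feature-subset search scored by cross-validated accuracy,
# using RandomForestClassifier and KNeighborsClassifier as stand-ins
# for the RDF and KNN models. All data below are synthetic.
from itertools import combinations

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Toy cohort: a few of the 22 clinical parameters (names assumed).
features = ["age", "height", "hemoglobin", "systolic_bp"]
X = rng.normal(size=(120, len(features)))
# Synthetic label: whether a reduced dose still yields sufficient
# image contrast (here driven by the age and hemoglobin columns).
y = (X[:, 0] + X[:, 2] + rng.normal(scale=0.5, size=120) > 0).astype(int)

best = {"subset": None, "accuracy": 0.0}
for k in range(1, len(features) + 1):
    for subset in combinations(range(len(features)), k):
        for clf in (RandomForestClassifier(n_estimators=50, random_state=0),
                    KNeighborsClassifier(n_neighbors=5)):
            acc = cross_val_score(clf, X[:, subset], y, cv=5).mean()
            if acc > best["accuracy"]:
                best = {"subset": [features[i] for i in subset],
                        "accuracy": acc}

print(best)
```

With the full 22 parameters the same loop runs over all 2^22 - 1 subsets, which is why the paper restricts attention to accuracy and precision as the two quality criteria rather than richer model diagnostics.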
In feature selection, an RDF model achieved the highest accuracy of 84.42% and a KNN model the best precision of 86.21%. The most important parameters include age, height, and hemoglobin. Feature transformation using an RNN considerably exceeded these values, reaching an accuracy of 90.00% and a precision of 97.62% with all 22 parameters as input. However, the feasibility of a parameter set in routine clinical practice must also be considered: some of the 22 parameters are not measured routinely and require an additional measurement time of 15 to 20 minutes per patient. Using the standard feature set available in clinical routine, the RNN achieved the best accuracy of 86.67% and precision of 93.18%.
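The regression-based feature transformation can be sketched as a two-stage pipeline: a neural regressor maps the clinical parameters to a Hounsfield-unit-like contrast value, and a KNN classifier then decides on that regressed value. The MLP regressor here stands in for the paper's RNN, and all data, the HU target, and the dose-reduction threshold are synthetic assumptions for illustration.

```python
# Regression-then-classification sketch: regress a synthetic HU-like
# contrast value from 22 synthetic clinical parameters, then run KNN
# on the regressed 1-D feature. Not the study's actual model or data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 22))           # 22 clinical parameters (synthetic)
hu = 250 + 40 * X[:, 0] - 30 * X[:, 1] + rng.normal(scale=5, size=200)
y = (hu > 250).astype(int)               # assumed dose-reduction criterion

# Stage 1: regress the HU-like value (MLP as a stand-in for the RNN).
reg = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(X[:150], hu[:150])
hu_pred = reg.predict(X).reshape(-1, 1)  # transformed 1-D feature

# Stage 2: KNN classification on the regressed feature.
knn = KNeighborsClassifier(n_neighbors=5).fit(hu_pred[:150], y[:150])
acc = knn.score(hu_pred[150:], y[150:])
print(f"hold-out accuracy on regressed HU feature: {acc:.2f}")
```

The design choice is that the regressor compresses the high-dimensional clinical input into a single physically meaningful axis (predicted image contrast), on which a simple neighborhood classifier is both effective and easy to interpret.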
We developed a reliable hybrid system that helps radiologists determine the optimal contrast dose for CT angiography based on patient-specific parameters.