Transplant Institute, Beth Israel Deaconess Medical Center, Boston, Massachusetts 02215, USA.
ASAIO J. 2011 Jul-Aug;57(4):300-9. doi: 10.1097/MAT.0b013e318222db30.
Predicting the outcome of kidney transplantation is important in optimizing transplantation parameters and modifying factors related to the recipient, donor, and transplant procedure. Because patients with end-stage renal disease (ESRD) secondary to lupus nephropathy are generally younger than typical ESRD patients and also appear to have inferior transplant outcomes, developing an outcome prediction model for this patient category has high clinical relevance. The goal of this study was to compare methods of building prediction models of kidney transplant outcome that could potentially be useful for clinical decision support. We applied three well-known data mining methods (classification trees, logistic regression, and artificial neural networks) to data describing recipients with systemic lupus erythematosus (SLE) in the US Renal Data System (USRDS) database. The 95% confidence interval (CI) of the area under the receiver-operating characteristic curve (AUC) was used to measure the discrimination ability of the prediction models. Two groups of predictors were selected to build the prediction models. Using input variables selected with Weka (an open source machine learning software package) supplemented with additional variables of known clinical relevance (38 total predictors), logistic regression performed the best overall (AUC: 0.74, 95% CI: 0.72-0.77); it was significantly better (p < 0.05) than the classification trees (AUC: 0.70, 95% CI: 0.67-0.72) but not significantly better (p = 0.218) than the artificial neural networks (AUC: 0.71, 95% CI: 0.69-0.73). The performance of the artificial neural networks was not significantly better than that of the classification trees (p = 0.693). Using the more parsimonious subset of variables (six variables), logistic regression (AUC: 0.73, 95% CI: 0.71-0.75) did not perform significantly better than either the classification tree (AUC: 0.70, 95% CI: 0.68-0.73) or the artificial neural network (AUC: 0.73, 95% CI: 0.70-0.75) models.
We generated several models predicting 3-year allograft survival in kidney transplant recipients with SLE that could potentially be used in practice. The performance of the logistic regression and classification tree models was not inferior to that of the more complex artificial neural networks. Such prediction models may be used in clinical practice to identify patients at risk.
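The comparison described above can be sketched in code: fit the three model families, score each by AUC, and attach bootstrap 95% confidence intervals. This is a minimal illustrative sketch on synthetic data, assuming scikit-learn; the feature count, hyperparameters, and bootstrap procedure are assumptions for illustration and do not reproduce the study's USRDS variables or its exact statistical comparison.

```python
# Hedged sketch: comparing a logistic regression, a classification tree, and a
# small neural network by AUC with bootstrap 95% CIs. Synthetic data stands in
# for the (restricted-access) USRDS cohort; all settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the six-variable parsimonious predictor set.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "classification tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "artificial neural network": MLPClassifier(
        hidden_layer_sizes=(8,), max_iter=2000, random_state=0
    ),
}

rng = np.random.default_rng(0)
for name, model in models.items():
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    aucs = []
    for _ in range(500):  # bootstrap resampling of the held-out set
        idx = rng.integers(0, len(y_te), len(y_te))
        if len(np.unique(y_te[idx])) < 2:
            continue  # skip resamples with a single class
        aucs.append(roc_auc_score(y_te[idx], prob[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    print(f"{name}: AUC={roc_auc_score(y_te, prob):.2f} "
          f"(95% CI {lo:.2f}-{hi:.2f})")
```

Overlapping bootstrap CIs, as in the abstract's results, would suggest no significant difference in discrimination between models, though a formal paired test (e.g. DeLong's) is the usual way to report the p-values quoted above.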