Department of Urology, School of Medicine, Fujita Health University, Toyoake, Japan.
Fujita Cancer Center, Fujita Health University, Toyoake, Japan.
Cancer Rep (Hoboken). 2023 Sep;6(9):e1861. doi: 10.1002/cnr2.1861. Epub 2023 Jul 14.
We recently reported the importance of deep learning (DL) applied to pelvic magnetic resonance imaging in predicting the degree of urinary incontinence (UI) following robot-assisted radical prostatectomy (RARP). However, the clinical utility of that model was limited, as its prediction accuracy was only approximately 70%.
To develop a more accurate prediction model, based on DL of intraoperative video images, that can inform patients about UI recovery after RARP.
The study cohort comprised 101 patients with localized prostate cancer who underwent RARP. Three snapshots of the pelvic cavity taken from intraoperative video recordings (before bladder neck incision, immediately after prostate removal, and after vesicourethral anastomosis) were evaluated, together with pre- and intraoperative parameters. We evaluated the DL model combined with either simple or ensemble machine learning (ML), analyzing the area under the receiver operating characteristic curve (AUC) along with sensitivity and specificity. Of the 101 patients, 64 demonstrated "early continence" (using 0 or 1 safety pad at 3 months post-RARP) and 37 demonstrated "late continence" (all others). Combining DL with simple ML using the intraoperative video snapshots plus clinicopathological parameters showed moderate performance (AUC, 0.683-0.749) in predicting early recovery from UI after surgery. Combining DL with an ensemble artificial neural network using the intraoperative video snapshots achieved the highest performance (AUC, 0.882; sensitivity, 92.2%; specificity, 78.4%; overall accuracy, 85.3%) in predicting early recovery from post-RARP incontinence, with similar results on internal validation. Adding clinicopathological parameters had no additive effect in any analysis using DL, ensemble learning, or simple ML.
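The abstract reports its classifier results as AUC, sensitivity, specificity, and overall accuracy. As a minimal sketch of how these metrics are computed for a binary "early continence" prediction (the patient data, scores, and 0.5 threshold below are illustrative assumptions, not the study's code or cohort):

```python
# Hedged sketch (not the authors' implementation): scoring a binary
# "early continence" classifier with the metrics named in the abstract.

def confusion_counts(y_true, y_pred):
    """Return (tp, tn, fp, fn) for binary labels (1 = early continence)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sensitivity_specificity_accuracy(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y_true)

def roc_auc(y_true, scores):
    """Rank-based AUC: probability that a random positive case is scored
    higher than a random negative case (ties count as half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative model scores for 5 hypothetical patients (1 = early continence).
y_true = [1, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2]
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # assumed 0.5 cutoff

sens, spec, acc = sensitivity_specificity_accuracy(y_true, y_pred)
auc = roc_auc(y_true, scores)
```

The rank-based AUC is threshold-free, which is why the abstract can report it alongside a single operating point (sensitivity/specificity) chosen on the ROC curve.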
Our findings suggest that a DL algorithm applied to intraoperative video imaging is a reliable method for informing patients about their expected degree of recovery from UI after RARP, although it remains unclear whether the method is reproducible for predicting long-term UI and pad-free continence.