Steiermärkische Krankenanstaltengesellschaft m.b.H. (KAGes), Billrothgasse 18a, 8010, Graz, Austria.
Institute of Neural Engineering, Graz University of Technology, Stremayrgasse 16/IV, 8010, Graz, Austria.
Sci Rep. 2024 Nov 6;14(1):26972. doi: 10.1038/s41598-024-76424-8.
Procedural coding presents a taxing challenge for clinicians. However, recent advances in natural language processing offer a promising avenue for developing applications that assist clinicians, thereby alleviating their administrative burdens. This study seeks to create an application capable of predicting procedure codes by analysing clinicians' operative notes, aiming to streamline their workflow and enhance efficiency. We adapted an existing and a natively pretrained German medical BERT model to this downstream task in a secondary-use scenario, utilizing already coded surgery notes to model the coding procedure as a multi-label classification task. For comparison with the transformer-based architectures, we leveraged the non-contextual model fastText, a convolutional neural network, a support vector machine and logistic regression to analyse the achievable coding performance. About 350,000 notes were used for model adaptation. By considering the top five suggested procedure codes from medBERT.de, surgeryBERT.at, fastText, a convolutional neural network, a support vector machine and a logistic regression, the mean average precision achieved was 0.880, 0.867, 0.870, 0.851, 0.870 and 0.805, respectively. Support vector machines performed better for surgery reports with a sequence length greater than 512, achieving a mean average precision of 0.872 in comparison to 0.840 for fastText, 0.837 for medBERT.de and 0.820 for surgeryBERT.at. A prototypical front-end application for coding support was additionally implemented. The problem of predicting procedure codes from a given operative report can be successfully modelled as a multi-label classification task, with promising performance. Support vector machines, as a classical machine learning method, outperformed the non-contextual fastText approach. FastText, with less demanding hardware requirements, reached performance similar to that of the BERT-based models and proved more suitable for explaining the predictions efficiently.
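The following is a minimal sketch, not the authors' implementation, of how a German medical BERT model can be set up as a multi-label classifier over procedure codes using the Hugging Face transformers library. The checkpoint name "GerMedBERT/medbert-512", the label count NUM_CODES, and the sample operative note are assumptions for illustration; in practice the classification head would be fine-tuned on the coded surgery notes before its top-5 suggestions are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint for medBERT.de and a placeholder number of procedure codes.
MODEL_NAME = "GerMedBERT/medbert-512"
NUM_CODES = 2000

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=NUM_CODES,
    problem_type="multi_label_classification",  # sigmoid outputs, BCE-with-logits loss
)

# Hypothetical operative note; real notes come from the coded surgery reports.
note = "Offene Reposition und Plattenosteosynthese der distalen Radiusfraktur links."
inputs = tokenizer(note, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, NUM_CODES)
probs = torch.sigmoid(logits).squeeze(0)     # independent probability per code

# Suggest the five highest-scoring procedure codes, mirroring the top-5 evaluation.
top5 = torch.topk(probs, k=5)
print(top5.indices.tolist(), top5.values.tolist())
```

Because BERT-based models truncate inputs at 512 tokens, long reports lose context, which is consistent with the observation that support vector machines performed better on reports exceeding that length.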
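To make the reported scores concrete, the sketch below shows one common definition of mean average precision over the top five suggestions (MAP@5) for multi-label code prediction; the paper may use a slightly different variant, and the gold code sets and ranked suggestions here are toy values.

```python
import numpy as np

def average_precision_at_k(true_codes, ranked_codes, k=5):
    """AP@k for one report: precision at each rank where a correct code
    appears, averaged over min(|true_codes|, k)."""
    hits, score = 0, 0.0
    for rank, code in enumerate(ranked_codes[:k], start=1):
        if code in true_codes:
            hits += 1
            score += hits / rank
    return score / min(len(true_codes), k) if true_codes else 0.0

def mean_average_precision_at_k(all_true, all_ranked, k=5):
    """Mean of AP@k over all reports in the evaluation set."""
    return float(np.mean([
        average_precision_at_k(t, r, k) for t, r in zip(all_true, all_ranked)
    ]))

# Toy example: two reports with gold procedure codes and model-ranked suggestions.
gold = [{"5-790", "5-794"}, {"5-820"}]
ranked = [
    ["5-790", "5-811", "5-794", "5-900", "5-786"],
    ["5-829", "5-820", "5-812", "5-800", "5-986"],
]
print(mean_average_precision_at_k(gold, ranked, k=5))
```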