Sachpekidis Christos, Schepers Robin, Marti Monika, Kopp-Schneider Annette, Alberts Ian, Keramida Georgia, Afshar-Oromieh Ali, Rominger Axel
Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, 3010 Bern, Switzerland.
Department of Biostatistics, German Cancer Research Center, 69120 Heidelberg, Germany.
Diagnostics (Basel). 2020 Sep 17;10(9):709. doi: 10.3390/diagnostics10090709.
The aim of the present study is to evaluate the intra- and inter-observer agreement in assessing renal function by means of 99mTc-MAG3 diuretic renography. One hundred and twenty adults were enrolled in the study. One experienced and one junior radiographer processed the renograms twice by assigning manual and semi-automated regions of interest. The differential renal function (DRF, %), the time to maximum counts for the right and left kidney (Tmax-R and Tmax-L, min) and the time to half-peak counts (T1/2, min) were calculated. Bland-Altman analysis (bias ± 95% limits of agreement), Lin's concordance correlation coefficient and the weighted Fleiss' kappa coefficient were used to assess agreement. Based on the Bland-Altman analysis, the intra-observer repeatability results for the experienced radiographer using the manual and the semi-automated techniques were 0.2 ± 2.6% and 0.3 ± 6.4% (DRF), -0.01 ± 0.24 and 0.00 ± 0.34 min (Tmax-R), and 0.00 ± 0.26 and 0.00 ± 0.33 min (Tmax-L), respectively. For the junior radiographer, the respective results were 0.5 ± 5.0% and 0.8 ± 9.4% (DRF), 0.00 ± 0.44 and 0.01 ± 0.28 min (Tmax-R), and 0.01 ± 0.28 and -0.02 ± 0.44 min (Tmax-L). The inter-observer repeatability for the manual method was 0.6 ± 5.0% (DRF), -0.10 ± 0.42 min (Tmax-R) and -0.05 ± 0.38 min (Tmax-L), and for the semi-automated method -0.2 ± 9.1% (DRF), 0.00 ± 0.31 min (Tmax-R) and -0.05 ± 0.40 min (Tmax-L). The weighted Fleiss' kappa coefficient for the T1/2 assessments ranged between 0.85 and 0.97 for both intra- and inter-observer repeatability with both methods. These findings suggest very good repeatability in DRF assessment with the manual method, especially for the experienced observer, but poorer repeatability with the semi-automated approach. The calculation of Tmax was also operator-dependent. We conclude that reader experience is important in the calculation of renal parameters, and we therefore encourage reader training in renal scintigraphy. Moreover, the manual tool seems to perform better than the semi-automated tool; thus, we encourage cautious use of automated tools and adjunct validation by manual methods where possible.
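The abstract names two continuous agreement measures: Bland-Altman analysis (bias ± 95% limits of agreement) and Lin's concordance correlation coefficient. The sketch below is a minimal, illustrative Python implementation of these two statistics; it is not the authors' analysis code, and the DRF values it generates are hypothetical, not study data.

```python
# Minimal sketch of the continuous agreement measures named in the abstract:
# Bland-Altman bias with 95% limits of agreement, and Lin's concordance
# correlation coefficient (CCC). Example data are simulated, not study data.
import numpy as np


def bland_altman(x: np.ndarray, y: np.ndarray):
    """Return (bias, lower LoA, upper LoA) for paired measurements x and y."""
    diff = x - y
    bias = diff.mean()
    # 95% limits of agreement: bias +/- 1.96 * SD of the paired differences
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width


def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient for paired measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=0), y.var(ddof=0)
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical paired DRF (%) readings: first vs. second processing run.
    drf_run1 = rng.normal(50, 8, size=120)
    drf_run2 = drf_run1 + rng.normal(0.2, 1.3, size=120)  # small bias + noise
    bias, lo, hi = bland_altman(drf_run2, drf_run1)
    print(f"Bland-Altman: bias = {bias:.2f}%, 95% LoA = [{lo:.2f}, {hi:.2f}]%")
    print(f"Lin's CCC: {lins_ccc(drf_run1, drf_run2):.3f}")
```

In this sketch, repeatability is summarized the same way the abstract reports it: a mean difference (bias) and the interval bias ± 1.96 SD of the differences, within which roughly 95% of paired differences are expected to fall.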