
Intra- and interobserver agreement of proposed objective transvaginal ultrasound image-quality scoring system for use in artificial intelligence algorithm development.

Author Information

Deslandes A, Avery J C, Chen H-T, Leonardi M, Knox S, Lo G, O'Hara R, Condous G, Hull M L

Affiliations

Robinson Research Institute, University of Adelaide, Adelaide, Australia.

School of Computer and Mathematical Sciences, University of Adelaide, Adelaide, Australia.

Publication Information

Ultrasound Obstet Gynecol. 2025 Mar;65(3):364-371. doi: 10.1002/uog.29178. Epub 2025 Jan 24.

Abstract

OBJECTIVES

The development of valuable artificial intelligence (AI) tools to assist with ultrasound diagnosis depends on algorithms developed using high-quality data. This study aimed to test the intra- and interobserver agreement of a proposed image-quality scoring system to quantify the quality of gynecological transvaginal ultrasound (TVS) images, which could be used in clinical practice and AI tool development.

METHODS

A proposed scoring system to quantify TVS image quality was created following a review of the literature. This system involved a score of 1-4 (2 = poor, 3 = suboptimal and 4 = optimal image quality) assigned by a rater for individual ultrasound images. If the image was deemed inaccurate, it was assigned a score of 1, corresponding to 'reject'. Six professionals, including two radiologists, two sonographers and two sonologists, reviewed 150 images (50 images of the uterus and 100 images of the ovaries) obtained from 50 women, assigning each image a score of 1-4. The review of all images was repeated a second time by each rater after a period of at least 1 week. Mean scores were calculated for each rater. Overall interobserver agreement was assessed using intraclass correlation coefficient (ICC), and interobserver agreement between paired professionals and intraobserver agreement for all professionals were assessed using weighted Cohen's kappa and ICC.
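As an illustration of the agreement statistics named above, the sketch below computes a weighted Cohen's kappa and an ICC for one rater's two reading sessions of the same images. The quadratic kappa weighting, the default two-way ICC models, and the example scores are assumptions for illustration only; the abstract does not specify these analysis choices.

```python
# Minimal sketch of the agreement statistics described in the Methods,
# assuming quadratic kappa weights (not stated in the abstract).
# Requires scikit-learn, pandas and pingouin.
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# Hypothetical 1-4 quality scores from one rater's two reading sessions
# of the same eight images (intraobserver agreement).
session1 = [4, 3, 2, 4, 1, 3, 3, 2]
session2 = [4, 3, 3, 4, 2, 3, 2, 2]

# Weighted Cohen's kappa penalizes larger score disagreements more heavily.
kappa = cohen_kappa_score(session1, session2, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.3f}")

# ICC expects long-format data: one row per (image, reading) pair.
ratings_long = pd.DataFrame({
    "image": list(range(8)) * 2,
    "rater": ["read1"] * 8 + ["read2"] * 8,
    "score": session1 + session2,
})
icc = pg.intraclass_corr(data=ratings_long, targets="image",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```

Note that pingouin reports all six standard ICC forms (ICC1 through ICC3k); the row matching the study design would need to be selected, and the abstract does not state which form the authors used.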

RESULTS

Poor levels of interobserver agreement were obtained between the six raters for all 150 images (ICC, 0.480 (95% CI, 0.363-0.586)), as well as for assessment of the uterine images only (ICC, 0.359 (95% CI, 0.204-0.523)). Moderate agreement was achieved for the ovarian images (ICC, 0.531 (95% CI, 0.417-0.636)). Agreement between the paired sonographers and sonologists was poor for all images (ICC, 0.336 (95% CI, -0.078 to 0.619) and 0.425 (95% CI, 0.014-0.665), respectively), as well as when images were grouped into uterine images (ICC, 0.253 (95% CI, -0.097 to 0.577) and 0.299 (95% CI, -0.094 to 0.606), respectively) and ovarian images (ICC, 0.400 (95% CI, -0.043 to 0.669) and 0.469 (95% CI, 0.088-0.689), respectively). Agreement between the paired radiologists was moderate for all images (ICC, 0.600 (95% CI, 0.487-0.693)) and for their assessment of uterine images (ICC, 0.538 (95% CI, 0.311-0.707)) and ovarian images (ICC, 0.621 (95% CI, 0.483-0.728)). Weak-to-moderate intraobserver agreement was seen for each of the raters with weighted Cohen's kappa ranging from 0.533 to 0.718 for all images and from 0.467 to 0.751 for ovarian images. Similarly, for all raters, the ICC indicated moderate-to-good intraobserver agreement for all images overall (ICC ranged from 0.636 to 0.825) and for ovarian images (ICC ranged from 0.596 to 0.862). Slightly better intraobserver agreement was seen for uterine images, with weighted Cohen's kappa ranging from 0.568 to 0.808 indicating weak-to-strong agreement, and ICC ranging from 0.546 to 0.893 indicating moderate-to-good agreement. All measures were statistically significant (P < 0.001).
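The qualitative labels in these results (poor, moderate, good) are consistent with the widely cited Koo & Li (2016) ICC benchmarks; the sketch below encodes those cut-offs, on the assumption (not stated in the abstract) that the authors applied them.

```python
# Maps an ICC point estimate to the qualitative agreement bands used above,
# following the Koo & Li (2016) cut-offs; assumed, not confirmed, to be the
# benchmark applied in the study.
def interpret_icc(icc: float) -> str:
    """Classify an ICC estimate into a qualitative agreement band."""
    if icc < 0.50:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc < 0.90:
        return "good"
    return "excellent"

# Reproduces the labels reported above: poor, moderate, moderate, good.
for value in (0.480, 0.531, 0.600, 0.825):
    print(value, "->", interpret_icc(value))
```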

CONCLUSION

The proposed image quality scoring system was shown to have poor-to-moderate interobserver agreement and mostly weak-to-moderate levels of intraobserver agreement. More refinement of the scoring system may be needed to improve agreement, although it remains unclear whether quantification of image quality can be achieved, given the highly subjective nature of ultrasound interpretation. Although some AI systems can tolerate labeling noise, most will favor clean (high-quality) data. As such, innovative data-labeling strategies are needed. © 2025 The Author(s). Ultrasound in Obstetrics & Gynecology published by John Wiley & Sons Ltd on behalf of International Society of Ultrasound in Obstetrics and Gynecology.


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8719/11872342/fe930cee911e/UOG-65-364-g002.jpg
