Karen Drukker, Karla Horsch, Maryellen L Giger
Department of Radiology MC2026, University of Chicago, IL 60637, USA.
Acad Radiol. 2005 Aug;12(8):970-9. doi: 10.1016/j.acra.2005.04.014.
The purpose of this study is to investigate the use of computer-extracted features of lesions imaged by means of two modalities, mammography and breast ultrasound, in the computerized classification of breast lesions.
We performed computerized analysis on a database of 97 patients with a total of 100 lesions (40 malignant, 40 benign solid, and 20 cystic). Mammograms and ultrasound images were available for all lesions, with an average of three mammographic images and two ultrasound images per lesion. Based on seed points indicated by a radiologist, the computer automatically segmented each lesion from the parenchymal background and extracted a set of characteristic features. For each feature, its value averaged over all images of a given lesion was input to a Bayesian neural network for classification. We also investigated different approaches for combining image-based features in this by-lesion analysis, considering the mean, maximum, and minimum feature values over all images representing a lesion. Performance was evaluated with a leave-one-lesion-out approach, using image features from mammography alone (two to five features), ultrasound alone (three to four features), and a combination of features from both modalities (three to five features total).
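The per-lesion aggregation described above can be sketched as follows. This is a minimal illustration, not the authors' code: the data layout, feature values, and function name are assumptions.

```python
from statistics import mean

def combine_images(per_image, how="mean"):
    """Collapse per-image feature vectors into one lesion-level feature vector.

    per_image: list of feature vectors, one per image of the same lesion.
    how: "mean", "max", or "min" -- the three combination rules
         considered in the study.
    """
    ops = {"mean": mean, "max": max, "min": min}
    op = ops[how]
    # Transpose so each feature is aggregated across all images of the lesion.
    return [op(values) for values in zip(*per_image)]

# Hypothetical lesion with three mammographic views and two illustrative
# features (the feature semantics here are assumptions, not from the paper):
views = [[0.42, 1.8], [0.45, 2.1], [0.40, 1.9]]
print(combine_images(views, "mean"))  # one lesion-level vector
print(combine_images(views, "max"))
```

In the study's setup, the computer could then select, per feature, which of the three rules to use before classification.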
For the classification task of distinguishing cancer from other abnormalities in a lesion-based analysis using a single modality, areas under the receiver operating characteristic curve (A(z) values) increased significantly when the computer selected the manner (mean, minimum, or maximum) in which image-based features were combined into lesion-based features. The highest performance was obtained for the lesion-based analysis with automated feature selection over the mean, maximum, and minimum values of features from both modalities (four features in total): an A(z) value of 0.92 for the task of distinguishing cancer, a statistically significant improvement over that achieved with features from either mammography or ultrasound alone.
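The A(z) index reported above is the area under the ROC curve. A minimal empirical estimate is the Mann-Whitney statistic, sketched below; note this is only an illustration of the metric, as studies of this kind typically fit a binormal ROC model rather than use the raw empirical area.

```python
def auc(scores_pos, scores_neg):
    """Empirical area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen positive case receives a higher
    classifier score than a randomly chosen negative case (ties count half).
    """
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier outputs for three malignant and three benign lesions:
print(auc([0.9, 0.8, 0.7], [0.4, 0.6, 0.75]))
```

An area of 0.5 corresponds to chance performance and 1.0 to perfect separation, which is why the reported A(z) of 0.92 indicates strong discrimination.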
Computerized classification of cancer significantly improved when lesion features from both modalities were combined. Classification performance depended on specific methods for combining features from multiple images per lesion. These results are encouraging and warrant further exploration of computerized methods for multimodality imaging.