Berkeley Institute of Data Science, University of California, Berkeley, CA, USA; Lawrence Berkeley National Laboratory, Berkeley, CA, USA; Departamento de Engenharia de Teleinformática, Universidade Federal do Ceará, Fortaleza, CE, Brazil; Instituto Federal de Educação, Ciência e Tecnologia do Ceará, Maracanaú, CE, Brazil.
Instituto Federal de Educação, Ciência e Tecnologia do Ceará, Maracanaú, CE, Brazil.
Comput Methods Programs Biomed. 2019 Dec;182:105053. doi: 10.1016/j.cmpb.2019.105053. Epub 2019 Aug 26.
Saliency refers to the visual perception quality that makes objects in a scene stand out from their surroundings and attract attention. While computational saliency models can simulate the expert's visual attention, there is little evidence about how these models perform when used to predict the cytopathologist's eye fixations. Saliency models may be the key to enabling fast object detection on large Pap smear slides under real conditions of noise, artifacts, and cell occlusions. This paper describes how our computational schemes retrieve regions of interest (ROI) of clinical relevance using visual attention models. We also compare the performance of different computed saliency models as part of cell screening tasks, aiming to design a computer-aided diagnosis system that supports cytopathologists.
We record eye fixation maps from cytopathologists at work and compare them with 13 different saliency prediction algorithms, including deep learning models. We develop cell-specific convolutional neural networks (CNN) to investigate the impact of bottom-up and top-down factors on saliency prediction from real routine exams. By combining the eye-tracking data from pathologists with computed saliency models, we assess each algorithm's reliability in identifying clinically relevant cells.
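As a minimal sketch of how a computed saliency map can be scored against a recorded fixation map (this is an illustrative metric, not the authors' code; array names and shapes are assumptions), the Normalized Scanpath Saliency (NSS) averages the z-scored saliency values at the pixels the cytopathologist actually fixated:

```python
# Illustrative comparison of a saliency map with an eye-fixation map (assumed data layout).
import numpy as np

def normalized_scanpath_saliency(saliency_map: np.ndarray,
                                 fixation_map: np.ndarray) -> float:
    """NSS: mean of the z-scored saliency values at the fixated pixels.

    saliency_map : 2-D float array (any range).
    fixation_map : 2-D binary array, 1 at pixels fixated by the observer.
    """
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    fixated = fixation_map.astype(bool)
    if not fixated.any():
        return float("nan")  # no fixations recorded for this image
    return float(s[fixated].mean())

# Usage with synthetic data: a random saliency map and 20 random "fixations".
rng = np.random.default_rng(0)
sal = rng.random((480, 640))
fix = np.zeros_like(sal)
fix[rng.integers(0, 480, 20), rng.integers(0, 640, 20)] = 1
print(f"NSS = {normalized_scanpath_saliency(sal, fix):.3f}")
```

Higher NSS indicates that the model places more saliency mass where the expert actually looked; similar agreement metrics (e.g., AUC variants) are commonly used for this kind of evaluation.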
The proposed cell-specific CNN model outperforms all other saliency prediction methods, particularly regarding the number of false positives. Our algorithm also detects the most clinically relevant cells, which rank among the top three salient regions, with accuracy above 98% for all diseases except carcinoma (87%). Bottom-up methods performed satisfactorily, with saliency maps that enabled ROI detection above 75% for carcinoma and 86% for the other pathologies.
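The "top three salient regions" criterion can be illustrated with a short sketch (an assumed implementation, not the authors' pipeline; the window size, k=3, and function names are illustrative): rank the k strongest peaks of the saliency map and check whether an annotated cell centroid falls inside any of them.

```python
# Illustrative top-k ROI ranking on a saliency map (assumed parameters and names).
import numpy as np

def top_k_rois(saliency_map: np.ndarray, k: int = 3, half_win: int = 40):
    """Greedy peak picking: take the global maximum, suppress its neighbourhood,
    repeat k times. Returns (row, col) ROI centres, most salient first."""
    s = saliency_map.copy()
    centres = []
    for _ in range(k):
        r, c = np.unravel_index(np.argmax(s), s.shape)
        centres.append((int(r), int(c)))
        s[max(0, r - half_win):r + half_win + 1,
          max(0, c - half_win):c + half_win + 1] = -np.inf  # non-maximum suppression
    return centres

def cell_is_detected(cell_centroid, rois, half_win: int = 40) -> bool:
    """True if the annotated cell centroid lies inside any of the ranked ROIs."""
    cr, cc = cell_centroid
    return any(abs(cr - r) <= half_win and abs(cc - c) <= half_win
               for r, c in rois)

# Usage with synthetic data.
rng = np.random.default_rng(1)
sal = rng.random((480, 640))
rois = top_k_rois(sal, k=3)
print(rois, cell_is_detected((rois[0][0] + 5, rois[0][1] - 3), rois))
```

Restricting downstream analysis to these ranked windows is the data reduction idea summarized in the conclusions: only a handful of candidate regions per field need to be passed to the classifier or to the cytopathologist.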
ROI extraction using our saliency prediction methods enables ranking the most clinically relevant areas within the image, a viable data reduction strategy to guide automatic analyses of Pap smear slides. Top-down factors increase the accuracy of the estimated saliency maps for cell images, while bottom-up algorithms proved useful for predicting the cytopathologist's eye fixations, depending on parameters such as the numbers of false positives and false negatives. Our contributions are: a comparison of 13 state-of-the-art saliency models against cytopathologists' visual attention, and a method that associates the most conspicuous regions with clinically relevant cells.