

Automatic registration of multi-modal microscopy images for integrative analysis of prostate tissue sections.

Author information

Lippolis Giuseppe, Edsjö Anders, Helczynski Leszek, Bjartell Anders, Overgaard Niels Chr

Affiliations

Centre for Mathematical Sciences, Lund University, Lund, Sweden.

Publication information

BMC Cancer. 2013 Sep 5;13:408. doi: 10.1186/1471-2407-13-408.

Abstract

BACKGROUND

Prostate cancer is one of the leading causes of cancer-related deaths. For diagnosis, prediction of disease outcome, and assessment of potential new biomarkers, pathologists and researchers routinely analyze histological samples. Morphological and molecular information may be integrated by aligning microscopic histological images in a multiplex fashion. This process is usually time-consuming and results in intra- and inter-user variability. The aim of this study is to investigate the feasibility of using modern image analysis methods for automated alignment of microscopic images from differently stained adjacent paraffin sections of prostatic tissue specimens.

METHODS

Tissue samples, obtained from biopsy or radical prostatectomy, were sectioned and stained with either hematoxylin & eosin (H&E), immunohistochemistry for p63 and AMACR, or time-resolved fluorescence (TRF) for the androgen receptor (AR). Image pairs were aligned allowing for translation, rotation and scaling. The registration was performed automatically by first detecting landmarks in both images using the scale-invariant feature transform (SIFT), then finding point correspondences with the well-known RANSAC protocol, and finally computing the alignment with a Procrustes fit. The registration results were evaluated using both visual and quantitative criteria as defined in the text.
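The registration step lends itself to a short illustration. Below is a minimal sketch, assuming OpenCV and NumPy, of how SIFT landmark detection, RANSAC-based correspondence filtering, and a least-squares similarity fit (translation, rotation, uniform scaling) can be chained. The helper name register_pair, the ratio-test and RANSAC thresholds, and the use of cv2.estimateAffinePartial2D in place of an explicit Procrustes fit are assumptions for illustration only, not the authors' implementation.

# Minimal sketch of a SIFT + RANSAC + similarity-fit registration pipeline.
# Thresholds and function names are illustrative assumptions, not the paper's settings.
import cv2
import numpy as np

def register_pair(fixed_gray, moving_gray, ratio=0.75, ransac_thresh=5.0):
    """Estimate a similarity transform (translation, rotation, uniform scaling)
    that maps moving_gray onto fixed_gray."""
    sift = cv2.SIFT_create()
    kp_fixed, des_fixed = sift.detectAndCompute(fixed_gray, None)
    kp_moving, des_moving = sift.detectAndCompute(moving_gray, None)

    # Nearest-neighbour matching of SIFT descriptors with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des_moving, des_fixed, k=2)
    good = [m for m, n in (p for p in raw if len(p) == 2)
            if m.distance < ratio * n.distance]

    src = np.float32([kp_moving[m.queryIdx].pt for m in good])
    dst = np.float32([kp_fixed[m.trainIdx].pt for m in good])

    # RANSAC discards outlier correspondences; the returned 2x3 matrix is a
    # least-squares similarity fit (rotation + uniform scale + translation) to
    # the inliers, standing in here for the paper's Procrustes step.
    M, inliers = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=ransac_thresh)
    return M, inliers

# Usage: warp the moving image into the fixed image's coordinate frame.
# fixed = cv2.imread("he_section.png", cv2.IMREAD_GRAYSCALE)        # hypothetical files
# moving = cv2.imread("p63_amacr_section.png", cv2.IMREAD_GRAYSCALE)
# M, _ = register_pair(fixed, moving)
# aligned = cv2.warpAffine(moving, M, (fixed.shape[1], fixed.shape[0]))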

RESULTS

Three experiments were carried out. First, images of consecutive tissue sections stained with H&E and p63/AMACR were successfully aligned in 85 of 88 cases (96.6%). The failures occurred in 3 out of 13 cores with highly aggressive cancer (Gleason score ≥ 8). Second, TRF and H&E image pairs were aligned correctly in 103 out of 106 cases (97%). The third experiment considered the alignment of image pairs with the same staining (H&E) coming from a stack of 4 sections. The success rate for alignment dropped from 93.8% for adjacent sections to 22% for the sections furthest apart.

CONCLUSIONS

The proposed method is both reliable and fast, and therefore well suited for automatic segmentation and analysis of specific areas of interest, combining morphological information with protein expression data from three consecutive tissue sections. Finally, the performance of the algorithm seems to be largely unaffected by the Gleason grade of the prostate tissue samples examined, at least up to Gleason score 7.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3511/3847133/e90449402dc6/1471-2407-13-408-1.jpg
