Sanders Jonathan, Singh Anil, Sterne Gabriella, Ye Bing, Zhou Jie
Department of Computer Science, Northern Illinois University, DeKalb, IL, 60115, USA.
Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI, 48109, USA.
BMC Bioinformatics. 2015 May 28;16:177. doi: 10.1186/s12859-015-0616-y.
The subcellular distribution of synapses is fundamentally important for the assembly, function, and plasticity of the nervous system. Automated and effective quantification tools are a prerequisite for large-scale studies of the molecular mechanisms of subcellular synapse distribution. Common practice for synapse quantification in neuroscience labs remains largely manual or semi-manual, mainly because automatic quantification of synapses poses computational challenges, including large data volume, high dimensionality, and staining artifacts. In confocal imaging, the optical resolution limit and the disparity between xy and z resolution also require special consideration to achieve the necessary robustness.
This paper presents a novel algorithm for learning-guided automatic recognition and quantification of synaptic markers in 3D confocal images. The method built a discriminative model based on 3D feature descriptors to detect the centers of synaptic markers, and used adaptive thresholding and multi-channel co-localization to improve robustness. The detected centers then guided the splitting of synapse clumps, which further improved the precision and recall of the detected synapses. The algorithm was tested on lobula plate tangential cells (LPTCs) in the brain of Drosophila melanogaster, using GABAergic synaptic markers on both axon terminals and dendrites.
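To make these processing steps concrete, the sketch below illustrates adaptive (local-mean) thresholding of a 3D stack, detection of candidate marker centers, and a simple co-localization check against a second channel. This is a minimal illustration assuming a Python/NumPy/SciPy environment; the function name, neighborhood size, and cutoffs are hypothetical and not the authors' implementation.

```python
# Illustrative sketch (not the published method): adaptive thresholding,
# 3D marker-center detection, and multi-channel co-localization filtering.
import numpy as np
from scipy import ndimage as ndi

def detect_marker_centers(synapse_ch, neuron_ch, block=15, offset=0.0,
                          min_voxels=5, coloc_quantile=0.5):
    """Return candidate synaptic-marker centers (z, y, x) in a 3D stack."""
    # Adaptive threshold: compare each voxel to its local neighborhood mean,
    # which tolerates uneven staining intensity across the volume.
    local_mean = ndi.uniform_filter(synapse_ch.astype(float), size=block)
    mask = synapse_ch > (local_mean + offset)

    # Label connected components and discard tiny specks (staining artifacts).
    labels, n = ndi.label(mask)
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_voxels) + 1

    # Use component centers of mass as marker-center candidates.
    centers = ndi.center_of_mass(synapse_ch, labels, index=keep)

    # Co-localization: keep only centers that fall on voxels bright enough
    # in the neuron (membrane) channel.
    cutoff = np.quantile(neuron_ch, coloc_quantile)
    return [c for c in centers
            if neuron_ch[tuple(np.round(c).astype(int))] >= cutoff]
```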
The presented method was able to overcome staining artifacts and the fuzzy boundaries of synapse clumps in 3D confocal images, and to automatically quantify synaptic markers in complex neurons such as LPTCs. Comparison with existing tools for automatic 3D synapse quantification also demonstrated the effectiveness of the proposed method.