IEEE Trans Med Imaging. 2015 May;34(5):1125-39. doi: 10.1109/TMI.2014.2376872. Epub 2014 Dec 2.
Electron and light microscopy imaging can now deliver high-quality image stacks of neural structures. However, the amount of human annotation effort required to analyze them remains a major bottleneck. While machine learning algorithms can be used to help automate this process, they require training data, which is time-consuming to obtain manually, especially in image stacks. Furthermore, due to changing experimental conditions, successive stacks often exhibit differences that are severe enough to make it difficult to apply a classifier trained on one stack directly to another. This means that the tedious annotation process has to be repeated for each new stack. In this paper, we present a domain adaptation algorithm that addresses this issue by effectively leveraging labeled examples across different acquisitions and significantly reducing the annotation requirements. Our approach can handle complex, nonlinear image feature transformations and scales to large microscopy datasets that often involve high-dimensional feature spaces and large 3D data volumes. We evaluate our approach on four challenging electron and light microscopy applications that exhibit very different image modalities and where annotation is very costly. Across all applications we achieve a significant improvement over state-of-the-art machine learning methods and demonstrate our ability to greatly reduce human annotation effort.