Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA.
Bioengineering Department, University of California, Los Angeles, CA, USA.
Nat Methods. 2019 Jan;16(1):103-110. doi: 10.1038/s41592-018-0239-0. Epub 2018 Dec 17.
We present deep-learning-enabled super-resolution across different fluorescence microscopy modalities. This data-driven approach does not require numerical modeling of the imaging process or the estimation of a point-spread function, and is based on training a generative adversarial network (GAN) to transform diffraction-limited input images into super-resolved ones. Using this framework, we improve the resolution of wide-field images acquired with low-numerical-aperture objectives, matching the resolution of images acquired using high-numerical-aperture objectives. We also demonstrate cross-modality super-resolution, transforming confocal microscopy images to match the resolution acquired with a stimulated emission depletion (STED) microscope. We further demonstrate that total internal reflection fluorescence (TIRF) microscopy images of subcellular structures within cells and tissues can be transformed to match the results obtained with a TIRF-based structured illumination microscope. The deep network rapidly outputs these super-resolved images, without any iterations or parameter search, and could serve to democratize super-resolution imaging.
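The abstract names the core technique — a GAN trained on registered pairs of lower- and higher-resolution images — without giving code. The sketch below is a minimal, hypothetical PyTorch illustration of that kind of training loop, not the authors' implementation: the network sizes, the adversarial loss weight (adv_weight), and the train_step helper are assumptions for illustration, and the paper's actual generator/discriminator architectures and loss terms differ.

```python
# Minimal sketch (not the authors' code): a GAN that learns to map
# diffraction-limited input images to matched higher-resolution targets.
# Assumes paired, co-registered (low-res, high-res) patches of equal pixel size,
# e.g. wide-field images registered to their high-NA counterparts.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small convolutional generator; hypothetical stand-in for the paper's deeper network."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):
        # Residual formulation: predict the missing high-frequency detail.
        return x + self.net(x)

class Discriminator(nn.Module):
    """Patch-level critic scoring whether an image looks like a real high-res acquisition."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, opt_g, opt_d, lowres, highres, adv_weight=0.01):
    """One alternating GAN update; hypothetical helper, loss weighting assumed."""
    bce = nn.BCEWithLogitsLoss()
    # Discriminator: distinguish real high-res images from generator outputs.
    opt_d.zero_grad()
    fake = gen(lowres).detach()
    d_real, d_fake = disc(highres), disc(fake)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    opt_d.step()
    # Generator: pixel fidelity to the target plus adversarial realism.
    opt_g.zero_grad()
    fake = gen(lowres)
    d_fake = disc(fake)
    g_loss = nn.functional.l1_loss(fake, highres) + adv_weight * bce(d_fake, torch.ones_like(d_fake))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage sketch (synthetic tensors stand in for registered image patches):
gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
low = torch.rand(4, 1, 64, 64)   # diffraction-limited inputs
high = torch.rand(4, 1, 64, 64)  # matched higher-resolution targets
print(train_step(gen, disc, opt_g, opt_d, low, high))
```

Note the key data assumption this framing makes: each input must have a pixel-aligned higher-resolution counterpart of the same scene (e.g., wide-field vs. high-NA, or confocal vs. STED), so careful co-registration of the image pairs is required before any such training.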