Center for Life Nano- and Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena 291, 00161, Rome, Italy.
D-TAILS srl, 00161, Rome, Italy.
Sci Rep. 2022 May 21;12(1):8623. doi: 10.1038/s41598-022-12571-0.
Blind structured illumination microscopy (blind-SIM) enhances optical resolution without requiring nonlinear effects or pre-defined illumination patterns. It is therefore advantageous in experimental conditions where toxicity or biological fluctuations are an issue. In this work, we introduce a custom convolutional neural network architecture for blind-SIM: BS-CNN. We show that BS-CNN outperforms other blind-SIM deconvolution algorithms, providing a resolution improvement by a factor of 2.17 together with very high fidelity (artifact reduction). Furthermore, BS-CNN proves robust to cross-database variability: it is trained on synthetically augmented open-source data and evaluated on experimental acquisitions. This approach paves the way for the use of CNN-based deconvolution in all scenarios in which a statistical model of the illumination is available while the specific realizations are unknown or noisy.
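The abstract states that BS-CNN is trained on synthetically augmented data in which the illumination is described only statistically, with the individual realizations unknown. The sketch below illustrates that general idea, not the authors' actual pipeline or network: the function name simulate_blind_sim_stack, the sinusoidal illumination model, and all parameter values are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): synthesize blind-SIM raw frames
# where the illumination follows a known statistical model (random sinusoidal
# patterns) but each realization is unknown to the reconstruction algorithm.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_blind_sim_stack(ground_truth, n_frames=9, psf_sigma=2.0,
                             noise_std=0.01, rng=None):
    """Return a stack of low-resolution frames under unknown illuminations."""
    rng = np.random.default_rng(rng)
    h, w = ground_truth.shape
    yy, xx = np.mgrid[0:h, 0:w]
    frames = []
    for _ in range(n_frames):
        # Unknown realization drawn from a simple statistical model:
        # a sinusoid with random spatial frequency, orientation, and phase.
        freq = rng.uniform(0.05, 0.15)      # cycles per pixel (assumed range)
        theta = rng.uniform(0, np.pi)       # pattern orientation
        phase = rng.uniform(0, 2 * np.pi)   # pattern phase
        illum = 0.5 * (1 + np.cos(2 * np.pi * freq *
                                  (xx * np.cos(theta) + yy * np.sin(theta))
                                  + phase))
        # Detection: illuminated sample blurred by a Gaussian PSF, plus noise.
        frame = gaussian_filter(ground_truth * illum, psf_sigma)
        frame += rng.normal(0, noise_std, frame.shape)
        frames.append(frame)
    return np.stack(frames)

# Toy usage: a synthetic ground-truth structure and its simulated raw stack.
gt = np.zeros((128, 128))
gt[60:68, 20:110] = 1.0                     # a thin bright bar
stack = simulate_blind_sim_stack(gt, rng=0)
print(stack.shape)                          # (9, 128, 128)
```

In a training setup of this kind, the network would receive the stack of raw frames as input and the ground-truth image as the target, so that it learns to deconvolve without ever being given the specific illumination patterns.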