Generalising from conventional pipelines using deep learning in high-throughput screening workflows.

Affiliations

National Department of Neurosurgery, Centre Hospitalier de Luxembourg, 4, Rue Ernest Barble, 1210, Luxembourg (City), Luxembourg.

Interventional Neuroscience Group, Luxembourg Center for Systems Biomedicine, University of Luxembourg, 6, Avenue du Swing, 4367, Belvaux, Luxembourg.

Publication Information

Sci Rep. 2022 Jul 6;12(1):11465. doi: 10.1038/s41598-022-15623-7.

Abstract

The study of complex diseases relies on large amounts of data to build models toward precision medicine. Such data acquisition is feasible in the context of high-throughput screening, in which the quality of the results relies on the accuracy of the image analysis. Although state-of-the-art solutions for image segmentation employ deep learning approaches, the high cost of manually generating ground-truth labels for model training hampers their day-to-day application in experimental laboratories. Alternatively, traditional computer-vision-based solutions do not need expensive labels for their implementation. Our work combines both approaches by training a deep learning network on weak training labels automatically generated with conventional computer vision methods. Our network surpasses the conventional segmentation quality by generalising beyond the noisy labels, providing a 25% increase in mean intersection over union while simultaneously reducing development and inference times. Our solution was embedded into an easy-to-use graphical user interface that allows researchers to assess the predictions and correct potential inaccuracies with minimal human input. To demonstrate the feasibility of training a deep learning solution on a large dataset of noisy labels automatically generated by a conventional pipeline, we compared our solution against the common approach of training a model on a small dataset manually curated by several experts. Our work suggests that humans perform better at context interpretation, such as error assessment, while computers outperform them at pixel-by-pixel fine segmentation. Such pipelines are illustrated with a case study on image segmentation for autophagy events. This work aims for a better translation of new technologies to real-world settings in microscopy-image analysis.
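The abstract's pipeline has two concrete, reproducible ingredients: weak segmentation labels generated automatically by a conventional computer-vision pipeline, and mean intersection over union (mIoU) as the evaluation metric. The following is a minimal sketch of both, not the authors' released code; the specific conventional pipeline chosen here (Otsu thresholding plus morphological cleanup) and all function and parameter names are illustrative assumptions.

```python
# Sketch: weak-label generation with classical computer vision, and the
# mIoU metric the abstract uses to quantify segmentation quality.
# Illustrative only; not the authors' implementation.
import numpy as np
from skimage import filters, morphology


def weak_label(image: np.ndarray, min_size: int = 64) -> np.ndarray:
    """Produce a cheap, noisy ("weak") binary mask: Otsu threshold
    followed by morphological cleanup. Run over a large image set, this
    yields training labels without any manual annotation."""
    mask = image > filters.threshold_otsu(image)
    mask = morphology.remove_small_objects(mask, min_size=min_size)
    mask = morphology.binary_closing(mask, morphology.disk(2))
    return mask.astype(np.uint8)


def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int = 2) -> float:
    """Mean intersection over union across classes (the metric in which
    the abstract reports a 25% improvement)."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks; leave it out
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

A segmentation network (for example, a U-Net) trained on many such automatically generated masks can average out their individual errors, which is the generalisation effect the abstract quantifies as the 25% mIoU gain over the conventional pipeline itself.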
