Albarqouni Shadi
IEEE Pulse. 2018 Sep-Oct;9(5):21. doi: 10.1109/MPUL.2018.2866356.
One of the major challenges currently facing researchers who apply deep learning (DL) models to medical image analysis is the limited amount of annotated data. Collecting ground-truth annotations demands domain knowledge, money, and time, making it infeasible for large-scale databases. Albarqouni et al. [S5] presented a novel concept for learning DL models from noisy annotations collected through crowdsourcing platforms (e.g., Amazon Mechanical Turk and Crowdflower) by introducing a robust aggregation layer into the convolutional neural network (Figure S2). Their method was validated on a publicly available database of breast cancer histology images, where the robust aggregation layer clearly outperformed the majority-voting baseline. In follow-up work, Albarqouni et al. [S6] introduced the concept of translating biomedical images into video game objects. This technique represents medical images as star-shaped objects that can be easily embedded into a readily available game canvas, reducing the domain knowledge required for annotation. Promising results were reported in comparison with conventional crowdsourcing platforms.
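To make the baseline concrete: majority voting simply takes, for each image patch, the label most of the crowd annotators agreed on. The sketch below is a minimal illustration of that baseline only (it is not the authors' learned aggregation layer); the `majority_vote` function and the example labels are assumptions for demonstration.

```python
from collections import Counter

def majority_vote(crowd_labels):
    """Aggregate per-sample crowd annotations by majority voting.

    crowd_labels: list of lists, one inner list of annotator labels
    per sample. Returns the most frequent label for each sample
    (ties resolved by Counter.most_common's first-seen order).
    """
    return [Counter(labels).most_common(1)[0][0] for labels in crowd_labels]

# Hypothetical example: three annotators label two histology patches
# as mitotic (1) or non-mitotic (0).
votes = [[1, 1, 0], [0, 0, 1]]
print(majority_vote(votes))  # [1, 0]
```

Unlike this fixed rule, the aggregation layer in [S5] is trained jointly with the network, letting it down-weight unreliable annotators instead of counting every vote equally.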