Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA.
Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, 25 Shattuck St, Boston, MA, USA; Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, USA.
Neuroimage. 2022 Oct 15;260:119474. doi: 10.1016/j.neuroimage.2022.119474. Epub 2022 Jul 13.
The removal of non-brain signal from magnetic resonance imaging (MRI) data, known as skull-stripping, is an integral component of many neuroimage analysis streams. Despite their abundance, popular classical skull-stripping methods are usually tailored to images with specific acquisition properties, namely near-isotropic resolution and T1-weighted (T1w) MRI contrast, which are prevalent in research settings. As a result, existing tools tend to adapt poorly to other image types, such as stacks of thick slices acquired with fast spin-echo (FSE) MRI that are common in the clinic. While learning-based approaches for brain extraction have gained traction in recent years, these methods face a similar burden, as they are only effective for image types seen during the training procedure. To achieve robust skull-stripping across a landscape of imaging protocols, we introduce SynthStrip, a rapid, learning-based brain-extraction tool. By leveraging anatomical segmentations to generate an entirely synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images, SynthStrip learns to successfully generalize to a variety of real acquired brain images, removing the need for training data with target contrasts. We demonstrate the efficacy of SynthStrip for a diverse set of image acquisitions and resolutions across subject populations, ranging from newborn to adult. We show substantial improvements in accuracy over popular skull-stripping baselines - all with a single trained model. Our method and labeled evaluation data are available at https://w3id.org/synthstrip.
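The abstract's core idea, generating synthetic training images from anatomical label maps with randomized intensity distributions, can be illustrated with a toy sketch. This is a hedged, minimal illustration only: the function name `synthesize_image`, the uniform intensity prior, and the simple additive-noise model are assumptions for demonstration, not the actual SynthStrip generator, which also applies random deformations, bias fields, downsampling, and artifacts.

```python
import numpy as np

def synthesize_image(label_map, seed=None):
    """Toy label-driven image synthesis: each anatomical label receives a
    randomly sampled mean intensity, followed by additive Gaussian noise.
    Illustrative only; not the actual SynthStrip generation pipeline."""
    rng = np.random.default_rng(seed)
    # Draw one random mean intensity per label from a uniform prior,
    # so the "contrast" between structures differs on every draw.
    means = {lab: rng.uniform(0.0, 1.0) for lab in np.unique(label_map)}
    image = np.vectorize(means.get)(label_map).astype(float)
    # Add noise with a randomly drawn standard deviation.
    image += rng.normal(0.0, rng.uniform(0.01, 0.1), size=label_map.shape)
    return np.clip(image, 0.0, 1.0)

# Toy 2-D "segmentation": background (0), non-brain tissue (2), brain (1).
seg = np.zeros((64, 64), dtype=int)
seg[8:56, 8:56] = 2
seg[16:48, 16:48] = 1
img = synthesize_image(seg, seed=0)
```

Because the contrast is resampled on every draw, a network trained on such images cannot rely on any fixed intensity profile, which is what lets a single model generalize across MRI contrasts and acquisitions.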