Miyawaki Yoichi, Uchida Hajime, Yamashita Okito, Sato Masa-aki, Morito Yusuke, Tanabe Hiroki C, Sadato Norihiro, Kamitani Yukiyasu
National Institute of Information and Communications Technology, Kyoto, Japan.
Neuron. 2008 Dec 10;60(5):915-29. doi: 10.1016/j.neuron.2008.11.004.
Perceptual experience consists of an enormous number of possible states. Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories. Constraint-free visual image reconstruction is more challenging, as it is impractical to specify brain activity for all possible images. In this study, we reconstructed visual images by combining local image bases of multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10 × 10-patch images (2^100 possible states) were accurately reconstructed without any image prior on a single trial or volume basis by measuring brain activity only for several hundred random images. Reconstruction was also used to identify the presented image among millions of candidates. The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multivoxel patterns.
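To make the combination-of-local-decoders idea concrete, the sketch below illustrates one way such a multiscale reconstruction could be wired up. It is a minimal illustration, not the authors' implementation: the function names (make_bases, fit_local_decoders, reconstruct) are hypothetical, the local decoders are fit here with plain least squares rather than the sparse estimation used in the study to select relevant voxels, and the per-scale combination weights are assumed uniform instead of being optimized.

```python
import numpy as np

# Hypothetical sketch of multiscale local image decoding (assumptions noted above):
# each local image basis gets its own linear decoder from the voxel pattern,
# and the reconstruction averages the decoded contrasts over all bases.

PATCH = 10  # stimulus is a 10 x 10 binary-contrast image
SCALES = [(1, 1), (1, 2), (2, 1), (2, 2)]  # local basis sizes (assumed set)

def make_bases(scale):
    """Enumerate all local image bases of a given scale as 10x10 masks."""
    h, w = scale
    bases = []
    for i in range(PATCH - h + 1):
        for j in range(PATCH - w + 1):
            m = np.zeros((PATCH, PATCH))
            m[i:i + h, j:j + w] = 1.0
            bases.append(m)
    return bases

def fit_local_decoders(fmri, images, bases):
    """Fit one linear decoder per basis: voxel pattern -> mean contrast of that basis."""
    # fmri: (n_trials, n_voxels); images: (n_trials, 10, 10) binary stimuli
    targets = np.stack(
        [(images * b).sum(axis=(1, 2)) / b.sum() for b in bases], axis=1
    )
    X = np.hstack([fmri, np.ones((fmri.shape[0], 1))])  # add bias column
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)     # (n_voxels + 1, n_bases)
    return W

def reconstruct(fmri_trial, decoders_by_scale, bases_by_scale):
    """Combine contrasts decoded at every scale into one image estimate."""
    x = np.append(fmri_trial, 1.0)
    recon = np.zeros((PATCH, PATCH))
    norm = np.zeros((PATCH, PATCH))
    for scale in SCALES:
        contrasts = x @ decoders_by_scale[scale]        # one value per local basis
        for c, b in zip(contrasts, bases_by_scale[scale]):
            recon += c * b
            norm += b
    return recon / norm  # average overlapping local predictions
```

In the study itself, the key ingredients this sketch glosses over are the automatic selection of relevant voxels (sparse estimation rather than ordinary least squares) and the learned weighting of the different scales when their local predictions are combined.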