

Which deep learning model can best explain object representations of within-category exemplars?

Affiliation

Cognitive Science Research Group, Korea Brain Research Institute, Daegu, Republic of Korea.

Publication Information

J Vis. 2021 Sep 1;21(10):12. doi: 10.1167/jov.21.10.12.

Abstract

Deep neural network (DNN) models realize human-equivalent performance in tasks such as object recognition. Recent developments in the field have enabled testing the hierarchical similarity of object representation between the human brain and DNNs. However, the representational geometry of object exemplars within a single category using DNNs is unclear. In this study, we investigate which DNN model has the greatest ability to explain invariant within-category object representations by computing the similarity between representational geometries of visual features extracted at the high-level layers of different DNN models. We also test for the invariability of within-category object representations of these models by identifying object exemplars. Our results show that transfer learning models based on ResNet50 best explained both within-category object representation and object identification. These results suggest that the invariability of object representations in deep learning depends not on deepening the neural network but on building a better transfer learning model.
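The comparison the abstract describes, relating the representational geometries of high-level DNN features across within-category exemplars, is commonly implemented as a representational similarity analysis. Below is a minimal sketch of such an analysis, assuming a pretrained torchvision ResNet50, placeholder image paths, and a correlation-distance dissimilarity measure; none of these specifics are taken from the paper itself.

```python
# Hypothetical sketch: extract high-level ResNet50 features for within-category
# exemplars, build a representational dissimilarity matrix (RDM), and compare two
# RDMs with a rank correlation. Paths and the comparison RDM are placeholders.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_paths, model, device="cpu"):
    """Return an (n_exemplars, n_features) matrix from the penultimate layer."""
    # Drop the classification head so the forward pass ends at the pooled features.
    backbone = torch.nn.Sequential(*list(model.children())[:-1]).to(device).eval()
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
            feats.append(backbone(x).flatten().cpu().numpy())
    return np.stack(feats)

def rdm(features):
    """RDM: 1 - Pearson correlation between every pair of exemplar feature vectors."""
    return squareform(pdist(features, metric="correlation"))

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation of the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu]).correlation

if __name__ == "__main__":
    exemplars = ["face_01.jpg", "face_02.jpg", "face_03.jpg"]  # placeholder paths
    resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model_rdm = rdm(extract_features(exemplars, resnet))
    # A second model's (or a reference) RDM would be compared the same way:
    # print(compare_rdms(model_rdm, other_rdm))
```

In this kind of analysis, the model whose RDM correlates most strongly with the reference geometry is taken to best explain the within-category object representation; the paper reports that transfer learning models based on ResNet50 performed best by this criterion.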

