Hohman Fred, Hodas Nathan, Chau Duen Horng
College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA.
Data Sciences & Analytics, Pacific Northwest National Laboratory, Richland, WA 99354, USA.
Ext Abstr Hum Factors Computing Syst. 2017 May;2017:1694-1699. doi: 10.1145/3027063.3053103.
Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black boxes" because their internal complexity makes them hard to understand. Little research has focused on helping people explore and understand the relationship between a user's data and the representations learned by deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models, helping them explore the robustness of image classifiers.
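The abstract does not describe how ShapeShop produces its visualizations; one common technique for introspecting what an image classifier has learned is activation maximization, which synthesizes an input that maximally activates a chosen class. The sketch below is a minimal, illustrative example of that general idea, assuming a TensorFlow/Keras classifier (MobileNetV2) and a hypothetical target class index; it is not the authors' implementation.

```python
# Minimal sketch of activation maximization for visualizing what an image
# classifier has learned. The model and class index are illustrative
# assumptions, not ShapeShop's actual internals.
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")  # assumed stand-in classifier
class_index = 282  # hypothetical target class index

# Start from random noise and ascend the gradient of the class score
# with respect to the input image.
image = tf.Variable(tf.random.uniform((1, 224, 224, 3), minval=-0.5, maxval=0.5))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)

for step in range(200):
    with tf.GradientTape() as tape:
        scores = model(image, training=False)
        # Negative score: the optimizer minimizes, so this maximizes the class activation.
        loss = -scores[0, class_index]
    grads = tape.gradient(loss, image)
    optimizer.apply_gradients([(grads, image)])

# The optimized image approximates the classifier's learned notion of the class.
result = tf.clip_by_value(image, -1.0, 1.0).numpy()
```

Inspecting such synthesized images side by side for several models is one way to compare what different classifiers have learned, which is the kind of exploration the abstract describes.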