

When will AI misclassify? Intuiting failures on natural images.

Affiliations

Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.

Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA.

Publication

J Vis. 2023 Apr 3;23(4):4. doi: 10.1167/jov.23.4.4.

DOI: 10.1167/jov.23.4.4
PMID: 37022698
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10082388/
Abstract

Machine recognition systems now rival humans in their ability to classify natural images. However, their success is accompanied by a striking failure: a tendency to commit bizarre misclassifications on inputs specifically selected to fool them. What do ordinary people know about the nature and prevalence of such classification errors? Here, five experiments exploit the recent discovery of "natural adversarial examples" to ask whether naive observers can predict when and how machines will misclassify natural images. Whereas classical adversarial examples are inputs that have been minimally perturbed to induce misclassifications, natural adversarial examples are simply unmodified natural photographs that consistently fool a wide variety of machine recognition systems. For example, a bird casting a shadow might be misclassified as a sundial, or a beach umbrella made of straw might be misclassified as a broom. In Experiment 1, subjects accurately predicted which natural images machines would misclassify and which they would not. Experiments 2 through 4 extended this ability to how the images would be misclassified, showing that anticipating machine misclassifications goes beyond merely identifying an image as nonprototypical. Finally, Experiment 5 replicated these findings under more ecologically valid conditions, demonstrating that subjects can anticipate misclassifications not only under two-alternative forced-choice conditions (as in Experiments 1-4), but also when the images appear one at a time in a continuous stream-a skill that may be of value to human-machine teams. We suggest that ordinary people can intuit how easy or hard a natural image is to classify, and we discuss the implications of these results for practical and theoretical issues at the interface of biological and artificial vision.


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4dc0/10082388/826ec9626d67/jovi-23-4-4-f001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4dc0/10082388/7d1dee804764/jovi-23-4-4-f002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4dc0/10082388/e153faa25f0a/jovi-23-4-4-f003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4dc0/10082388/26445a6ecec0/jovi-23-4-4-f004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4dc0/10082388/d53b7870ad0f/jovi-23-4-4-f005.jpg

Similar Articles

1. When will AI misclassify? Intuiting failures on natural images.
   J Vis. 2023 Apr 3;23(4):4. doi: 10.1167/jov.23.4.4.
2. Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations.
   Medicine (Baltimore). 2020 Dec 11;99(50):e23568. doi: 10.1097/MD.0000000000023568.
3. Humans can decipher adversarial images.
   Nat Commun. 2019 Mar 22;10(1):1334. doi: 10.1038/s41467-019-08931-6.
4. Development of an artificial intelligence-based assessment model for prediction of embryo viability using static images captured by optical light microscopy during IVF.
   Hum Reprod. 2020 Apr 28;35(4):770-784. doi: 10.1093/humrep/deaa013.
5. Subtle adversarial image manipulations influence both human and machine perception.
   Nat Commun. 2023 Aug 15;14(1):4933. doi: 10.1038/s41467-023-40499-0.
6. Can Negation Be Depicted? Comparing Human and Machine Understanding of Visual Representations.
   Cogn Sci. 2023 Mar;47(3):e13258. doi: 10.1111/cogs.13258.
7. Approaching Adversarial Example Classification with Chaos Theory.
   Entropy (Basel). 2020 Oct 24;22(11):1201. doi: 10.3390/e22111201.
8. Perception without preconception: comparison between the human and machine learner in recognition of tissues from histological sections.
   Sci Rep. 2022 Sep 30;12(1):16420. doi: 10.1038/s41598-022-20012-1.
9. Machine printed text and handwriting identification in noisy document images.
   IEEE Trans Pattern Anal Mach Intell. 2004 Mar;26(3):337-53. doi: 10.1109/TPAMI.2004.1262324.
10. [Artificial intelligence in gastroenterology].
   Dtsch Med Wochenschr. 2020 Oct;145(20):1450-1454. doi: 10.1055/a-1013-6593. Epub 2020 Oct 6.

References Cited in This Article

1. Can You Hear Me Now? Sensitive Comparisons of Human and Machine Perception.
   Cogn Sci. 2022 Oct;46(10):e13191. doi: 10.1111/cogs.13191.
2. Understanding transformation tolerant visual object representations in the human brain and convolutional neural networks.
   Neuroimage. 2022 Nov;263:119635. doi: 10.1016/j.neuroimage.2022.119635. Epub 2022 Sep 15.
3. Data quality of platforms and panels for online behavioral research.
   Behav Res Methods. 2022 Aug;54(4):1643-1662. doi: 10.3758/s13428-021-01694-3. Epub 2021 Sep 29.
4. Five points to check when comparing visual perception in humans and machines.
   J Vis. 2021 Mar 1;21(3):16. doi: 10.1167/jov.21.3.16.
5. Performance vs. competence in human-machine comparisons.
   Proc Natl Acad Sci U S A. 2020 Oct 27;117(43):26562-26571. doi: 10.1073/pnas.1905334117. Epub 2020 Oct 13.
6. What do adversarial images tell us about human vision?
   Elife. 2020 Sep 2;9:e55978. doi: 10.7554/eLife.55978.
7. Deep Learning: The Good, the Bad, and the Ugly.
   Annu Rev Vis Sci. 2019 Sep 15;5:399-426. doi: 10.1146/annurev-vision-091718-014951. Epub 2019 Aug 8.
8. Humans can decipher adversarial images.
   Nat Commun. 2019 Mar 22;10(1):1334. doi: 10.1038/s41467-019-08931-6.
9. Adversarial attacks on medical machine learning.
   Science. 2019 Mar 22;363(6433):1287-1289. doi: 10.1126/science.aaw4399.
10. Deep convolutional networks do not classify based on global object shape.
   PLoS Comput Biol. 2018 Dec 7;14(12):e1006613. doi: 10.1371/journal.pcbi.1006613. eCollection 2018 Dec.