
Group Maximum Differentiation Competition: Model Comparison with Few Samples

Author Information

Ma Kede, Duanmu Zhengfang, Wang Zhou, Wu Qingbo, Liu Wentao, Yong Hongwei, Li Hongliang, Zhang Lei

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2020 Apr;42(4):851-864. doi: 10.1109/TPAMI.2018.2889948. Epub 2018 Dec 27.

Abstract

In many science and engineering fields that require computational models to predict certain physical quantities, we are often faced with selecting the best model under the constraint that only a small sample set can be physically measured. One such example is the prediction of human perception of visual quality, where sample images live in a high-dimensional space with enormous content variations. We propose a new methodology for model comparison named group maximum differentiation (gMAD) competition. Given multiple computational models, gMAD maximizes the chances of falsifying a "defender" model using the rest of the models as "attackers". It exploits the sample space to find sample pairs that maximally differentiate the attackers while holding the defender fixed. Based on the results of the attacking-defending game, we introduce two measures, aggressiveness and resistance, to summarize each model's performance at attacking other models and at defending against attacks from other models, respectively. We demonstrate the gMAD competition on three examples: image quality, image aesthetics, and streaming video quality-of-experience. Although these examples focus on visually discriminable quantities, the gMAD methodology can be extended to many other fields, and it is especially useful when the sample space is large, physical measurement is expensive, and the cost of computational prediction is low.
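The attacking-defending search described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes each model has already been reduced to a vector of scalar quality scores over a candidate sample set, and the function name `gmad_pair` and the `tol` parameter are hypothetical. For a fixed defender and one attacker, it finds the sample pair that the attacker rates most differently while the defender rates the two samples (nearly) equally.

```python
def gmad_pair(defender_scores, attacker_scores, tol=1e-2):
    """Return the index pair (i, j) maximizing the attacker's score gap
    attacker_scores[i] - attacker_scores[j], subject to the defender
    scoring the two samples within `tol` of each other."""
    best, best_gap = None, float("-inf")
    n = len(defender_scores)
    for i in range(n):
        for j in range(n):
            # Constraint: the defender must judge the pair as equal quality.
            if i != j and abs(defender_scores[i] - defender_scores[j]) <= tol:
                gap = attacker_scores[i] - attacker_scores[j]
                if gap > best_gap:
                    best_gap, best = gap, (i, j)
    return best, best_gap


# Illustrative scores for three samples under two models.
pair, gap = gmad_pair([0.5, 0.5, 0.9], [0.1, 0.8, 0.5])
```

In the full competition, each model takes a turn as the defender against every other model as attacker; the selected pairs are then physically measured (e.g., by human subjects), and the outcomes are aggregated into the aggressiveness and resistance measures.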

