IEEE Trans Neural Netw Learn Syst. 2014 May;25(5):870-81. doi: 10.1109/TNNLS.2013.2281761.
The task of structured output prediction deals with learning general functional dependencies between arbitrary input and output spaces. In this context, two loss-sensitive formulations for maximum-margin training have been proposed in the literature, referred to as margin rescaling and slack rescaling, respectively. The latter is believed to be more accurate and easier to handle. Nevertheless, it is not popular due to the lack of known efficient inference algorithms; margin rescaling--which requires a similar type of inference as normal structured prediction--is therefore the most often used approach. Focusing on the task of label sequence learning, we define a general framework that can handle a large class of inference problems based on Hamming-like loss functions and the concept of decomposability of the underlying joint feature map. In particular, we present an efficient generic algorithm that can handle both rescaling approaches and is guaranteed to find an optimal solution in polynomial time.
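To illustrate the key point that margin rescaling with a decomposable (Hamming) loss reduces to ordinary inference on modified scores, here is a minimal sketch of loss-augmented Viterbi decoding for a linear-chain sequence labeler. This is an illustrative example, not the paper's algorithm: the function name, the NumPy array layout, and the unit per-position loss are assumptions, and the sketch covers only margin rescaling, not the slack-rescaling case the paper also handles.

```python
import numpy as np

def loss_augmented_viterbi(emissions, transitions, y_true):
    """Margin-rescaled loss-augmented Viterbi (illustrative sketch).

    Because the Hamming loss decomposes over positions, margin rescaling
    amounts to adding +1 to the score of every label that disagrees with
    the gold sequence y_true, then running standard Viterbi.

    emissions:   (T, K) array of per-position label scores
    transitions: (K, K) array of transition scores
    y_true:      length-T gold label sequence
    """
    T, K = emissions.shape
    # Augment scores with the decomposed Hamming loss.
    aug = emissions.copy()
    for t in range(T):
        aug[t] += 1.0
        aug[t, y_true[t]] -= 1.0  # no loss for the correct label
    # Standard Viterbi recursion on the augmented scores.
    delta = np.empty((T, K))
    back = np.zeros((T, K), dtype=int)
    delta[0] = aug[0]
    for t in range(1, T):
        # scores[i, j] = best path ending in i at t-1, then i -> j.
        scores = delta[t - 1][:, None] + transitions + aug[t][None, :]
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0)
    # Backtrack the highest-scoring (loss-augmented) label sequence.
    y = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        y.append(int(back[t, y[-1]]))
    return y[::-1], float(delta[-1].max())
```

The returned score equals the raw model score of the predicted sequence plus its Hamming distance to the gold sequence, which is exactly the margin-rescaled objective maximized during cutting-plane or subgradient training.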