IEEE Trans Neural Netw Learn Syst. 2013 Sep;24(9):1377-89. doi: 10.1109/TNNLS.2013.2254721.
We propose a direct approach to learning sparse kernel classifiers for multi-instance (MI) classification that improves efficiency while maintaining predictive accuracy. The proposed method builds on a convex formulation for MI classification that uses the average score of the individual instances in a bag for bag-level prediction. In contrast, existing formulations use the maximum score of the individual instances in each bag, which leads to nonconvex optimization problems. Based on this convex MI framework, we formulate a sparse kernel learning algorithm by imposing additional constraints on the objective function that cap the number of kernel expansions allowed in the prediction function. The resulting sparse learning problem for MI classification is convex with respect to the classifier weights. We can therefore employ an effective optimization strategy to solve the problem of jointly learning the classifier and the expansion vectors. In addition, the proposed formulation explicitly controls the complexity of the prediction model while still maintaining competitive predictive performance. Experimental results on benchmark data sets demonstrate that our proposed approach is effective in building very sparse kernel classifiers while achieving performance comparable to state-of-the-art MI classifiers.
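The contrast between the two bag-level scoring rules can be illustrated with a minimal sketch. The function names, the RBF kernel choice, and the toy parameters below are illustrative assumptions, not the paper's actual implementation: a bag's score is either the average or the maximum of per-instance kernel-expansion scores, where sparsity comes from limiting the number of expansion vectors `Z`.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Pairwise RBF kernel between instances X (n x d) and
    # expansion vectors Z (m x d). Kernel choice is illustrative.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def bag_score_avg(bag, Z, alpha, b=0.0, gamma=1.0):
    # Average-instance rule: f(B) = mean_i( sum_j alpha_j k(x_i, z_j) + b ).
    # Averaging keeps the bag score linear in alpha, so the training
    # objective remains convex in the classifier weights.
    K = rbf_kernel(bag, Z, gamma)          # (n_instances, n_expansions)
    return float((K @ alpha + b).mean())

def bag_score_max(bag, Z, alpha, b=0.0, gamma=1.0):
    # Max-instance rule used by earlier MI formulations; the max over
    # instances makes the resulting optimization problem nonconvex.
    K = rbf_kernel(bag, Z, gamma)
    return float((K @ alpha + b).max())
```

With a small expansion set `Z` (the sparsity budget), prediction cost per bag is O(n_instances × n_expansions), which is why capping the number of expansions yields an efficient classifier.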