IEEE Trans Neural Netw Learn Syst. 2021 May;32(5):2285-2291. doi: 10.1109/TNNLS.2020.2995920. Epub 2021 May 3.
This brief studies a variation of the stochastic multiarmed bandit (MAB) problem in which the agent has access to a piece of a priori knowledge called the near-optimal mean reward (NoMR). In standard MAB problems, the agent tries to identify the optimal arm without knowing the optimal mean reward. In many practical applications, however, the agent can obtain an estimate of the optimal mean reward, which we define as the NoMR. For instance, in an online Web advertising system based on MAB methods, a user's near-optimal average click-through rate (the NoMR) can be roughly estimated from his or her demographic characteristics. Exploiting the NoMR is therefore an effective way to improve an algorithm's performance. First, we formalize the stochastic MAB problem in which the known NoMR lies between the suboptimal mean reward and the optimal mean reward. Second, using cumulative regret as the performance metric, we show that the lower bound on the cumulative regret for this problem is Ω(1/∆), where ∆ is the gap between the suboptimal mean reward and the optimal mean reward. In contrast to the conventional MAB problem, whose regret lower bound grows logarithmically, our lower bound is uniform in the learning step. Third, a novel algorithm, NoMR-BANDIT, is proposed to solve this problem; it uses the NoMR to design an efficient exploration strategy. We further analyze the regret upper bound of NoMR-BANDIT and show that it is also uniform, O(1/∆), matching the order of the lower bound; consequently, NoMR-BANDIT is an order-optimal algorithm for this problem. To improve the generality of our method, we also propose CASCADE-BANDIT, built on NoMR-BANDIT, for the case in which the NoMR is less than the suboptimal mean reward. CASCADE-BANDIT has a regret upper bound of O(∆ log n), where n denotes the learning step, which is of the same order as that of conventional MAB methods. Finally, extensive experimental results demonstrate that NoMR-BANDIT is more efficient than the compared bandit algorithms: after sufficient iterations, NoMR-BANDIT incurs 10%-80% less cumulative regret than the state of the art.
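The abstract does not spell out the internals of NoMR-BANDIT, but the stated premise (a known threshold lying strictly between the suboptimal and optimal mean rewards) suggests a natural commit-once exploration pattern. The following Python snippet is a minimal sketch of that idea under assumptions: the round-robin exploration rule, the confidence radius, and the environment callback `pull` are all hypothetical placeholders, not the construction from the paper.

import numpy as np

def nomr_bandit_sketch(pull, n_arms, nomr, horizon):
    """Illustrative NoMR-guided bandit loop (a sketch, not the paper's algorithm).

    Idea: since the known NoMR lies strictly between the best suboptimal mean
    and the optimal mean, any arm whose empirical mean confidently exceeds the
    NoMR must be the optimal arm, so the learner can commit to it and stop
    paying exploration regret. `pull(arm)` is an assumed environment callback
    that returns a reward in [0, 1].
    """
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    committed = None
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if committed is not None:
            arm = committed                  # exploit the identified optimal arm
        else:
            arm = int(np.argmin(counts))     # explore the least-pulled arm
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward

        if committed is None:
            mean = sums[arm] / counts[arm]
            # hypothetical confidence radius; shrinks as the arm is pulled more
            margin = np.sqrt(np.log(t + 1.0) / counts[arm])
            if mean - margin > nomr:         # empirical mean clears the NoMR threshold
                committed = arm
    return total_reward

# Toy usage: three Bernoulli arms with means 0.3, 0.5, 0.8 and NoMR = 0.65.
rng = np.random.default_rng(0)
arm_means = [0.3, 0.5, 0.8]
pull = lambda a: float(rng.random() < arm_means[a])
print(nomr_bandit_sketch(pull, n_arms=3, nomr=0.65, horizon=10_000))

Once an arm is committed to, no further regret accumulates, which conveys the intuition behind the horizon-independent bounds stated above; the specific exploration rule and confidence radius here are placeholders rather than the method analyzed in the paper.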