Ndayikengurukiye Didier, Mignotte Max
Département d'Informatique et de Recherche Opérationnelles, Université de Montréal, Montréal, QC H3T 1J4, Canada.
J Imaging. 2022 Apr 13;8(4):110. doi: 10.3390/jimaging8040110.
The effortless detection of salient objects by humans has been the subject of research in several fields, including computer vision, as it has many applications. However, salient object detection remains a challenge for many computer models dealing with color and textured images. Most of them process color and texture separately and therefore implicitly consider them as independent features, which is not the case in reality. Herein, we propose a novel and efficient strategy, through a simple model with almost no internal parameters, that generates a robust saliency map for a natural image. This strategy consists of integrating color information into local textural patterns to characterize a color micro-texture. It is the simple yet powerful LTP (Local Ternary Patterns) texture descriptor, applied to opposing color pairs of a color space, that allows us to achieve this end. Each color micro-texture is represented by a vector whose components come from a superpixel obtained by the SLICO (Simple Linear Iterative Clustering with zero parameter) algorithm, which is simple, fast, and exhibits state-of-the-art boundary adherence. The degree of dissimilarity between each pair of color micro-textures is computed by the FastMap method, a fast version of MDS (Multi-dimensional Scaling) that accounts for the color micro-textures' non-linearity while preserving their distances. These degrees of dissimilarity give us an intermediate saliency map for each of the RGB (Red-Green-Blue), HSL (Hue-Saturation-Luminance), LUV (L for luminance, U and V for chromaticity values) and CMY (Cyan-Magenta-Yellow) color spaces. The final saliency map is their combination, which takes advantage of the strengths of each. The MAE (Mean Absolute Error), MSE (Mean Squared Error) and F_β measures of our saliency maps on the five most widely used datasets show that our model outperforms several state-of-the-art models.
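As a minimal sketch of the LTP step described above (the function name, tolerance value, and neighbour ordering are illustrative assumptions, not the authors' implementation), the ternary comparison against the centre pixel can be split into the conventional "upper" and "lower" binary codes:

```python
import numpy as np

def ltp_codes(channel, t=5):
    """Local Ternary Patterns for one image channel (illustrative sketch).

    Each pixel's 8 neighbours are compared to the centre value c with a
    tolerance t: +1 if neighbour > c + t, -1 if neighbour < c - t, else 0.
    The ternary pattern is split into an 'upper' and a 'lower' binary
    code, the usual LTP convention; border pixels are skipped here.
    """
    img = channel.astype(np.int32)
    h, w = img.shape
    # 8-neighbourhood offsets, clockwise from the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    upper = np.zeros((h - 2, w - 2), dtype=np.uint8)
    lower = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= (neigh > center + t).astype(np.uint8) << bit
        lower |= (neigh < center - t).astype(np.uint8) << bit
    return upper, lower
```

In the paper's strategy, such codes would be computed on opposing color pairs of each color space rather than on a single gray channel, so that the resulting micro-texture vectors carry color information.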
Being simple and efficient, our model could be combined with classic models using color contrast for a better performance.
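The FastMap projection mentioned above can be illustrated with the classic cosine-law formula (a hedged sketch under the standard FastMap formulation, not the authors' code; the function name and pivot-selection strategy are assumptions):

```python
import numpy as np

def fastmap_coordinate(dist, a, b):
    """One FastMap projection step (illustrative sketch).

    Given a symmetric pairwise distance matrix `dist` and two pivot
    indices a and b, each object O is projected onto the line through
    the pivots via the law of cosines:
        x_O = (d(O,a)^2 + d(a,b)^2 - d(O,b)^2) / (2 * d(a,b))
    Repeating this on residual distances yields a low-dimensional
    embedding that approximately preserves the original distances,
    which is what makes FastMap a fast stand-in for MDS.
    """
    d_ab = dist[a, b]
    if d_ab == 0:
        # Degenerate pivots: every object projects to the origin.
        return np.zeros(dist.shape[0])
    return (dist[:, a] ** 2 + d_ab ** 2 - dist[:, b] ** 2) / (2.0 * d_ab)
```

For points already on a line, the recovered coordinates match the originals up to translation, which is a quick sanity check on the formula.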