Chen Hsin-Chen, Jia Wenyan, Sun Xin, Li Zhaoxin, Li Yuecheng, Fernstrom John D, Burke Lora E, Baranowski Thomas, Sun Mingui
Department of Radiation Oncology, Washington University in Saint Louis, Saint Louis, MO, USA; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA.
Meas Sci Technol. 2015 Feb;26(2). doi: 10.1088/0957-0233/26/2/025702.
Image-based dietary assessment has recently received much attention in the obesity research community. In this assessment, foods in digital pictures are identified, and their portion sizes (volumes) are estimated. Although manual processing is currently the most widely used method, image processing holds much promise since it may eventually enable fully automatic dietary assessment. In this paper we study the problem of segmenting food objects from images. This segmentation is difficult because of the variety of food types, shapes, and colors; the diverse decorative patterns on food containers; and occlusions between food and non-food objects. We propose a novel method based on a saliency-aware active contour model (ACM) for automatic food segmentation from images acquired by a wearable camera. An integrated saliency estimation approach based on food location priors and visual attention features is designed to produce a saliency map of possible food regions in the input image. Next, a geometric contour primitive is generated and fitted to the saliency map by means of multi-resolution optimization with respect to a set of affine and elastic transformation parameters. The food regions are then extracted after contour fitting. Our experiments using 60 food images showed that the proposed method achieved significantly higher accuracy in food segmentation than conventional segmentation methods.
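The abstract's pipeline (a saliency map combining a food location prior with visual attention features, followed by region extraction) can be sketched minimally in NumPy. The synthetic image, the Gaussian center prior, the contrast feature, and the threshold used in place of the contour-fitting step are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Synthetic 64x64 grayscale "image": a bright disk stands in for the food item.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
image = np.where((yy - 34) ** 2 + (xx - 30) ** 2 < 12 ** 2, 0.9, 0.1)

# Location prior (assumption): a Gaussian centered on the frame, since a
# wearable camera tends to place the food being eaten near the image center.
prior = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * (h / 3) ** 2))

# Visual attention feature (assumption): local contrast against the mean intensity.
contrast = np.abs(image - image.mean())

# Integrated saliency map: combine prior and feature, normalize to [0, 1].
saliency = prior * contrast
saliency /= saliency.max()

# Crude stand-in for the contour-fitting stage: threshold the saliency map
# to obtain a candidate food region.
food_mask = saliency > 0.5
print("candidate food pixels:", int(food_mask.sum()))
```

In the paper itself this last step is replaced by fitting a geometric contour primitive to the saliency map via multi-resolution optimization over affine and elastic parameters; the threshold here only illustrates how a saliency map narrows the search to likely food regions.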