AI Grand ICT Center, Dong-Eui University, Busan 47340, Republic of Korea.
Department of Computer Software Engineering, Dong-Eui University, Busan 47340, Republic of Korea.
Sensors (Basel). 2024 Mar 22;24(7):2044. doi: 10.3390/s24072044.
In this paper, we propose a method for estimating the amount of food intake from both color and depth images. Two pairs of color and depth images are captured, one pair before and one pair after the meal. The pre- and post-meal color images are fed to Mask R-CNN to detect the food types and the regions where food is present. The post-meal color image is spatially transformed so that the food region locations match those in the pre-meal color image, and the same transformation is applied to the post-meal depth image. The pixel values of the post-meal depth image are then compensated to reflect the 3D position changes caused by the image transformation. For both the pre- and post-meal depth images, a space volume is computed for each food region by dividing the space between the food surface and the camera into multiple tetrahedra. The food intake amount is estimated as the difference between the space volumes computed from the pre- and post-meal depth images. Simulation results verify that the proposed method estimates the food intake amount with an error of at most 2.2%.
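The tetrahedral volume computation is the core of the estimation step. The Python sketch below illustrates one plausible way to compute the space volume between the camera and a masked food surface, assuming a pinhole camera model with intrinsics fx, fy, cx, cy and a boolean food-region mask produced by Mask R-CNN. The function names, the 2×2-quad triangulation, and the sign convention for the intake difference are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth map (meters) to a 3D point per pixel with a pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

def tetra_volume(a, b, c):
    """Volume of the tetrahedron spanned by points a, b, c and the camera origin."""
    return np.abs(np.einsum('...i,...i', a, np.cross(b, c))) / 6.0

def region_space_volume(depth, mask, fx, fy, cx, cy):
    """Sum tetrahedral volumes between the camera and the surface inside `mask`.

    Each 2x2 pixel quad on the food surface is split into two triangles; each
    triangle together with the camera origin forms one tetrahedron.
    """
    pts = backproject(depth, fx, fy, cx, cy)
    p00 = pts[:-1, :-1]; p01 = pts[:-1, 1:]
    p10 = pts[1:, :-1];  p11 = pts[1:, 1:]
    quad_valid = mask[:-1, :-1] & mask[:-1, 1:] & mask[1:, :-1] & mask[1:, 1:]
    vol = tetra_volume(p00, p01, p11) + tetra_volume(p00, p11, p10)
    return float(np.sum(vol[quad_valid]))

# Hypothetical usage: after eating, the food surface sits farther from the
# camera, so the post-meal space volume is larger; the consumed volume is
# then the difference of the two volumes (the paper states only that the
# intake is the difference, not the sign convention).
# intake = (region_space_volume(depth_post, mask, fx, fy, cx, cy)
#           - region_space_volume(depth_pre, mask, fx, fy, cx, cy))
```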