IEEE Trans Image Process. 2022;31:5214-5226. doi: 10.1109/TIP.2022.3193763. Epub 2022 Aug 4.
Recognizing the category and ingredient composition of a dish from food images facilitates automatic nutrition estimation, which is crucial to various health-relevant applications such as nutrition intake management and healthy diet recommendation. Since food is composed of ingredients, discovering ingredient-relevant visual regions can help identify the corresponding category and ingredients. Furthermore, various ingredient relationships, such as co-occurrence and exclusion, are also critical for this task. To this end, we propose an ingredient-oriented multi-task food category-ingredient joint learning framework for simultaneous food recognition and ingredient prediction. The framework mainly involves learning an ingredient dictionary for ingredient-relevant visual region discovery and building an ingredient-based semantic-visual graph for ingredient-relationship modeling. To obtain ingredient-relevant visual regions, we build an ingredient dictionary that captures multiple ingredient regions and yields the corresponding assignment map, and then pool the region features belonging to the same ingredient; this identifies ingredients more accurately while also improving classification performance. For ingredient-relationship modeling, we construct an ingredient graph that uses the visual ingredient representations as nodes and the semantic similarity between ingredient embeddings as edges, and then learn their relationships via a graph convolutional network, allowing label embeddings and visual features to interact and further improve performance. Finally, features fused from both the ingredient-oriented region features and the ingredient-relationship features are fed into the subsequent multi-task category-ingredient joint learning. Extensive evaluation on three popular benchmark datasets (ETH Food-101, Vireo Food-172, and ISIA Food-200) demonstrates the effectiveness of our method. Further visualization of ingredient assignment maps and attention maps also shows its superiority.
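To make the pipeline concrete, below is a minimal PyTorch sketch of the two components the abstract names: a learned ingredient dictionary that soft-assigns spatial features to ingredient-relevant regions and pools them, and a graph convolution over ingredient nodes whose edge weights come from semantic embedding similarity. All module names (IngredientDictionaryPooling, IngredientGCN, CategoryIngredientHead), dimensions, the symmetric adjacency normalization, and the additive fusion are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of the framework's two components; names, dimensions,
# and fusion scheme are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IngredientDictionaryPooling(nn.Module):
    """Soft-assign each spatial location to a learned ingredient codeword,
    then pool features belonging to the same ingredient (assumed form)."""

    def __init__(self, feat_dim: int, num_ingredients: int):
        super().__init__()
        # One learnable codeword per ingredient acts as the "dictionary".
        self.dictionary = nn.Parameter(torch.randn(num_ingredients, feat_dim))

    def forward(self, feat_map: torch.Tensor):
        # feat_map: (B, C, H, W) backbone features.
        B, C, H, W = feat_map.shape
        x = feat_map.flatten(2).transpose(1, 2)            # (B, HW, C)
        # Assignment map: similarity of each location to each codeword.
        assign = (x @ self.dictionary.t()).softmax(dim=-1)  # (B, HW, K)
        # Average-pool region features per ingredient, weighted by assignment.
        pooled = assign.transpose(1, 2) @ x                 # (B, K, C)
        pooled = pooled / (assign.sum(1, keepdim=True).transpose(1, 2) + 1e-6)
        return pooled, assign.view(B, H, W, -1)


class IngredientGCN(nn.Module):
    """One graph-convolution layer over ingredient nodes; the adjacency is
    built from cosine similarity of semantic ingredient embeddings (the
    diagonal is 1, so self-loops are implicit)."""

    def __init__(self, in_dim: int, out_dim: int, sem_emb: torch.Tensor):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        # Edge weights from semantic similarity (e.g., word embeddings).
        e = F.normalize(sem_emb, dim=1)
        adj = (e @ e.t()).clamp(min=0)
        deg = adj.sum(dim=1).clamp(min=1e-6)
        # Symmetric normalization D^{-1/2} A D^{-1/2} (standard GCN choice).
        self.register_buffer("adj_norm", adj / deg.sqrt().outer(deg.sqrt()))

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (B, K, C) visual ingredient representations.
        return F.relu(self.adj_norm @ self.weight(nodes))


class CategoryIngredientHead(nn.Module):
    """Fuse region-pooled and relation-refined features for multi-task
    category classification and multi-label ingredient prediction."""

    def __init__(self, feat_dim, num_ingredients, num_categories, sem_emb):
        super().__init__()
        self.dict_pool = IngredientDictionaryPooling(feat_dim, num_ingredients)
        self.gcn = IngredientGCN(feat_dim, feat_dim, sem_emb)
        self.cls_head = nn.Linear(feat_dim, num_categories)
        self.ing_head = nn.Linear(feat_dim, 1)  # one logit per ingredient node

    def forward(self, feat_map):
        pooled, assign = self.dict_pool(feat_map)           # (B, K, C)
        relation = self.gcn(pooled)                         # (B, K, C)
        fused = pooled + relation                           # simple fusion (assumed)
        category_logits = self.cls_head(fused.mean(dim=1))    # food category
        ingredient_logits = self.ing_head(fused).squeeze(-1)  # (B, K) multi-label
        return category_logits, ingredient_logits, assign


# Example usage with placeholder dimensions (e.g., 353 ingredients and
# 172 categories roughly matching Vireo Food-172, 300-d word vectors):
sem_emb = torch.randn(353, 300)
model = CategoryIngredientHead(2048, 353, 172, sem_emb)
cat_logits, ing_logits, assign = model(torch.randn(2, 2048, 14, 14))
# Joint training would typically combine F.cross_entropy on cat_logits with
# F.binary_cross_entropy_with_logits on ing_logits; weighting is an assumption.
```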