Real-Time Food Intake Monitoring Using Wearable Egocentric Camera.

Author Information

Hossain Delwar, Imtiaz Masudul Haider, Ghosh Tonmoy, Bhaskar Viprav, Sazonov Edward

Publication Information

Annu Int Conf IEEE Eng Med Biol Soc. 2020 Jul;2020:4191-4195. doi: 10.1109/EMBC44109.2020.9175497.

DOI: 10.1109/EMBC44109.2020.9175497
PMID: 33018921
Abstract

With technological advancement, wearable egocentric camera systems have been studied extensively for developing food intake monitoring devices that assess eating behavior. This paper provides a detailed description of the implementation of a CNN-based image classifier on a Cortex-M7 microcontroller. The proposed network classifies images captured by the wearable egocentric camera as food or no-food images in real time. Such real-time food image detection can potentially make monitoring devices consume less power, require less storage, and be more user-friendly in terms of privacy, since only images detected as food images are saved. A derivative of a pre-trained MobileNet is trained to detect food images among the camera-captured images. The proposed network requires 761.99 KB of flash and 501.76 KB of RAM and is built for an optimal trade-off between accuracy, computational cost, and memory footprint on a Cortex-M7 microcontroller. The image classifier achieved an average precision of 82%±3% and an average F-score of 74%±2% when tested on 15343 images (2127 food images and 13216 no-food images) spanning five full days collected from five participants.
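To make the described architecture concrete, below is a minimal sketch, in TensorFlow/Keras, of a MobileNet-derived binary food/no-food classifier of the kind the abstract describes. It is not the authors' implementation: the width multiplier alpha=0.25, the 128x128 input resolution, and the post-training quantization step are illustrative assumptions, since the abstract does not detail the exact MobileNet derivative or conversion tool chain.

```python
# A minimal sketch, assuming TensorFlow/Keras, of a MobileNet-derived
# binary food/no-food classifier. alpha=0.25 and the 128x128 input are
# illustrative choices aimed at a small flash/RAM footprint; the paper's
# exact derivative is not specified in the abstract.
import tensorflow as tf

def build_food_classifier(input_size=128, alpha=0.25):
    # A width multiplier alpha < 1 shrinks every layer's channel count,
    # trading accuracy for memory -- the trade-off the paper targets.
    base = tf.keras.applications.MobileNet(
        input_shape=(input_size, input_size, 3),
        alpha=alpha,
        include_top=False,
        weights="imagenet",   # start from pre-trained features
        pooling="avg",
    )
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    return tf.keras.Model(base.input, out)

model = build_food_classifier()
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision()])

# Post-training quantization is one common route to a Cortex-M7
# deployment (e.g. via TFLite Micro / CMSIS-NN); the paper's actual
# conversion pipeline may differ.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```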

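Note that with only 2127 of the 15343 test images being food images, precision and F-score are more informative than raw accuracy. Assuming the reported F-score is the standard F1 = 2PR/(P+R), the unreported recall can be backed out of the two reported averages, as a quick sanity check:

```python
# Back-of-envelope check on the reported metrics, assuming "F-score"
# means the standard F1 = 2PR/(P+R). Recall is not reported in the
# abstract, so we solve for the recall implied by P and F1.
P, F1 = 0.82, 0.74
R = F1 * P / (2 * P - F1)           # rearranged from F1 = 2*P*R/(P+R)
print(f"implied recall ~ {R:.3f}")  # ~0.674 on the 15343-image test set
```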

Similar Articles

1. Real-Time Food Intake Monitoring Using Wearable Egocentric Camera.
   Annu Int Conf IEEE Eng Med Biol Soc. 2020 Jul;2020:4191-4195. doi: 10.1109/EMBC44109.2020.9175497.
2. Food Detection and Segmentation from Egocentric Camera Images.
   Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:2736-2740. doi: 10.1109/EMBC46164.2021.9630823.
3. "Automatic Ingestion Monitor Version 2" - A Novel Wearable Device for Automatic Food Intake Detection and Passive Capture of Food Images.
   IEEE J Biomed Health Inform. 2021 Feb;25(2):568-576. doi: 10.1109/JBHI.2020.2995473. Epub 2021 Feb 5.
4. Selective Content Removal for Egocentric Wearable Camera in Nutritional Studies.
   IEEE Access. 2020;8:198615-198623. doi: 10.1109/access.2020.3030723. Epub 2020 Oct 13.
5. Egocentric Image Captioning for Privacy-Preserved Passive Dietary Intake Monitoring.
   IEEE Trans Cybern. 2024 Feb;54(2):679-692. doi: 10.1109/TCYB.2023.3243999. Epub 2024 Jan 17.
6. Counting Bites With Bits: Expert Workshop Addressing Calorie and Macronutrient Intake Monitoring.
   J Med Internet Res. 2019 Dec 4;21(12):e14904. doi: 10.2196/14904.
7. A pilot study to determine whether using a lightweight, wearable micro-camera improves dietary assessment accuracy and offers information on macronutrients and eating rate.
   Br J Nutr. 2016 Jan 14;115(1):160-7. doi: 10.1017/S0007114515004262. Epub 2015 Nov 5.
8. Using wearable cameras to monitor eating and drinking behaviours during transport journeys.
   Eur J Nutr. 2021 Jun;60(4):1875-1885. doi: 10.1007/s00394-020-02380-4. Epub 2020 Sep 4.
9. Orientation-Based Food Image Capture for Head Mounted Egocentric Camera.
   Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul;2019:7145-7148. doi: 10.1109/EMBC.2019.8857078.
10. Wearable Egocentric Camera as a Monitoring Tool of Free-Living Cigarette Smoking: A Feasibility Study.
    Nicotine Tob Res. 2020 Oct 8;22(10):1883-1890. doi: 10.1093/ntr/ntz208.

Cited By

1. Vision-Based Methods for Food and Fluid Intake Monitoring: A Literature Review.
   Sensors (Basel). 2023 Jul 4;23(13):6137. doi: 10.3390/s23136137.
2. Capturing children food exposure using wearable cameras and deep learning.
   PLOS Digit Health. 2023 Mar 27;2(3):e0000211. doi: 10.1371/journal.pdig.0000211. eCollection 2023 Mar.
3. Social Environmental Predictors of Lapse in Dietary Behavior: An Ecological Momentary Assessment Study Amongst Dutch Adults Trying to Lose Weight.
   Ann Behav Med. 2023 Jul 19;57(8):620-629. doi: 10.1093/abm/kaac077.
4. Fight Fire with Fire: Detecting Forest Fires with Embedded Machine Learning Models Dealing with Audio and Images on Low Power IoT Devices.
   Sensors (Basel). 2023 Jan 10;23(2):783. doi: 10.3390/s23020783.
5. Machine learning modeling practices to support the principles of AI and ethics in nutrition research.
   Nutr Diabetes. 2022 Dec 2;12(1):48. doi: 10.1038/s41387-022-00226-y.
6. Thought on Food: A Systematic Review of Current Approaches and Challenges for Food Intake Detection.
   Sensors (Basel). 2022 Aug 26;22(17):6443. doi: 10.3390/s22176443.
7. Food/Non-Food Classification of Real-Life Egocentric Images in Low- and Middle-Income Countries Based on Image Tagging Features.
   Front Artif Intell. 2021 Apr 1;4:644712. doi: 10.3389/frai.2021.644712. eCollection 2021.