Hassan M A, Sazonov E
Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL, 35401 USA.
IEEE Access. 2020;8:198615-198623. doi: 10.1109/access.2020.3030723. Epub 2020 Oct 13.
The Automatic Ingestion Monitor v2 (AIM-2) is an egocentric camera and sensor that aids monitoring of individual diet and eating behavior by capturing still images throughout the day and using sensor data to detect eating. The images may be used to recognize the foods being eaten, the eating environment, and other behaviors and daily activities. At the same time, captured images may carry privacy-sensitive content such as (1) people present during social eating and/or bystanders (i.e., bystander privacy); (2) sensitive documents that may appear on a computer screen in the view of AIM-2 (i.e., context privacy). In this paper, we propose a novel approach for privacy protection based on automatic image redaction, in which privacy-sensitive content is selectively removed by semantic segmentation using a deep learning neural network. The proposed method achieved bystander privacy removal with a precision of 0.87 and a recall of 0.94, and context privacy removal with a precision of 0.97 and a recall of 0.98. The results of the study showed that selective content removal using a deep learning neural network is a much more desirable approach to addressing privacy concerns for an egocentric wearable camera in nutritional studies.
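A minimal sketch of the general idea (selective redaction driven by a semantic-segmentation mask) is shown below. This is not the authors' AIM-2 pipeline or trained network; it assumes an off-the-shelf torchvision DeepLabV3 model, uses the Pascal VOC classes "person" and "tv/monitor" as rough stand-ins for the bystander- and context-privacy categories, and the function name redact_image is hypothetical.

```python
# Illustrative sketch only: selective content removal by zeroing out pixels
# that a pretrained semantic-segmentation model assigns to privacy-related
# classes. Class indices follow torchvision's Pascal VOC label set.
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

PERSON, TV_MONITOR = 15, 20  # VOC indices used here as stand-ins for
                             # bystander privacy and context privacy

model = deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def redact_image(in_path: str, out_path: str) -> None:
    """Black out pixels predicted as person or screen-like objects."""
    img = Image.open(in_path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))["out"][0]  # C x H x W
    labels = logits.argmax(dim=0)                               # H x W class map
    mask = ((labels == PERSON) | (labels == TV_MONITOR)).numpy()
    arr = np.array(img)
    arr[mask] = 0  # remove only the privacy-sensitive pixels, keep the rest
    Image.fromarray(arr).save(out_path)
```

In this kind of design, only the pixels belonging to privacy-related classes are removed, so food items, utensils, and the eating environment remain visible for dietary annotation, which is the property the paper's selective-removal approach is evaluated on.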