
A Framework for Analyzing the Whole Body Surface Area from a Single View.

Authors

Piccirilli Marco, Doretto Gianfranco, Adjeroh Donald

Affiliations

Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506, United States of America.

Publication Information

PLoS One. 2017 Jan 3;12(1):e0166749. doi: 10.1371/journal.pone.0166749. eCollection 2017.

Abstract

We present a virtual reality (VR) framework for the analysis of whole human body surface area. The usual methods for determining the whole body surface area (WBSA) are based on well-known formulae, characterized by large errors when the subject is obese or belongs to certain subgroups. For these situations, we believe that a computer vision approach can overcome these problems and provide a better estimate of this important body indicator. Unfortunately, using machine learning techniques to design a computer vision system able to provide a new body indicator that goes beyond the use of only body weight and height entails a long and expensive data acquisition process. A more viable solution is to use a dataset composed of virtual subjects. Generating a virtual dataset allowed us to build a population with different characteristics (obese, underweight, age, gender). However, synthetic data might differ from a real scenario, typical of the physician's clinic. For this reason, we develop a new virtual environment to facilitate the analysis of human subjects in 3D. This framework can simulate the acquisition process of a real camera, making it easy to analyze subjects and to create training data for machine learning algorithms. With this virtual environment, we can easily simulate the real setup of a clinic, where a subject is standing in front of a camera or may assume a different pose with respect to the camera. We use this newly designed environment to analyze the whole body surface area (WBSA). In particular, we show that we can obtain accurate WBSA estimations with just one view, effectively enabling the use of inexpensive depth sensors (e.g., the Kinect) for large-scale quantification of the WBSA from a single-view 3D map.
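For context on the "well-known formulae" the abstract refers to, the classical height-and-weight formulae include the Du Bois formula. The abstract does not name a specific formula, so the sketch below is only an illustrative example of this class of baseline estimators, not a quotation of the paper's method:

```python
def wbsa_du_bois(weight_kg: float, height_cm: float) -> float:
    """Whole body surface area (m^2) from the classical Du Bois formula:
    BSA = 0.007184 * weight^0.425 * height^0.725 (weight in kg, height in cm).
    Formula-based estimates of this kind use only weight and height, which is
    why they can be inaccurate for obese subjects or certain subgroups."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)


# Example: a 70 kg, 175 cm adult gives roughly 1.85 m^2.
print(wbsa_du_bois(70, 175))
```

The single-view claim can also be made concrete geometrically. A depth sensor such as the Kinect yields a partial 3D map of only the camera-facing side of the body; the sketch below shows one standard way to compute the visible surface area of such a map by back-projecting pixels through assumed camera intrinsics (fx, fy, cx, cy) and summing triangle areas. This is a generic geometric illustration of what a "single view 3D map" provides, not the authors' learning-based WBSA pipeline, which additionally estimates the full-body area from that partial view:

```python
import numpy as np

def visible_area_from_depth(depth: np.ndarray, fx: float, fy: float,
                            cx: float, cy: float) -> float:
    """Area (m^2) of the surface visible in a depth map (metres), obtained by
    back-projecting each pixel to 3D and summing the areas of the two
    triangles spanned by every 2x2 pixel quad. Pixels with depth 0 are
    treated as missing and their quads are ignored."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1)               # (h, w, 3) point cloud

    # Corners of each pixel quad.
    p00, p10 = pts[:-1, :-1], pts[1:, :-1]
    p01, p11 = pts[:-1, 1:], pts[1:, 1:]

    def tri_area(a, b, c):
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=-1)

    valid = (z[:-1, :-1] > 0) & (z[1:, :-1] > 0) & (z[:-1, 1:] > 0) & (z[1:, 1:] > 0)
    return float(np.sum((tri_area(p00, p10, p01) + tri_area(p10, p11, p01)) * valid))
```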


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/952d/5207503/7540b9069f31/pone.0166749.g001.jpg
