Training Computers to See the Built Environment Related to Physical Activity: Detection of Microscale Walkability Features Using Computer Vision.

Affiliations

College of Health Solutions, Arizona State University, Phoenix, AZ 85004, USA.

Department of Psychology, Clemson University, Clemson, SC 29634, USA.

Publication Information

Int J Environ Res Public Health. 2022 Apr 9;19(8):4548. doi: 10.3390/ijerph19084548.

Abstract

The study purpose was to train and validate a deep learning approach to detect microscale streetscape features related to pedestrian physical activity. This work innovates by combining computer vision techniques with Google Street View (GSV) images to overcome impediments to conducting audits (e.g., time, safety, and expert labor cost). The EfficientNETB5 architecture was used to build deep learning models for eight microscale features guided by the Microscale Audit of Pedestrian Streetscapes Mini tool: sidewalks, sidewalk buffers, curb cuts, zebra and line crosswalks, walk signals, bike symbols, and streetlights. We used a train-correct loop, whereby models were trained on a training dataset of images, evaluated using a separate validation dataset, and trained further until acceptable performance metrics were achieved. Further, we used trained models to audit participant (N = 512) neighborhoods in the WalkIT Arizona trial. Correlations were explored between microscale features and GIS-measured and participant-reported neighborhood macroscale walkability. Classifier precision, recall, and overall accuracy were all above 84%. The total microscale score was associated with overall macroscale walkability (r = 0.30, p < 0.001). Positive associations were found between model-detected and self-reported sidewalks (r = 0.41, p < 0.001) and sidewalk buffers (r = 0.26, p < 0.001). The computer vision model results suggest an alternative to trained human raters, allowing for audits of hundreds or thousands of neighborhoods for population surveillance or hypothesis testing.
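The abstract does not include the authors' training code, so the sketch below is a minimal, hypothetical illustration (assuming PyTorch and torchvision, which the paper does not specify) of how an ImageNet-pretrained EfficientNet-B5 could be fine-tuned as a binary classifier for a single microscale feature, such as sidewalk presence, in GSV images. Directory names, image size, epoch count, and other hyperparameters are placeholders, and the "train-correct" relabeling step is only indicated in comments.

```python
# Minimal sketch (assumes PyTorch + torchvision); not the authors' released code.
# Fine-tunes an ImageNet-pretrained EfficientNet-B5 as a binary classifier for one
# microscale feature (e.g., "sidewalk present") in Google Street View images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing; the 456x456 input size is illustrative.
preprocess = transforms.Compose([
    transforms.Resize((456, 456)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: gsv_sidewalk/{train,val}/{absent,present}/*.jpg
train_ds = datasets.ImageFolder("gsv_sidewalk/train", transform=preprocess)
val_ds = datasets.ImageFolder("gsv_sidewalk/val", transform=preprocess)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True, num_workers=4)
val_dl = DataLoader(val_ds, batch_size=16, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"

# EfficientNet-B5 backbone with a new 2-class head (feature absent / present).
model = models.efficientnet_b5(weights=models.EfficientNet_B5_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for epoch in range(5):  # epoch count is illustrative
    model.train()
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Validation metrics would drive the train-correct loop described in the
    # abstract: misclassified images are reviewed, relabeled if needed, and the
    # model is retrained until precision/recall/accuracy are acceptable.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_dl:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.numel()
    print(f"epoch {epoch}: val accuracy = {correct / total:.3f}")
```

In the study's workflow, one such classifier per microscale feature would then be applied to GSV images sampled across each participant's neighborhood, and the resulting feature detections correlated (Pearson r) with GIS-measured and self-reported macroscale walkability.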


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ae2c/9028816/b5b5277456db/ijerph-19-04548-g001.jpg
