Deep Human Parsing with Active Template Regression.

Publication Info

IEEE Trans Pattern Anal Mach Intell. 2015 Dec;37(12):2402-14. doi: 10.1109/TPAMI.2015.2408360.

DOI: 10.1109/TPAMI.2015.2408360
PMID: 26539846
Abstract

In this work, the human parsing task, namely decomposing a human image into semantic fashion/body regions, is formulated as an active template regression (ATR) problem, where the normalized mask of each fashion/body item is expressed as the linear combination of the learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. The deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks. The first CNN network is with max-pooling, and designed to predict the template coefficients for each label mask, while the second CNN network is without max-pooling to preserve sensitivity to label mask position and accurately predict the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset well demonstrate the significant superiority of the ATR framework over other state-of-the-arts for human parsing. In particular, the F1-score reaches 64.38 percent by our ATR framework, significantly higher than 44.76 percent based on the state-of-the-art algorithm [28].
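The abstract describes each label mask as a linear combination of learned templates, morphed by active shape parameters (position, scale, visibility). A minimal sketch of that reconstruction step is below; the function name, the nearest-neighbour morphing, and the visibility threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def atr_mask(templates, coeffs, position, scale, visibility):
    """Hypothetical sketch: reconstruct one label mask from template
    coefficients and active shape parameters, as the abstract describes."""
    # Linear combination of learned mask templates -> normalized mask (H, W)
    m = np.tensordot(coeffs, templates, axes=(0, 0))
    if visibility < 0.5:
        # Item judged absent: emit an empty mask
        return np.zeros_like(m)
    # Morph the normalized mask: scale and translate via
    # nearest-neighbour index remapping (a simplifying assumption)
    H, W = m.shape
    ys = np.clip(((np.arange(H) - position[0]) / scale).astype(int), 0, H - 1)
    xs = np.clip(((np.arange(W) - position[1]) / scale).astype(int), 0, W - 1)
    return m[np.ix_(ys, xs)]

# Toy example: two 4x4 templates mixed with equal coefficients
templates = np.stack([np.eye(4), np.ones((4, 4))])
mask = atr_mask(templates, np.array([0.5, 0.5]),
                position=(0, 0), scale=1.0, visibility=1.0)
print(mask.shape)  # → (4, 4)
```

In the paper's pipeline, the coefficients come from the first (max-pooling) CNN and the shape parameters from the second (pooling-free) CNN; the per-label masks are then fused into per-pixel label probabilities and refined with superpixel smoothing.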


Similar Articles

1
Deep Human Parsing with Active Template Regression.
IEEE Trans Pattern Anal Mach Intell. 2015 Dec;37(12):2402-14. doi: 10.1109/TPAMI.2015.2408360.
2
Human Parsing with Contextualized Convolutional Neural Network.
IEEE Trans Pattern Anal Mach Intell. 2017 Jan;39(1):115-127. doi: 10.1109/TPAMI.2016.2537339. Epub 2016 Mar 2.
3
Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments.
IEEE Trans Pattern Anal Mach Intell. 2014 Jul;36(7):1325-39. doi: 10.1109/TPAMI.2013.248.
4
Bodypart Recognition Using Multi-stage Deep Learning.
Inf Process Med Imaging. 2015;24:449-61. doi: 10.1007/978-3-319-19992-4_35.
5
Tracking people on a torus.
IEEE Trans Pattern Anal Mach Intell. 2009 Mar;31(3):520-38. doi: 10.1109/TPAMI.2008.101.
6
Automatic construction of active appearance models as an image coding problem.
IEEE Trans Pattern Anal Mach Intell. 2004 Oct;26(10):1380-4. doi: 10.1109/TPAMI.2004.77.
7
Learning Actionlet Ensemble for 3D Human Action Recognition.
IEEE Trans Pattern Anal Mach Intell. 2014 May;36(5):914-27. doi: 10.1109/TPAMI.2013.198.
8
Clutter invariant ATR.
IEEE Trans Pattern Anal Mach Intell. 2005 May;27(5):817-21. doi: 10.1109/TPAMI.2005.97.
9
Human identification using temporal information preserving gait template.
IEEE Trans Pattern Anal Mach Intell. 2012 Nov;34(11):2164-76. doi: 10.1109/TPAMI.2011.260.
10
Silhouette analysis for human action recognition based on supervised temporal t-SNE and incremental learning.
IEEE Trans Image Process. 2015 Oct;24(10):3203-17. doi: 10.1109/TIP.2015.2441634.

Cited By

1
A Universal Decoupled Training Framework for Human Parsing.
Sensors (Basel). 2022 Aug 9;22(16):5964. doi: 10.3390/s22165964.