

Agrast-6: Abridged VGG-Based Reflected Lightweight Architecture for Binary Segmentation of Depth Images Captured by Kinect.

Affiliations

Faculty of Informatics, Kaunas University of Technology, 44249 Kaunas, Lithuania.

Publication

Sensors (Basel). 2022 Aug 24;22(17):6354. doi: 10.3390/s22176354.

DOI:10.3390/s22176354
PMID:36080813
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9460068/
Abstract

Binary object segmentation is a sub-area of semantic segmentation that could be used for a variety of applications. Semantic segmentation models could be applied to solve binary segmentation problems by introducing only two classes, but the models to solve this problem are more complex than actually required. This leads to very long training times, since there are usually tens of millions of parameters to learn in this category of convolutional neural networks (CNNs). This article introduces a novel abridged VGG-16 and SegNet-inspired reflected architecture adapted for binary segmentation tasks. The architecture has 27 times fewer parameters than SegNet but yields 86% segmentation cross-intersection accuracy and 93% binary accuracy. The proposed architecture is evaluated on a large dataset of depth images collected using the Kinect device, achieving an accuracy of 99.25% in human body shape segmentation and 87% in gender recognition tasks.
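The abstract's headline claim is the parameter reduction relative to SegNet. As a rough illustration of where those savings come from, the sketch below counts 3x3 convolution parameters for a VGG-style encoder and a narrower, shallower variant. The channel widths used here are hypothetical, not the actual Agrast-6 configuration, and the sketch uses two convolutions per stage rather than VGG-16's exact 2-2-3-3-3 layout; it only shows how truncating depth and channel width shrinks the parameter count by more than an order of magnitude.

```python
def conv2d_params(c_in, c_out, k=3):
    """Parameters of one k x k convolution layer (weights + biases)."""
    return k * k * c_in * c_out + c_out

def encoder_params(widths, in_ch=1):
    """Total conv parameters of a simplified VGG-style encoder:
    each stage applies two 3x3 convolutions at the stage width."""
    total, c = 0, in_ch
    for w in widths:
        total += conv2d_params(c, w) + conv2d_params(w, w)
        c = w
    return total

# VGG-16-like channel widths vs a hypothetical abridged variant
# (single-channel depth-image input in both cases).
full = encoder_params([64, 128, 256, 512, 512])      # 9,403,840 params
abridged = encoder_params([32, 64, 128])             # 286,432 params
print(full, abridged, round(full / abridged, 1))     # ratio ~33x here
```

On these illustrative widths the reduction exceeds the roughly 27x reported for Agrast-6 versus SegNet; the exact figure depends on the real layer configuration, which the abstract does not give.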


Figure images (g001–g009):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/815d/9460068/8000c7fe91e4/sensors-22-06354-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/815d/9460068/cca8e112db80/sensors-22-06354-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/815d/9460068/72d4af800249/sensors-22-06354-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/815d/9460068/2d8cab3bbdff/sensors-22-06354-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/815d/9460068/0e54f5f72519/sensors-22-06354-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/815d/9460068/3f086dcc5bc7/sensors-22-06354-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/815d/9460068/d5cb5c715361/sensors-22-06354-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/815d/9460068/025c7d51ac25/sensors-22-06354-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/815d/9460068/2dd6061f9593/sensors-22-06354-g009.jpg

Similar articles

1. Agrast-6: Abridged VGG-Based Reflected Lightweight Architecture for Binary Segmentation of Depth Images Captured by Kinect. Sensors (Basel). 2022 Aug 24;22(17):6354. doi: 10.3390/s22176354.
2. Depth Density Achieves a Better Result for Semantic Segmentation with the Kinect System. Sensors (Basel). 2020 Feb 3;20(3):812. doi: 10.3390/s20030812.
3. A comparative study of pre-trained convolutional neural networks for semantic segmentation of breast tumors in ultrasound. Comput Biol Med. 2020 Nov;126:104036. doi: 10.1016/j.compbiomed.2020.104036. Epub 2020 Oct 8.
4. VGG-UNet/VGG-SegNet Supported Automatic Segmentation of Endoplasmic Reticulum Network in Fluorescence Microscopy Images. Scanning. 2022 Jun 8;2022:7733860. doi: 10.1155/2022/7733860. eCollection 2022.
5. A Lightweight Semantic Segmentation Algorithm Based on Deep Convolutional Neural Networks. Comput Intell Neurosci. 2022 Sep 6;2022:5339664. doi: 10.1155/2022/5339664. eCollection 2022.
6. Multi-Scale Squeeze U-SegNet with Multi Global Attention for Brain MRI Segmentation. Sensors (Basel). 2021 May 12;21(10):3363. doi: 10.3390/s21103363.
7. Automated polyp segmentation for colonoscopy images: A method based on convolutional neural networks and ensemble learning. Med Phys. 2019 Dec;46(12):5666-5676. doi: 10.1002/mp.13865. Epub 2019 Oct 31.
8. Hand and Object Segmentation from Depth Image using Fully Convolutional Network. Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul;2019:2082-2086. doi: 10.1109/EMBC.2019.8857700.
9. Improving Depth Estimation by Embedding Semantic Segmentation: A Hybrid CNN Model. Sensors (Basel). 2022 Feb 21;22(4):1669. doi: 10.3390/s22041669.
10. Glomerulosclerosis identification in whole slide images using semantic segmentation. Comput Methods Programs Biomed. 2020 Feb;184:105273. doi: 10.1016/j.cmpb.2019.105273. Epub 2019 Dec 19.

References cited in this article

1. Computer-Aided Depth Video Stream Masking Framework for Human Body Segmentation in Depth Sensor Images. Sensors (Basel). 2022 May 6;22(9):3531. doi: 10.3390/s22093531.
2. Image encryption scheme based on alternate quantum walks and discrete cosine transform. Opt Express. 2021 Aug 30;29(18):28338-28351. doi: 10.1364/OE.431945.
3. HUMANNET-A Two-Tiered Deep Neural Network Architecture for Self-Occluding Humanoid Pose Reconstruction. Sensors (Basel). 2021 Jun 8;21(12):3945. doi: 10.3390/s21123945.
4. SoftSeg: Advantages of soft versus binary training for image segmentation. Med Image Anal. 2021 Jul;71:102038. doi: 10.1016/j.media.2021.102038. Epub 2021 Mar 18.
5. Modified U-Net (mU-Net) With Incorporation of Object-Dependent High Level Features for Improved Liver and Liver-Tumor Segmentation in CT Images. IEEE Trans Med Imaging. 2020 May;39(5):1316-1325. doi: 10.1109/TMI.2019.2948320. Epub 2019 Oct 18.
6. Unifying obstacle detection, recognition, and fusion based on millimeter wave radar and RGB-depth sensors for the visually impaired. Rev Sci Instrum. 2019 Apr;90(4):044102. doi: 10.1063/1.5093279.
7. Full-body motion assessment: Concurrent validation of two body tracking depth sensors versus a gold standard system during gait. J Biomech. 2019 Apr 18;87:189-196. doi: 10.1016/j.jbiomech.2019.03.008. Epub 2019 Mar 18.
8. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans Pattern Anal Mach Intell. 2017 Dec;39(12):2481-2495. doi: 10.1109/TPAMI.2016.2644615. Epub 2017 Jan 2.
9. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans Pattern Anal Mach Intell. 2015 Sep;37(9):1904-16. doi: 10.1109/TPAMI.2015.2389824.
10. Design and Evaluation of an Interactive Exercise Coaching System for Older Adults: Lessons Learned. IEEE J Biomed Health Inform. 2016 Jan;20(1):201-12. doi: 10.1109/JBHI.2015.2391671. Epub 2015 Jan 13.