

Reconstructing Superquadrics from Intensity and Color Images.

Affiliations

Faculty of Computer and Information Science, University of Ljubljana, 1000 Ljubljana, Slovenia.

Faculty of Electrical Engineering, University of Ljubljana, 1000 Ljubljana, Slovenia.

Publication

Sensors (Basel). 2022 Jul 16;22(14):5332. doi: 10.3390/s22145332.

DOI: 10.3390/s22145332
PMID: 35891011
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9319097/
Abstract

The task of reconstructing 3D scenes based on visual data represents a longstanding problem in computer vision. Common reconstruction approaches rely on the use of multiple volumetric primitives to describe complex objects. Superquadrics (a class of volumetric primitives) have shown great promise due to their ability to describe various shapes with only a few parameters. Recent research has shown that deep learning methods can be used to accurately reconstruct random superquadrics from both 3D point cloud data and simple depth images. In this paper, we extended these reconstruction methods to intensity and color images. Specifically, we used a dedicated convolutional neural network (CNN) model to reconstruct a single superquadric from the given input image. We analyzed the results in a qualitative and quantitative manner, by visualizing reconstructed superquadrics as well as observing error and accuracy distributions of predictions. We showed that a CNN model designed around a simple ResNet backbone can be used to accurately reconstruct superquadrics from images containing one object, but only if one of the spatial parameters is fixed or if it can be determined from other image characteristics, e.g., shadows. Furthermore, we experimented with images of increasing complexity, for example, by adding textures, and observed that the results degraded only slightly. In addition, we show that our model outperforms the current state-of-the-art method on the studied task. Our final result is a highly accurate superquadric reconstruction model, which can also reconstruct superquadrics from real images of simple objects, without additional training.
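The abstract describes predicting the parameters of a superquadric from a single image. As background, the shape family itself is conventionally defined by Barr's inside-outside function. The sketch below is a minimal, self-contained illustration of that standard formulation, not the paper's own code; the parameter names `a1, a2, a3` (axis sizes) and `e1, e2` (shape exponents) follow the usual superquadric convention and may differ from the paper's notation.

```python
def superquadric_F(p, a1, a2, a3, e1, e2):
    """Barr's inside-outside function for a superquadric in canonical pose.

    a1, a2, a3: half-axis sizes along x, y, z.
    e1, e2:     shape exponents (e1 = e2 = 1 gives an ellipsoid).

    Returns F(p): F < 1 means p is inside the surface, F == 1 means p lies
    on the surface, and F > 1 means p is outside.
    """
    x, y, z = p
    # Absolute values keep the fractional exponents real for all octants.
    xy_term = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    z_term = abs(z / a3) ** (2.0 / e1)
    return xy_term + z_term


# Unit sphere case: (1, 0, 0) lies exactly on the surface.
print(superquadric_F((1.0, 0.0, 0.0), 1, 1, 1, 1, 1))
```

A reconstruction model such as the one in the paper regresses these size and shape parameters (plus position, which the authors note must be partially fixed or inferable from cues like shadows) rather than evaluating this function directly.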


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/5fc58551692c/sensors-22-05332-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/98f2a97acfa6/sensors-22-05332-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/fb819e27b30c/sensors-22-05332-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/d584d696fe9b/sensors-22-05332-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/9987dd8b7a4a/sensors-22-05332-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/e873fcfe93ac/sensors-22-05332-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/9372a93136ac/sensors-22-05332-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/f8ab494d3022/sensors-22-05332-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/1b0c52a98ea2/sensors-22-05332-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/a3b65bfe00f1/sensors-22-05332-g009a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/29b93f46e710/sensors-22-05332-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/7f535856ca37/sensors-22-05332-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/bc7f3dd2ca5a/sensors-22-05332-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/5fc58551692c/sensors-22-05332-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/98f2a97acfa6/sensors-22-05332-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/fb819e27b30c/sensors-22-05332-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/d584d696fe9b/sensors-22-05332-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/9987dd8b7a4a/sensors-22-05332-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/e873fcfe93ac/sensors-22-05332-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/9372a93136ac/sensors-22-05332-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/f8ab494d3022/sensors-22-05332-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/1b0c52a98ea2/sensors-22-05332-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/a3b65bfe00f1/sensors-22-05332-g009a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/29b93f46e710/sensors-22-05332-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/7f535856ca37/sensors-22-05332-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/bc7f3dd2ca5a/sensors-22-05332-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/9319097/5fc58551692c/sensors-22-05332-g013.jpg

Similar Articles

1
1
Reconstructing Superquadrics from Intensity and Color Images.
Sensors (Basel). 2022 Jul 16;22(14):5332. doi: 10.3390/s22145332.
2
Accurate Hand Detection from Single-Color Images by Reconstructing Hand Appearances.
Sensors (Basel). 2019 Dec 29;20(1):192. doi: 10.3390/s20010192.
3
Image-Based 3D Object Reconstruction: State-of-the-Art and Trends in the Deep Learning Era.
IEEE Trans Pattern Anal Mach Intell. 2021 May;43(5):1578-1604. doi: 10.1109/TPAMI.2019.2954885. Epub 2021 Apr 1.
4
DeepOrganNet: On-the-Fly Reconstruction and Visualization of 3D / 4D Lung Models from Single-View Projections by Deep Deformation Network.
IEEE Trans Vis Comput Graph. 2020 Jan;26(1):960-970. doi: 10.1109/TVCG.2019.2934369. Epub 2019 Aug 22.
5
Three-dimensional convolutional neural networks for simultaneous dual-tracer PET imaging.
Phys Med Biol. 2019 Sep 19;64(18):185016. doi: 10.1088/1361-6560/ab3103.
6
Histopathological image recognition of breast cancer based on three-channel reconstructed color slice feature fusion.
Biochem Biophys Res Commun. 2022 Sep 3;619:159-165. doi: 10.1016/j.bbrc.2022.06.004. Epub 2022 Jun 11.
7
VBNet: An end-to-end 3D neural network for vessel bifurcation point detection in mesoscopic brain images.
Comput Methods Programs Biomed. 2022 Feb;214:106567. doi: 10.1016/j.cmpb.2021.106567. Epub 2021 Dec 2.
8
Synthesizing images from multiple kernels using a deep convolutional neural network.
Med Phys. 2020 Feb;47(2):422-430. doi: 10.1002/mp.13918. Epub 2019 Dec 29.
9
Short-wave infrared polarimetric image reconstruction using a deep convolutional neural network based on a high-frequency correlation.
Appl Opt. 2022 Aug 20;61(24):7163-7172. doi: 10.1364/AO.460752.
10
Automatic recognition of holistic functional brain networks using iteratively optimized convolutional neural networks (IO-CNN) with weak label initialization.
Med Image Anal. 2018 Jul;47:111-126. doi: 10.1016/j.media.2018.04.002.

Cited By

1
Dimensioning Cuboid and Cylindrical Objects Using Only Noisy and Partially Observed Time-of-Flight Data.
Sensors (Basel). 2023 Oct 24;23(21):8673. doi: 10.3390/s23218673.

References

1
Learning ambidextrous robot grasping policies.
Sci Robot. 2019 Jan 16;4(26). doi: 10.1126/scirobotics.aau4984.
2
A CNN Regression Approach for Real-Time 2D/3D Registration.
IEEE Trans Med Imaging. 2016 May;35(5):1352-1363. doi: 10.1109/TMI.2016.2521800. Epub 2016 Jan 26.