
Landmark-Based Shape Encoding and Sparse-Dictionary Learning in the Continuous Domain.

Publication Information

IEEE Trans Image Process. 2018 Jan;27(1):365-378. doi: 10.1109/TIP.2017.2762582. Epub 2017 Oct 12.

DOI: 10.1109/TIP.2017.2762582
PMID: 29028193
Abstract

We provide a generic framework to learn shape dictionaries of landmark-based curves that are defined in the continuous domain. We first present an unbiased alignment method that involves the construction of a mean shape as well as training sets whose elements are subspaces that contain all affine transformations of the training samples. The alignment relies on orthogonal projection operators that have a closed form. We then present algorithms to learn shape dictionaries according to the structure of the data that needs to be encoded: 1) projection-based functional principal-component analysis for homogeneous data and 2) continuous-domain sparse shape encoding to learn dictionaries that contain imbalanced data, outliers, or different types of shape structures. Through parametric spline curves, we provide a detailed and exact implementation of our method. We demonstrate that it requires fewer parameters than purely discrete methods and that it is computationally more efficient and accurate. We illustrate the use of our framework for dictionary learning of structures in biomedical images as well as for shape analysis in bioimaging.
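The pipeline the abstract describes (align landmark shapes, build a mean shape, then learn a small dictionary of shape modes) can be illustrated with a discrete toy sketch. This is only an illustrative analogue, not the paper's method: the paper works in the continuous domain with spline curves and closed-form orthogonal projectors, whereas the sketch below uses crude centering/normalization and plain SVD-based PCA on synthetic landmark data (all names and parameters here are hypothetical).

```python
import numpy as np

# Hypothetical toy data: N landmark-based shapes, each with M 2-D landmarks.
# (The paper operates on continuous spline curves; this discrete sketch
# only mirrors the overall idea of the pipeline.)
rng = np.random.default_rng(0)
N, M = 40, 16
t = np.linspace(0, 2 * np.pi, M, endpoint=False)
base = np.stack([np.cos(t), np.sin(t)], axis=1)          # a circle template
shapes = base[None] + 0.05 * rng.standard_normal((N, M, 2))

# Step 1: crude alignment -- remove translation and scale so samples are
# comparable, then form the mean shape (a stand-in for the paper's unbiased
# alignment via closed-form orthogonal projection operators).
X = shapes - shapes.mean(axis=1, keepdims=True)          # center each shape
X /= np.linalg.norm(X, axis=(1, 2), keepdims=True)       # unit Frobenius norm
mean_shape = X.mean(axis=0)

# Step 2: PCA on the aligned residuals -- a discrete analogue of the
# projection-based functional PCA the paper uses for homogeneous data.
R = (X - mean_shape).reshape(N, 2 * M)
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 3
dictionary = Vt[:k]                                      # k principal shape modes

# Encode each shape with the learned modes and measure reconstruction error.
codes = R @ dictionary.T
recon = codes @ dictionary
err = np.linalg.norm(R - recon) / np.linalg.norm(R)
print(f"relative reconstruction error with {k} modes: {err:.3f}")
```

For the heterogeneous case (imbalanced data, outliers, mixed shape types), the paper replaces the PCA step with continuous-domain sparse coding, where each shape is encoded by only a few dictionary atoms rather than by all principal modes.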


Similar Articles

1. Landmark-Based Shape Encoding and Sparse-Dictionary Learning in the Continuous Domain.
   IEEE Trans Image Process. 2018 Jan;27(1):365-378. doi: 10.1109/TIP.2017.2762582. Epub 2017 Oct 12.
2. Dictionary learning algorithms for sparse representation.
   Neural Comput. 2003 Feb;15(2):349-96. doi: 10.1162/089976603762552951.
3. Learning Stable Multilevel Dictionaries for Sparse Representations.
   IEEE Trans Neural Netw Learn Syst. 2015 Sep;26(9):1913-26. doi: 10.1109/TNNLS.2014.2361052. Epub 2014 Oct 16.
4. Highly undersampled MR image reconstruction using an improved dual-dictionary learning method with self-adaptive dictionaries.
   Med Biol Eng Comput. 2017 May;55(5):807-822. doi: 10.1007/s11517-016-1556-z. Epub 2016 Aug 18.
5. Alternatively Constrained Dictionary Learning For Image Superresolution.
   IEEE Trans Cybern. 2014 Mar;44(3):366-77. doi: 10.1109/TCYB.2013.2256347. Epub 2013 May 2.
6. Image transformation based on learning dictionaries across image spaces.
   IEEE Trans Pattern Anal Mach Intell. 2013 Feb;35(2):367-80. doi: 10.1109/TPAMI.2012.95.
7. Compositional Dictionaries for Domain Adaptive Face Recognition.
   IEEE Trans Image Process. 2015 Dec;24(12):5152-65. doi: 10.1109/TIP.2015.2479456. Epub 2015 Sep 16.
8. Dictionary learning for stereo image representation.
   IEEE Trans Image Process. 2011 Apr;20(4):921-34. doi: 10.1109/TIP.2010.2081679. Epub 2010 Sep 30.
9. Efficient Shape Priors for Spline-Based Snakes.
   IEEE Trans Image Process. 2015 Nov;24(11):3915-26. doi: 10.1109/TIP.2015.2457335.
10. Towards Robust and Accurate Multi-View and Partially-Occluded Face Alignment.
   IEEE Trans Pattern Anal Mach Intell. 2018 Apr;40(4):987-1001. doi: 10.1109/TPAMI.2017.2697958. Epub 2017 Apr 25.