

Continuation of Nesterov's Smoothing for Regression With Structured Sparsity in High-Dimensional Neuroimaging.

Publication Information

IEEE Trans Med Imaging. 2018 Nov;37(11):2403-2413. doi: 10.1109/TMI.2018.2829802. Epub 2018 Apr 24.

Abstract

Predictive models can be used on high-dimensional brain images to decode cognitive states or the diagnosis/prognosis of a clinical condition or its evolution. Spatial regularization through structured sparsity offers new perspectives in this context: it reduces the risk of overfitting while providing interpretable neuroimaging signatures, by forcing the solution to adhere to domain-specific constraints. Total variation (TV) is a promising candidate for structured penalization: it enforces spatial smoothness of the solution while segmenting predictive regions from the background. We consider the problem of minimizing the sum of a smooth convex loss, a non-smooth convex penalty (whose proximal operator is known) and a wide range of possibly complex, non-smooth convex structured penalties such as TV or overlapping group Lasso. Existing solvers are limited either in the functions they can minimize or in their practical capacity to scale to high-dimensional imaging data. Nesterov's smoothing technique can be used to minimize a large class of non-smooth convex structured penalties. However, reasonable precision requires a small smoothing parameter, which slows the convergence speed to unacceptable levels. To benefit from the versatility of Nesterov's smoothing technique, we propose a first-order continuation algorithm, CONESTA, which automatically generates a sequence of decreasing smoothing parameters. The generated sequence maintains the optimal convergence speed toward any globally desired precision. Our main contributions are: to propose an expression of the duality gap that probes the current distance to the global optimum, in order to adapt the smoothing parameter and the convergence speed. This expression is applicable to many penalties and can be used with solvers other than CONESTA. We also propose an expression for the particular smoothing parameter that minimizes the number of iterations required to reach a given precision.
Furthermore, we provide a convergence proof and its rate, which is an improvement over classical proximal gradient smoothing methods. We demonstrate on both simulated and high-dimensional structural neuroimaging data that CONESTA significantly outperforms many state-of-the-art solvers in regard to convergence speed and precision.
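The continuation idea described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the published algorithm: the real CONESTA targets TV and overlapping group-Lasso penalties and adapts the smoothing parameter from the duality gap, whereas here an l1 penalty is smoothed with Nesterov's technique (whose smoothed gradient is a clipped scaling of the iterate) and the smoothing parameter `mu` simply decreases on a geometric schedule, with FISTA as the inner smooth solver:

```python
import numpy as np

def conesta_sketch(X, y, lam, mu0=1.0, n_outer=10, n_inner=200):
    """Illustrative continuation loop in the spirit of CONESTA.

    Minimizes 0.5*||Xw - y||^2 + lam*||w||_1, where the l1 penalty is
    replaced by its Nesterov smoothing with parameter mu. Simplifications
    vs. the published method: fixed geometric mu schedule (not duality-gap
    driven) and an l1 penalty instead of TV / overlapping group Lasso.
    """
    n, p = X.shape
    w = np.zeros(p)
    L0 = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth loss
    mu = mu0
    for _ in range(n_outer):
        # Smaller mu => better approximation, but larger Lipschitz constant,
        # hence smaller steps: this is the trade-off continuation resolves.
        L = L0 + lam / mu
        step = 1.0 / L
        z, t = w.copy(), 1.0                # FISTA momentum variables, warm-started
        for _ in range(n_inner):
            # Gradient of the Nesterov-smoothed l1 term is clip(z/mu, -1, 1)
            grad = X.T @ (X @ z - y) + lam * np.clip(z / mu, -1.0, 1.0)
            w_new = z - step * grad
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = w_new + ((t - 1) / t_new) * (w_new - w)
            w, t = w_new, t_new
        mu *= 0.5                           # decrease smoothing for the next stage
    return w
```

Each outer stage reuses the previous solution as a warm start, so the early, heavily smoothed (cheap, fast) stages carry most of the progress and the final, barely smoothed stages only refine it, which is the mechanism behind the improved overall rate.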

