IEEE J Biomed Health Inform. 2013 Jul;17(4):870-80. doi: 10.1109/JBHI.2013.2263227.
In this paper, we present a new framework for multimodal volume visualization that combines several information-theoretic strategies to define both the colors and the opacities of the multimodal transfer function. To the best of our knowledge, this is the first fully automatic scheme for visualizing multimodal data. To define the fused color, we set up an information channel between the two registered input datasets and then compute the informativeness associated with the respective intensity bins. This informativeness weights the color contributions of the two initial 1-D transfer functions. To obtain the opacity, we apply an optimization process that minimizes the informational divergence between the visibility distribution captured from a set of viewpoints and a target distribution proposed by the user. This target distribution is defined from the dataset features, from manually set importances, or from a combination of both. Other problems related to multimodal visualization, such as computing the fused gradient and the histogram binning, are also solved with new information-theoretic strategies. The quality and performance of our approach are evaluated on several datasets.
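The two information-theoretic ingredients named in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it assumes the per-bin informativeness is a bin's contribution to the mutual information of the channel (the KL divergence between the conditional distribution p(Y|x) and the marginal p(Y)), and that the opacity objective is the Kullback-Leibler divergence between a normalized visibility histogram and the user's target distribution. The function names and the joint-histogram input are invented for the example.

```python
import numpy as np

def bin_informativeness(joint_hist):
    """Informativeness of each intensity bin of modality 1, derived from the
    information channel between the two registered datasets.
    Assumption: the informativeness of bin x is KL(p(Y|x) || p(Y)),
    i.e. that bin's contribution to the channel's mutual information."""
    p_xy = joint_hist / joint_hist.sum()
    p_x = p_xy.sum(axis=1)                 # marginal of modality-1 bins
    p_y = p_xy.sum(axis=0)                 # marginal of modality-2 bins
    info = np.zeros_like(p_x)
    for x in range(len(p_x)):
        if p_x[x] == 0:
            continue
        p_y_given_x = p_xy[x] / p_x[x]     # conditional distribution p(Y|x)
        mask = p_y_given_x > 0
        info[x] = np.sum(p_y_given_x[mask] *
                         np.log2(p_y_given_x[mask] / p_y[mask]))
    return info

def fused_color(c1, c2, i1, i2):
    """Blend the colors from the two initial 1-D transfer functions,
    weighted by the informativeness of the corresponding bins."""
    w = i1 / (i1 + i2) if (i1 + i2) > 0 else 0.5
    return w * np.asarray(c1, float) + (1.0 - w) * np.asarray(c2, float)

def opacity_divergence(visibility, target):
    """Objective that the opacity optimization would minimize: the
    informational (KL) divergence between the visibility distribution
    captured from the viewpoints and the user's target distribution."""
    v = visibility / visibility.sum()
    t = target / target.sum()
    mask = v > 0
    return np.sum(v[mask] * np.log2(v[mask] / t[mask]))
```

In this reading, the opacity values are the free parameters of the optimization: changing them changes the rendered visibility histogram, and the optimizer drives `opacity_divergence` toward zero, at which point the scene's visibility matches the importances the user specified.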