Jodoin Pierre-Marc, Mignotte Max, Rosenberger Christophe
Département d'informatique, Université de Sherbrooke, Sherbrooke QC J1K 2R1, Canada.
IEEE Trans Image Process. 2007 Oct;16(10):2535-50. doi: 10.1109/tip.2007.903841.
In this paper, we put forward a novel fusion framework that combines label fields rather than observation data, as is usually the case. Our framework takes as input two label fields: a quickly estimated, to-be-refined segmentation map and a spatial region map that exhibits the shape of the main objects in the scene. These two label fields are fused together by minimizing a global energy function with a deterministic iterated conditional modes (ICM) algorithm. As explained in the paper, the energy function may implement either a pure fusion strategy or a fusion-reaction strategy; in the latter case, a data-related term is added to make the optimization problem well posed. We believe that the conceptual simplicity, the small number of parameters, and the use of a simple, fast, deterministic optimizer that admits a natural implementation on a parallel architecture are among the main advantages of our approach. Our fusion framework is adapted to various computer vision applications, among which are motion segmentation, motion estimation, and occlusion detection.
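To give a rough sense of the kind of label-field fusion the abstract describes, the sketch below refines a segmentation map using a region map via simple ICM sweeps. The specific energy (a Potts-like coupling between neighbours lying in the same region plus a reaction term tying pixels to the initial segmentation), the `alpha`/`beta` weights, and the 4-connected neighbourhood are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def icm_label_fusion(seg, regions, n_labels, alpha=1.0, beta=2.0, max_iter=10):
    """Illustrative ICM fusion of two label fields (not the paper's exact energy).

    seg     : 2-D int array, quickly estimated segmentation to be refined
    regions : 2-D int array, spatial region map carrying the object shapes
    The local energy of assigning label `lbl` to a pixel combines
      - a reaction (data-related) term penalizing disagreement with seg
      - a fusion term discouraging label differences between neighbours
        that belong to the same region of `regions`.
    """
    labels = seg.copy()
    h, w = seg.shape
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-connected neighbourhood

    for _ in range(max_iter):
        changed = False
        for y in range(h):
            for x in range(w):
                best_lbl, best_e = labels[y, x], np.inf
                for lbl in range(n_labels):
                    # reaction term: stay close to the initial segmentation
                    e = alpha * (lbl != seg[y, x])
                    # fusion term: neighbours in the same region should share labels
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            same_region = regions[ny, nx] == regions[y, x]
                            e += beta * same_region * (lbl != labels[ny, nx])
                    if e < best_e:
                        best_e, best_lbl = e, lbl
                if best_lbl != labels[y, x]:
                    labels[y, x] = best_lbl
                    changed = True
        if not changed:  # ICM has reached a local minimum
            break
    return labels
```

Each ICM sweep greedily updates one pixel at a time, which keeps the optimizer deterministic; updates over non-interacting pixel subsets (e.g. a checkerboard schedule) could be performed in parallel, consistent with the parallel-architecture claim above.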