Nonlinear Decomposable Generative Models for Dynamic Shape and Dynamic Appearance
May 1, 2006
Ahmed Elgammal, Rutgers University
Our objective is to learn representations for the shape and appearance of moving (dynamic) objects that support tasks such as synthesis, pose recovery, reconstruction, and tracking. In this talk we introduce a framework for learning generative models of dynamic appearance. We study various approaches for embedding global deformation manifolds in a way that preserves their geometric structure. Given such an embedding, nonlinear mappings are learned from the embedded space into the visual input space, with a closed-form solution for the inverse mapping that facilitates recovery of the intrinsic body configuration and therefore of pose. We also address the question of separating style and content on manifolds representing dynamic objects: we learn decomposable generative models that explicitly decompose the intrinsic body configuration (content), which varies as a function of time, from the appearance (style) of the person performing the action, which is modeled as a time-invariant parameter. We show results on gait data as well as facial expression data.
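To make the framework concrete, the following is a minimal sketch of the mapping step, under assumptions not stated in the abstract: a gait cycle is embedded on a unit circle (a common choice for cyclic motion manifolds), a radial-basis-function (RBF) regression maps embedding coordinates to synthetic high-dimensional "appearance" vectors, and pose is recovered by searching the manifold for the best-matching synthesized appearance. The grid search stands in for the talk's closed-form inverse, and all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Embedded body configurations: one gait cycle as points on a unit circle,
# parameterized by phase t (a hypothetical stand-in for a learned embedding).
T = 40
t = np.linspace(0, 2 * np.pi, T, endpoint=False)
X = np.stack([np.cos(t), np.sin(t)], axis=1)      # (T, 2) embedding coords

# Synthetic "appearance" vectors (stand-ins for silhouette images).
D = 100
true_map = rng.normal(size=(2, D))
Y = np.tanh(X @ true_map) + 0.01 * rng.normal(size=(T, D))   # (T, D)

def phi(X, centers, sigma=0.5):
    """Gaussian RBF features of X with respect to the given centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Fit the nonlinear mapping Y ~ phi(X) @ B by linear least squares.
centers = X[::4]                                   # subsample centers on the manifold
B, *_ = np.linalg.lstsq(phi(X, centers), Y, rcond=None)

# Synthesis: generate an appearance vector for an unseen phase.
x_new = np.array([[np.cos(1.0), np.sin(1.0)]])
y_new = phi(x_new, centers) @ B

# Pose recovery: invert the mapping by locating the manifold point whose
# synthesized appearance best matches an observed frame.
grid = np.linspace(0, 2 * np.pi, 400, endpoint=False)
Xg = np.stack([np.cos(grid), np.sin(grid)], axis=1)
Yg = phi(Xg, centers) @ B
obs = Y[7]                                         # pretend frame 7 was observed
phase = grid[np.argmin(((Yg - obs) ** 2).sum(1))]
print(phase, t[7])                                 # recovered vs. true phase
```

The recovered phase agrees with the true one up to the grid resolution and noise, illustrating how a learned generative mapping supports both synthesis and configuration recovery from a single low-dimensional representation.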
Sponsored by the Multimedia Vision and Visualization Group.