Animating Arbitrary Objects via Deep Motion Transfer (CVPR 2019 Oral)

This paper introduces a novel deep learning framework for image animation. Given an input image with a target object and a driving video sequence depicting a moving object, our framework generates a video in which the target object is animated according to the driving sequence. This is achieved through a deep architecture that decouples appearance and motion information. Our framework consists of three main modules: (i) a Keypoint Detector, trained in an unsupervised fashion to extract object keypoints; (ii) a Dense Motion prediction network, which generates dense heatmaps from the sparse keypoints to better encode motion information; and (iii) a Motion Transfer Network, which combines the motion heatmaps with appearance information extracted from the input image to synthesize the output frames. We demonstrate the effectiveness of our method on several benchmark datasets spanning a wide variety of object appearances, and show that our approach outperforms state-of-the-art image animation and video generation methods.
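For concreteness, below is a minimal PyTorch sketch of how the three modules fit together. All names (`KeypointDetector`, `DenseMotionNetwork`, `MotionTransferGenerator`, `animate`), layer choices, and the Gaussian heatmap encoding are illustrative assumptions; this is not the authors' implementation, which is available in the GitHub repository linked below.

```python
# Minimal sketch of the three-module pipeline in PyTorch. Everything here
# (module names, layer sizes, the Gaussian heatmap encoding) is an
# illustrative assumption, NOT the authors' implementation (see the
# GitHub repository for the real code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class KeypointDetector(nn.Module):
    """(i) Predicts K keypoints per frame via a spatial softmax (soft-argmax),
    so it can be trained without keypoint labels."""
    def __init__(self, num_kp=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_kp, 3, stride=2, padding=1),
        )

    def forward(self, frame):
        h = self.net(frame)                                   # (B, K, H', W')
        b, k, hh, ww = h.shape
        p = h.view(b, k, -1).softmax(-1).view(b, k, hh, ww)   # per-keypoint spatial distribution
        ys = torch.linspace(-1, 1, hh, device=frame.device)
        xs = torch.linspace(-1, 1, ww, device=frame.device)
        ky = (p.sum(3) * ys).sum(2)                           # expected y coordinate
        kx = (p.sum(2) * xs).sum(2)                           # expected x coordinate
        return torch.stack([kx, ky], -1)                      # (B, K, 2), coords in [-1, 1]


class DenseMotionNetwork(nn.Module):
    """(ii) Expands sparse keypoints into dense motion heatmaps: here, the
    difference of Gaussians centered at driving vs. source keypoints."""
    def __init__(self, size=64, sigma=0.1):
        super().__init__()
        self.size, self.sigma = size, sigma

    def forward(self, kp_source, kp_driving):
        coords = torch.linspace(-1, 1, self.size, device=kp_source.device)
        grid = torch.stack(torch.meshgrid(coords, coords, indexing="xy"), -1)  # (S, S, 2)

        def gaussians(kp):                                    # kp: (B, K, 2)
            d2 = ((grid[None, None] - kp[:, :, None, None]) ** 2).sum(-1)
            return torch.exp(-d2 / (2 * self.sigma ** 2))     # (B, K, S, S)

        return gaussians(kp_driving) - gaussians(kp_source)


class MotionTransferGenerator(nn.Module):
    """(iii) Renders an output frame from source appearance plus motion heatmaps."""
    def __init__(self, num_kp=10, size=64):
        super().__init__()
        self.size = size
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_kp, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, source, motion):
        source = F.interpolate(source, size=(self.size, self.size))
        return self.net(torch.cat([source, motion], dim=1))


def animate(source_image, driving_video, kp_det, dense_motion, generator):
    """Animate the source image frame-by-frame according to the driving video."""
    kp_source = kp_det(source_image)
    out = []
    for frame in driving_video:                               # each frame: (B, 3, H, W)
        kp_driving = kp_det(frame)
        motion = dense_motion(kp_source, kp_driving)
        out.append(generator(source_image, motion))
    return out


if __name__ == "__main__":
    src = torch.rand(1, 3, 64, 64)                            # target-object image
    driving = [torch.rand(1, 3, 64, 64) for _ in range(4)]    # driving frames
    frames = animate(src, driving, KeypointDetector(),
                     DenseMotionNetwork(), MotionTransferGenerator())
    print(len(frames), frames[0].shape)                       # 4  torch.Size([1, 3, 64, 64])
```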

Animating Arbitrary Objects via Deep Motion Transfer, Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, Nicu Sebe, CVPR 2019 (Oral)

Available on arXiv and GitHub.

NEMO Face Dataset

[Example animations on the NEMO face dataset]

Taichi Dataset

[Example animations on the Taichi dataset]

BAIR Robot Dataset

[Example animations on the BAIR robot dataset]

MGIF Dataset

[Example animations on the MGIF dataset]