Reference-Based Video Colorization with Spatiotemporal Correspondence
We propose a novel reference-based video colorization framework with spatiotemporal correspondence. Reference-based methods colorize grayscale frames by referencing a user-provided color frame. Existing methods suffer from color leakage between objects and the emergence of average colors, both caused by purely spatial, non-local semantic correspondence. To address this issue, we warp colors only from regions of the reference frame that are restricted by correspondence in time. We propagate masks as temporal correspondences using two complementary tracking approaches: off-the-shelf instance tracking for high-quality segmentation, and a newly proposed dense tracking method that handles a wide variety of object types. By restricting color references to temporally related regions, our approach propagates faithful colors throughout the video. Experiments demonstrate that our method outperforms state-of-the-art methods both quantitatively and qualitatively.
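The core idea of mask-restricted color warping can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, feature inputs, and softmax-attention formulation are assumptions chosen to show how restricting correspondence to a temporally propagated mask prevents colors from leaking in from unrelated but semantically similar regions.

```python
import numpy as np

def warp_colors_masked(target_feat, ref_feat, ref_ab, ref_mask, temperature=0.01):
    """Warp reference chrominance to a target frame, restricted to a mask.

    Illustrative sketch (not the paper's code):
      target_feat: (Nt, C) per-pixel features of the grayscale target frame
      ref_feat:    (Nr, C) per-pixel features of the reference frame
      ref_ab:      (Nr, 2) chrominance (ab) values of the reference frame
      ref_mask:    (Nr,)  boolean mask of temporally corresponding regions
    Returns (Nt, 2) warped chrominance for the target pixels.
    """
    # Restrict candidate reference positions to the propagated mask, so
    # colors cannot be sourced from regions outside the tracked object.
    feat = ref_feat[ref_mask]
    ab = ref_ab[ref_mask]
    # Cosine-similarity correspondence in feature space.
    t = target_feat / np.linalg.norm(target_feat, axis=1, keepdims=True)
    r = feat / np.linalg.norm(feat, axis=1, keepdims=True)
    sim = t @ r.T
    # Sharp softmax over masked reference positions only; a low temperature
    # suppresses the "average color" effect of soft, non-local attention.
    w = np.exp((sim - sim.max(axis=1, keepdims=True)) / temperature)
    w /= w.sum(axis=1, keepdims=True)
    return w @ ab
```

With the mask applied, a target pixel can only draw chrominance from the temporally tracked region, which is the restriction the method relies on to avoid inter-object color leakage.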