Texture-Aware Deep Frame Interpolation

Duolikun Danier

University of Bristol

Contribution

Temporal interpolation can be a powerful tool for a range of video processing operations. Existing methods do not discriminate between video textures, instead applying a single general model intended to interpolate a wide range of video content. However, our past work on video texture analysis and synthesis has shown that different textures (static, dynamic-continuous and dynamic-discrete) exhibit vastly different motion characteristics. In this work, we study the impact of video textures on video frame interpolation and propose a novel framework in which, given an interpolation algorithm, separate models are trained on different textures. Our study shows that video texture has a significant impact on the performance of frame interpolation models, and that it is beneficial to have separate models specifically adapted to these texture classes rather than a single model that tries to learn generic motion. Our results demonstrate that models fine-tuned using our framework achieve, on average, a 0.3 dB gain in PSNR on the test set used.
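A short sketch may help make the dispatch idea concrete. The snippet below is a minimal PyTorch illustration, not the authors' released code: TextureAwareInterpolator, the texture classifier, and the per-class model dictionary are all hypothetical names, and the base interpolation network (e.g. AdaCoF) is assumed to take two frames and return the intermediate frame.

    import torch
    import torch.nn as nn

    TEXTURE_CLASSES = ("static", "dynamic_discrete", "dynamic_continuous")

    class TextureAwareInterpolator(nn.Module):
        """Route a frame pair to the model fine-tuned on its texture class.

        Hypothetical sketch: `texture_classifier` predicts one of the three
        texture classes from the concatenated input frames, and `models`
        holds one fine-tuned copy of the base interpolation network per class.
        """
        def __init__(self, texture_classifier, models):
            super().__init__()
            assert set(models) == set(TEXTURE_CLASSES)
            self.texture_classifier = texture_classifier
            self.models = nn.ModuleDict(models)

        @torch.no_grad()
        def forward(self, frame0, frame1):
            # Classify the dominant texture of the input pair...
            logits = self.texture_classifier(torch.cat([frame0, frame1], dim=1))
            label = TEXTURE_CLASSES[int(logits.argmax(dim=1))]
            # ...then interpolate with the matching fine-tuned model.
            return self.models[label](frame0, frame1)

At training time, each entry in models would be obtained by fine-tuning the same pretrained network on sequences of a single texture class only, which is the adaptation step the framework proposes.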

Results

Interpolation results of our texture-aware fine-tuned AdaCoF and the original AdaCoF on sample sequences from static (top row), dynamic-discrete (middle row) and dynamic-continuous (bottom row) textures. Sequences are "PaintingTilting1", "RiceField" and "ShinnyBlueWaterdownsampled" from the HomTex dataset.

Citation

@article{danier2021texture,
  title={Texture-aware Video Frame Interpolation},
  author={Danier, Duolikun and Bull, David},
  journal={arXiv preprint arXiv:2102.13520},
  year={2021}
}