@myungsub
Last active December 5, 2020 11:47
Papers from Super SloMo references

  • Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation [Paper]
    • Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan Yang, Erik Learned-Miller, Jan Kautz
    • CVPR 2018 (spotlight)
  • Video frame synthesis using deep voxel flow [Paper] [Code]
    • Z. Liu, R. Yeh, X. Tang, Y. Liu, and A. Agarwala.
    • ICCV 2017
  • Video frame interpolation via adaptive separable convolution [Paper] [Code]
    • S. Niklaus, L. Mai, and F. Liu.
    • ICCV 2017
  • Video frame interpolation via adaptive convolution [Paper]
    • S. Niklaus, L. Mai, and F. Liu.
    • CVPR 2017
  • Learning image matching by simply watching video [Paper]
    • G. Long, L. Kneip, J. M. Alvarez, H. Li, X. Zhang, and Q. Yu.
    • ECCV 2016
  • Phase-based frame interpolation for video [Paper]
    • S. Meyer, O. Wang, H. Zimmer, M. Grosse, and A. Sorkine-Hornung.
    • CVPR 2015
  • Moving gradients: a path-based method for plausible image interpolation.
    • D. Mahajan, F.-C. Huang, W. Matusik, R. Ramamoorthi, and P. Belhumeur.
    • ACM TOG 2009
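
Most of the flow-based methods above (Super SloMo, Deep Voxel Flow) synthesize the intermediate frame by backward-warping the input frames with an estimated flow field. A minimal NumPy sketch of that shared warping step, for reference only — the function name, grayscale input, and bilinear sampling are my illustrative assumptions, not any paper's actual implementation:

```python
import numpy as np

def backward_warp(frame, flow):
    """Sample a grayscale `frame` (H, W) at positions displaced by
    `flow` (H, W, 2), using bilinear interpolation.  Illustrative
    sketch of the warping op shared by flow-based interpolators."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # target sampling coordinates, clipped to the image border
    x = np.clip(xs + flow[..., 0], 0, w - 1)
    y = np.clip(ys + flow[..., 1], 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = x - x0, y - y0
    # bilinear blend of the four neighbouring pixels
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

In the actual papers this op is differentiable (e.g. PyTorch's `grid_sample`) so the flow estimator can be trained end-to-end from the reconstruction loss.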

Recent ArXiv papers

  • MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement [Paper] [Project Page]
    • Wenbo Bao, Wei-Sheng Lai, Xiaoyun Zhang, Zhiyong Gao, and Ming-Hsuan Yang
  • Context-aware Synthesis for Video Frame Interpolation [Paper] [Project Page]
    • S. Niklaus and F. Liu, CVPR 2018
  • Deep Video Generation, Prediction and Completion of Human Action Sequences [Paper]
    • Haoye Cai, Chunyan Bai, Yu-Wing Tai, Chi-Keung Tang (HKUST, Tencent)
  • Frame Interpolation with Multi-Scale Deep Loss Functions and Generative Adversarial Networks [Paper]
    • Joost van Amersfoort, Wenzhe Shi, Alejandro Acosta, Francisco Massa, Johannes Totz, Zehan Wang, Jose Caballero (Twitter)
  • Multi-Scale Video Frame-Synthesis Network with Transitive Consistency Loss [Paper]
    • Zhe Hu (Hikvision), Yinglan Ma (Adobe), Lizhuang Ma (East China Normal University)
  • Video Enhancement with Task-Oriented Flow [Paper] [Project Page (+ Vimeo-90k Dataset)] [Code]
    • Tianfan Xue (Google), Baian Chen (MIT), Jiajun Wu (MIT), Donglai Wei (Harvard), William T. Freeman (MIT)
  • A Temporally-Aware Interpolation Network for Video Frame Inpainting [Paper]
    • Ximeng Sun, Ryan Szeto, and Jason J. Corso (U. Michigan Ann Arbor)
  • Long-Term Video Interpolation with Bidirectional Predictive Network [Paper]
    • Xiongtao Chen, Wenmin Wang, Jinzhuo Wang, Weimian Li, Baoyang Chen (Peking Univ.)

Useful materials for implementations

  • [15] Z. Liu, R. Yeh, X. Tang, Y. Liu, and A. Agarwala. Video frame synthesis using deep voxel flow. In ICCV, 2017.
  • [16] G. Long, L. Kneip, J. M. Alvarez, H. Li, X. Zhang, and Q. Yu. Learning image matching by simply watching video. In ECCV, 2016.
  • [17] D. Mahajan, F.-C. Huang, W. Matusik, R. Ramamoorthi, and P. Belhumeur. Moving gradients: a path-based method for plausible image interpolation. ACM Transactions on Graphics (TOG), 28(3):42, 2009.
  • [18] S. Meyer, O. Wang, H. Zimmer, M. Grosse, and A. Sorkine-Hornung. Phase-based frame interpolation for video. In CVPR, 2015.
  • [19] S. Niklaus, L. Mai, and F. Liu. Video frame interpolation via adaptive convolution. In CVPR, 2017.
  • [20] S. Niklaus, L. Mai, and F. Liu. Video frame interpolation via adaptive separable convolution. In ICCV, 2017.
  • [21] V. Patraucean, A. Handa, and R. Cipolla. Spatio-temporal video autoencoder with differentiable memory. In ICLR workshop, 2016.
  • [22] A. Ranjan and M. J. Black. Optical flow estimation using a spatial pyramid network. In CVPR, 2017.
  • [23] J. Revaud, P. Weinzaepfel, Z. Harchaoui, and C. Schmid. EpicFlow: Edge-preserving interpolation of correspondences for optical flow. In CVPR, 2015.
  • [24] C. Rhemann, C. Rother, J. Wang, M. Gelautz, P. Kohli, and P. Rott. A perceptually motivated online benchmark for image matting. In CVPR, 2009.
  • [25] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
  • [26] S. Roth and M. J. Black. On the spatial statistics of optical flow. IJCV, 74(1):33–50, 2007.
  • [27] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
  • [28] K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human action classes from videos in the wild. CRCV-TR-12-01, 2012.
  • [29] S. Su, M. Delbracio, J. Wang, G. Sapiro, W. Heidrich, and O. Wang. Deep video deblurring. In CVPR, 2017.
  • [30] D. Sun, S. Roth, J. P. Lewis, and M. J. Black. Learning optical flow. In ECCV, 2008.
  • [31] D. Sun, X. Yang, M.-Y. Liu, and J. Kautz. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. arXiv preprint arXiv:1709.02371, 2017.
  • [32] R. Szeliski. Prediction error as a quality metric for motion and stereo. In ICCV, 1999.
  • [33] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid. DeepFlow: Large displacement optical flow with deep matching. In ICCV, 2013.
  • [34] J. Wulff, L. Sevilla-Lara, and M. J. Black. Optical flow in mostly rigid scenes. In CVPR, 2017.
  • [35] J. Xu, R. Ranftl, and V. Koltun. Accurate optical flow via direct cost volume processing. In CVPR, 2017.
  • [36] L. Xu, J. Jia, and Y. Matsushita. Motion detail preserving optical flow estimation. IEEE TPAMI, 34(9):1744–1757, 2012.
  • [37] J. J. Yu, A. W. Harley, and K. G. Derpanis. Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness. In ECCV workshop, 2016.
  • [38] F. Zhang, S. Xu, and X. Zhang. High accuracy correspondence field

kdplus commented May 21, 2018

Sigh.. Super SloMo still hasn't published their code yet..

@myungsub
Author

Still haven't.. I guess the easiest one to use at the moment is still SepConv.
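
For reference, the core SepConv idea ([20] above) is compact: instead of flow, a CNN predicts a per-pixel pair of 1-D kernels (vertical and horizontal) whose outer product filters a local patch of the input. A toy NumPy sketch under my own assumptions — single grayscale frame, kernels given as inputs rather than predicted, whereas the real method filters both input frames and sums the results:

```python
import numpy as np

def sepconv_pixel(patch, kv, kh):
    """One output pixel: the k x k input patch filtered by the
    separable kernel outer(kv, kh) -- the SepConv building block."""
    return float(np.sum(np.outer(kv, kh) * patch))

def sepconv_frame(frame, kv_map, kh_map, k):
    """Apply a per-pixel separable kernel of size k to a grayscale
    frame.  kv_map/kh_map (H, W, k) hold one 1-D kernel per pixel.
    Toy loop version; the paper implements this as a GPU op."""
    h, w = frame.shape
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.empty_like(frame, dtype=float)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = sepconv_pixel(patch, kv_map[i, j], kh_map[i, j])
    return out
```

The separable form is the point: two length-k kernels per pixel instead of a dense k x k one, which is what makes predicting large (e.g. 51-tap) kernels affordable.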

@diarmaidocualain

This might help: NVIDIA released the Super SloMo code in their 2019 NGX toolbox: https://developer.nvidia.com/rtx/ngx
avinashpaliwal made a PyTorch port:
https://github.com/avinashpaliwal/Super-SloMo
