These samples from Nvidia are synthesized slow-motion videos, meaning they were created from footage shot at an ordinary frame rate.
The technology works similarly to how a modern TV raises the frame rate: it interpolates (guesses) frames in between the ones that exist, thereby creating the illusion of slow-motion or high-speed footage.
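For intuition, the crudest form of interpolation is simply cross-fading the two neighboring frames. The sketch below (plain NumPy; the function name and frame sizes are chosen here purely for illustration) shows that approach, and also why it is not enough: moving objects ghost instead of being relocated, which is exactly what motion-aware methods like Nvidia's try to avoid.

```python
# A minimal sketch of naive frame interpolation: the simplest "guess" for an
# in-between frame is a weighted average of its neighbors. Real interpolators
# (TVs, Nvidia's network) estimate motion instead of blending, which is why
# they avoid the ghosting this produces on moving objects.
import numpy as np

def blend_midframe(frame_a: np.ndarray, frame_b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Cross-fade two frames at time t in [0, 1] -- no motion awareness."""
    return ((1.0 - t) * frame_a.astype(np.float32)
            + t * frame_b.astype(np.float32)).astype(frame_a.dtype)

# Example: two dummy 1080p RGB frames
f0 = np.zeros((1080, 1920, 3), dtype=np.uint8)
f1 = np.full((1080, 1920, 3), 255, dtype=np.uint8)
mid = blend_midframe(f0, f1)  # a uniform gray frame at t = 0.5
```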
It’s not perfect, especially the version that comes with a modern TV, but this is still very impressive, and there will surely be uses. Yet one has to ask whether it’s necessary with high-FPS cameras going mainstream; your smartphone can probably already shoot 120+ fps video.
High-resolution slow motion is, of course, harder to capture because it requires far more processing power. So while native 4K slow motion remains incredibly expensive, this technique may turn out to be useful for various purposes, such as sports replays.
Using NVIDIA Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep learning framework, the team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames per second. Once trained, the convolutional neural network predicted the extra frames.
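Roughly, training such a system boils down to dropping frames from high-frame-rate footage and asking the network to reconstruct them. The following is a simplified sketch, not Nvidia's actual code: the tiny model, the random stand-in data, and the plain L1 loss are all placeholder assumptions.

```python
# Simplified training sketch: drop the middle frame of every 240 fps triplet
# and teach a small CNN to reconstruct it from its two neighbors.
import torch
import torch.nn as nn

class TinyInterpNet(nn.Module):
    """Placeholder CNN: takes two frames stacked on channels, predicts the middle one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, frame0, frame1):
        return self.net(torch.cat([frame0, frame1], dim=1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyInterpNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
l1 = nn.L1Loss()

# Stand-in for triplets sampled from 240 fps clips: (frame0, true middle frame, frame1).
frame0 = torch.rand(4, 3, 128, 128, device=device)
middle = torch.rand(4, 3, 128, 128, device=device)
frame1 = torch.rand(4, 3, 128, 128, device=device)

for step in range(10):
    pred = model(frame0, frame1)   # guess the dropped middle frame
    loss = l1(pred, middle)        # pixel reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```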
“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers said. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.” – Nvidia
“Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. We use 1,132 video clips with 240-fps, containing 300K individual video frames, to train our network. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.” –Cornell University
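Concretely, the interpolation step the abstract describes might look like the sketch below. This is a reading of the abstract, not code released with the paper: the exact flow-combination coefficients and the backward warp via grid_sample are assumptions for illustration.

```python
# Sketch of the interpolation step described in the abstract: approximate the
# intermediate flows from the bidirectional flows, warp both inputs, then fuse
# them weighted by soft visibility maps so occluded pixels contribute less.
import torch
import torch.nn.functional as F

def backward_warp(image, flow):
    """Sample `image` at locations displaced by `flow` (B, 2, H, W, in pixels)."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(image.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                               # follow the flow
    # normalize coordinates to [-1, 1] for grid_sample
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)         # (B, H, W, 2)
    return F.grid_sample(image, sample_grid, align_corners=True)

def interpolate_frame(i0, i1, flow_01, flow_10, v0, v1, t):
    """Fuse the two warped inputs into the frame at time t in (0, 1).

    flow_01 / flow_10: bidirectional optical flow between the inputs.
    v0 / v1: soft visibility maps in [0, 1] from the refinement network.
    """
    # Linearly combine the bidirectional flows to approximate the flow from
    # time t back to each input (one plausible formulation).
    flow_t0 = -(1 - t) * t * flow_01 + t * t * flow_10
    flow_t1 = (1 - t) * (1 - t) * flow_01 - t * (1 - t) * flow_10
    warped0 = backward_warp(i0, flow_t0)
    warped1 = backward_warp(i1, flow_t1)
    # Visibility-weighted fusion: occluded pixels are down-weighted.
    num = (1 - t) * v0 * warped0 + t * v1 * warped1
    den = (1 - t) * v0 + t * v1
    return num / den.clamp(min=1e-6)
```

Because t is just a parameter here, the same networks can be evaluated at any number of time steps between the two inputs, which is what lets the method produce arbitrarily many intermediate frames.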