MeshFlow: Minimum Latency Online Video Stabilization
- Shuaicheng Liu,
- Ping Tan,
- Lu Yuan,
- Jian Sun,
- Bing Zeng
2016 European Conference on Computer Vision |
Published by Springer, Cham
Many existing video stabilization methods stabilize videos offline, i.e., as a post-processing tool for pre-recorded videos. Some methods can stabilize videos online, but they either require additional hardware sensors (e.g., a gyroscope) or adopt a single parametric motion model (e.g., affine, homography), which struggles to represent spatially variant motions. In this paper, we propose a technique for online video stabilization with only one frame of latency using a novel MeshFlow motion model. The MeshFlow is a spatially smooth, sparse motion field with motion vectors only at the mesh vertices. In particular, the motion vectors at matched feature points are transferred to their nearby mesh vertices. The MeshFlow is produced by assigning each vertex a unique motion vector via two median filters. Path smoothing is conducted on the vertex profiles, which are motion vectors collected at the same vertex location in the MeshFlow over time. The profiles are smoothed adaptively by a novel smoothing technique, namely the Predicted Adaptive Path Smoothing (PAPS), which uses only motions from the past. In this way, the proposed method not only handles spatially variant motions but also works online in real time, offering potential for a variety of intelligent applications (e.g., security systems, robotics, UAVs). Quantitative and qualitative evaluations show that our method produces results comparable to state-of-the-art offline methods.
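The two core ideas in the abstract — collapsing per-feature motions onto mesh vertices with a median filter, and causally smoothing each vertex profile using only past frames — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the mesh `spacing` and `radius` parameters, and the single median over nearby features (standing in for the paper's two median filters) are all assumptions, and the exponential moving average is only a crude causal stand-in for PAPS.

```python
import numpy as np

def meshflow_vertex_motions(features, motions, mesh_h, mesh_w, spacing, radius):
    """Assign each mesh vertex one motion vector: the median of the
    motions of feature points within `radius` of that vertex.
    (Simplified stand-in for the paper's two median filters.)"""
    vertex_motion = np.zeros((mesh_h, mesh_w, 2))
    for i in range(mesh_h):
        for j in range(mesh_w):
            vx, vy = j * spacing, i * spacing  # vertex location on the mesh
            dist = np.hypot(features[:, 0] - vx, features[:, 1] - vy)
            near = motions[dist <= radius]
            if len(near):
                vertex_motion[i, j] = np.median(near, axis=0)
    return vertex_motion

def smooth_profile_online(profile, alpha=0.8):
    """Causally smooth one vertex profile (the motion vectors observed at a
    fixed vertex over time) with an exponential moving average -- a crude
    stand-in for PAPS, which likewise uses only motions from the past."""
    out, prev = [], np.zeros(2)
    for m in profile:
        prev = alpha * prev + (1 - alpha) * np.asarray(m, dtype=float)
        out.append(prev.copy())
    return np.array(out)

# Toy example: three matched features all moving by (2, 3) pixels.
feats = np.array([[10.0, 10.0], [12.0, 14.0], [30.0, 30.0]])
mots = np.tile([2.0, 3.0], (3, 1))
vm = meshflow_vertex_motions(feats, mots, mesh_h=2, mesh_w=2,
                             spacing=20.0, radius=25.0)

# Smoothing a constant profile converges toward the constant motion.
sp = smooth_profile_online([[1.0, 0.0]] * 5, alpha=0.8)
```

Because each vertex takes a median over only its local neighborhood, different regions of the mesh can receive different motions, which is what lets MeshFlow represent spatially variant motion while remaining far cheaper than dense optical flow.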