Bibliographic Information

Alternative Title
  • Interactive Synchronization of Multiple Videos

Abstract

Aligning video is a fundamental task in computer graphics and vision, required for a wide range of applications. We present an interactive method for computing optimal nonlinear temporal video alignments of an arbitrary number of videos. We first derive a robust approximation of alignment quality between pairs of clips, computed as a weighted histogram of feature matches. We then find optimal temporal mappings (constituting frame correspondences) using a graph-based approach that allows for very efficient evaluation with artist constraints. This enables an enhancement to the "snapping" interface in video editing tools, where videos in a timeline can now snap to one another when dragged by an artist based on their content, rather than simply their start and end times. The pairwise snapping is then generalized to multiple clips, achieving a globally optimal temporal synchronization that automatically arranges a series of clips filmed at different times into a single consistent time frame. When followed by a simple spatial registration, we achieve high-quality spatiotemporal video alignments at a fraction of the computational complexity compared to previous methods. Assisted temporal alignment is a degree of freedom that has been largely unexplored, but is an important task in video editing. Our approach is simple to implement, highly efficient, and very robust to differences in video content, allowing for interactive exploration of the temporal alignment space for multiple real-world HD videos.
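
The abstract describes a two-stage pipeline: a pairwise alignment-quality measure between clips, followed by a graph-based search for an optimal monotonic temporal mapping. The sketch below is only an illustration of that general idea, not the paper's actual method: it assumes per-frame feature descriptors are already available, substitutes a simple cosine-similarity score for the weighted histogram of feature matches, and uses a plain dynamic-programming pass over the frame-correspondence grid in place of the paper's graph-based formulation with artist constraints. All function names and data are hypothetical.

import numpy as np

def pairwise_quality(feat_a, feat_b):
    # Placeholder alignment-quality measure between two clips.
    # feat_a: (Na, d) per-frame descriptors of clip A
    # feat_b: (Nb, d) per-frame descriptors of clip B
    # Returns an (Na, Nb) matrix; entry (i, j) scores how well frame i
    # of A matches frame j of B (higher is better). Cosine similarity
    # stands in for the paper's weighted histogram of feature matches.
    a = feat_a / (np.linalg.norm(feat_a, axis=1, keepdims=True) + 1e-8)
    b = feat_b / (np.linalg.norm(feat_b, axis=1, keepdims=True) + 1e-8)
    return a @ b.T

def optimal_temporal_mapping(quality):
    # Monotonic frame correspondence maximizing total match quality.
    # Each node (i, j) of the frame-correspondence grid is reachable from
    # (i-1, j), (i, j-1) or (i-1, j-1), enforcing a nonlinear but
    # monotonic time warp between the two clips.
    na, nb = quality.shape
    score = np.full((na, nb), -np.inf)
    score[0, 0] = quality[0, 0]
    for i in range(na):
        for j in range(nb):
            if i == 0 and j == 0:
                continue
            best_prev = -np.inf
            if i > 0:
                best_prev = max(best_prev, score[i - 1, j])
            if j > 0:
                best_prev = max(best_prev, score[i, j - 1])
            if i > 0 and j > 0:
                best_prev = max(best_prev, score[i - 1, j - 1])
            score[i, j] = quality[i, j] + best_prev
    # Backtrack from the last frame pair to recover the mapping.
    i, j = na - 1, nb - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        candidates = []
        if i > 0:
            candidates.append((score[i - 1, j], (i - 1, j)))
        if j > 0:
            candidates.append((score[i, j - 1], (i, j - 1)))
        if i > 0 and j > 0:
            candidates.append((score[i - 1, j - 1], (i - 1, j - 1)))
        _, (i, j) = max(candidates)
        path.append((i, j))
    return path[::-1]

# Usage with synthetic descriptors (hypothetical data):
feats_a = np.random.rand(120, 64)   # 120 frames, 64-dim descriptors
feats_b = np.random.rand(150, 64)
mapping = optimal_temporal_mapping(pairwise_quality(feats_a, feats_b))
print(mapping[:5])  # first few (frame_in_A, frame_in_B) correspondences

The quadratic dynamic-programming pass here is only meant to convey what a "nonlinear temporal mapping" over a frame-correspondence graph looks like; the paper's contribution includes making the evaluation efficient and interactive under artist constraints, which this toy version does not attempt.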

Journal

Cited By (1)


