Automatically Converting Photographic Series into Video
- Xian-Sheng Hua
- Lie Lu
- Hong-Jiang Zhang
Published by Association for Computing Machinery, Inc.
In this paper, we propose a novel way to browse a series of photographs, which can be regarded as a system exploring a new medium between photograph and video. The scheme exploits the rich content embedded both in a single photograph and in a photographic series. By studying how a viewer's attention shifts across the objects or regions of an image, a photograph can be converted into a motion clip. We developed a system named Photo2Video that automatically converts a photographic series into a video by simulating camera motions, set to incidental music of the user's choice. For a selected photographic series, an appropriate set of key-frames is first determined for each photograph based on content analysis results. Then a camera motion pattern (both the key-frame sequencing scheme and the trajectory/speed control strategy) is selected for each photograph to generate a corresponding motion photograph clip. Finally, the output video is rendered by connecting the series of motion photograph clips with transitions chosen according to the content of the images on either side, and each motion photograph clip is aligned with the selected incidental music based on music content analysis.
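The core of the simulated camera motion can be illustrated with a small sketch. The snippet below is a hypothetical illustration, not the paper's actual implementation: it linearly interpolates a crop rectangle between two key-frames of a still photograph, which is the basic mechanism behind a simulated pan or zoom (the paper's trajectory/speed control strategy would generalize this with non-linear easing and multiple key-frames).

```python
def interpolate_crop(start, end, num_frames):
    """Linearly interpolate a crop rectangle (x, y, w, h) from a
    start key-frame to an end key-frame, simulating a camera
    pan/zoom across a still photograph.

    `start` and `end` are (x, y, width, height) tuples; each
    returned rectangle would be cropped from the source image and
    scaled to the output video resolution to form one video frame.
    """
    frames = []
    for i in range(num_frames):
        # Interpolation parameter t in [0, 1]; a speed-control
        # strategy could replace this with an easing curve.
        t = i / (num_frames - 1) if num_frames > 1 else 0.0
        rect = tuple(s + (e - s) * t for s, e in zip(start, end))
        frames.append(rect)
    return frames

# Example: zoom from the full 640x480 photo into a 320x240
# region of interest over 5 frames.
crops = interpolate_crop((0, 0, 640, 480), (160, 120, 320, 240), 5)
```

The first and last rectangles match the two key-frames exactly, so chaining such segments end-to-end yields a continuous camera trajectory across a photograph.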
Copyright © 2004 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept., ACM Inc., fax +1 (212) 869-0481, or permissions@acm.org. The definitive version of this paper can be found at ACM's Digital Library: http://www.acm.org/dl/.