We are developing a wearable recording system that automatically captures images from the user's viewpoint throughout everyday life. The user can later refer to the images acquired with this system, and these images support the user's activities, such as remembering and reflection. However, the number of acquired images becomes so huge that it is difficult for the user to browse them all. A mechanism for effective viewing, such as summarization, is therefore essential. In this research, we describe a concept for a summarization mechanism for everyday-life images.
We propose a method for automatically generating a snapshot sequence that describes everyday events from head-mounted video camera footage, and we discuss the relationship between the snapshot sequence and the subject's behavior. We extract a snapshot from the video frames when the head hardly moves, a condition that arises while the subject is observing something. Since eye (head) motion relates to the subject's behavior, the snapshot sequence relates to the behavior as well. Under this condition, neighboring frames are highly similar, so we judge head motion from the similarity between neighboring frames. We estimate this similarity using local grayvalue invariants.
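The selection procedure above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`frame_similarity`, `extract_snapshots`) and the thresholds are hypothetical, and a simple mean-absolute-difference score stands in for the local-grayvalue-invariant matching, which would require an interest-point detector and rotation-invariant descriptors. The structure is the same: score similarity between neighboring frames, and emit one snapshot per sustained run of high similarity (i.e. per interval where the head hardly moves).

```python
import numpy as np

def frame_similarity(a, b):
    """Similarity in [0, 1] between two grayscale frames with values in [0, 1].
    Stand-in for local-grayvalue-invariant matching: 1 minus the mean
    absolute pixel difference."""
    return 1.0 - float(np.mean(np.abs(a - b)))

def extract_snapshots(frames, threshold=0.95, min_run=3):
    """Return indices of snapshot frames: the middle frame of each run of
    at least `min_run` consecutive frames whose neighbor-to-neighbor
    similarity stays above `threshold` (head hardly moving)."""
    snapshots = []
    run_start = 0
    for i in range(1, len(frames) + 1):
        still = (i < len(frames)
                 and frame_similarity(frames[i - 1], frames[i]) >= threshold)
        if not still:                      # run of still frames ends here
            run_len = i - run_start
            if run_len >= min_run:
                snapshots.append(run_start + run_len // 2)
            run_start = i
    return snapshots
```

With synthetic input (a still segment, a moving segment, another still segment), the function returns one snapshot index per still segment, mirroring how a snapshot sequence condenses the intervals of sustained observation.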