These include eight PhD theses describing research that leveraged SenseCam.
Note that work presented at the first three SenseCam Conferences, SenseCam 2012, SenseCam 2010 and SenseCam 2009, is largely not included because these were not archival events. Papers included at SenseCam 2013 were, however, published by the ACM and are listed below.
SenseCam is a wearable digital camera that is designed to take photographs passively, without user intervention, while it is being worn. Unlike a regular digital camera or a cameraphone, SenseCam does not have a viewfinder or a display that can be used to frame photos. Instead, it is fitted with a wide-angle (fish-eye) lens that maximizes its field-of-view. This ensures that nearly everything in the wearer’s view is captured by the camera, which is important because, without any means of framing shots, a wearable camera with a conventional lens would often miss the subject and produce many uninteresting images.
SenseCam also contains a number of different electronic sensors. These include light-intensity and light-color sensors, a passive infrared (body heat) detector, a temperature sensor, and a multiple-axis accelerometer. These sensors are monitored by the camera’s microprocessor, and certain changes in sensor readings can be used to automatically trigger a photograph to be taken.
For example, a significant change in light level, or the detection of body heat in front of the camera, can cause the camera to take a picture. Alternatively, the user may elect to set SenseCam to operate on a timer, for example taking a picture every 30 seconds. We have also experimented with the incorporation of audio-level detection, audio recording and GPS location sensing into SenseCam, although these do not feature in the current hardware.
In our current design (v2.3), users typically wear the camera on a cord around their neck, although it would also be possible to clip it to pockets or belts, or to attach it directly to clothing. There are several advantages of using a neck-cord to wear the camera. First, it is reasonably stable when being worn, as it tends not to move around from left-to-right when the wearer is walking or sitting. Second, it is relatively comfortable to wear and easy to put on and take off. Third, when worn around the neck, SenseCam is reasonably close to the wearer’s eyeline and generates images taken from the wearer’s point of view – i.e., they get a ‘first person’ view. Informal observations suggest that this results in images that are more compelling when subsequently replayed.
SenseCam takes pictures at VGA resolution (640×480 pixels) and stores them as compressed .jpg files on internal flash memory. We currently fit 1 GB of flash memory, which can typically store over 30,000 images. Most users seem happy with the relatively low-resolution images, suggesting that the time-lapse, first-person-viewpoint sequences represent a useful media type that exists somewhere between still images and video. This also reflects the fact that the images are used as memory supports rather than as rich media. Along with the images, SenseCam also stores a log file, which records other sensor data along with their timestamps. Additional user data, such as time-stamped GPS traces, may be used in conjunction with the SenseCam data via time-correlation.
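The time-correlation mentioned above can be sketched as a nearest-timestamp lookup. The helper below is a hypothetical illustration, not the actual SenseCam log format: it pairs a photo's timestamp with the closest fix in a separately recorded GPS trace.

```python
import bisect

def nearest_fix(gps_times, gps_points, photo_time):
    """Return the GPS fix recorded closest in time to photo_time.

    gps_times is a sorted list of timestamps, parallel to gps_points.
    Names and data layout are illustrative assumptions.
    """
    i = bisect.bisect_left(gps_times, photo_time)
    # The nearest fix is either just before or just after the insertion point.
    candidates = [j for j in (i - 1, i) if 0 <= j < len(gps_times)]
    return gps_points[min(candidates, key=lambda j: abs(gps_times[j] - photo_time))]

# Example: three fixes at t = 0, 10 and 20 seconds.
fix = nearest_fix([0, 10, 20], ["fix_a", "fix_b", "fix_c"], 12)
print(fix)  # fix_b
```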
The data recorded by the SenseCam can be downloaded onto a desktop or laptop computer, typically at the end of a day or week. Microsoft Research developed a simple viewer application that transfers the images in this way and then displays them. The basis of the viewer, which is designed to be very straightforward to use, is a window in which images are displayed and a simple VCR-type control which allows an image sequence to be played slowly (around 2 images/second), quickly (around 10 images/second), rewound and paused.
The fast-play option creates a kind of ‘flip-book’ movie effect – the entire event represented by the images is replayed as a time-compressed movie. Such rapid serial visual presentation (RSVP) techniques are well-studied in the psychological literature and are particularly suited to SenseCam images. It is possible to delete individual images from the sequence if they are badly framed or of poor quality. An additional option is provided to correct for the ‘fish-eye’ lens effect using an algorithm that applies an inverse model of the distortion.
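The fish-eye correction can be pictured as follows. A minimal sketch, assuming a simple one-parameter radial model: for each pixel of the corrected output image, compute where to sample in the distorted source image. The model and the k1 value are illustrative assumptions; the actual viewer applies a calibrated inverse model of the specific lens.

```python
import math

def source_pixel(x_out, y_out, cx, cy, k1=-1e-6):
    """Map a pixel in the corrected image to its sampling location in
    the distorted fish-eye image.

    Uses the illustrative radial model r_d = r_u * (1 + k1 * r_u**2);
    k1 < 0 models barrel (fish-eye) distortion. A full correction would
    evaluate this for every output pixel and resample the source image.
    """
    xu, yu = x_out - cx, y_out - cy      # coordinates relative to the centre
    ru = math.hypot(xu, yu)              # radius in the corrected image
    scale = 1.0 + k1 * ru * ru           # radial distortion factor
    return cx + xu * scale, cy + yu * scale

# The image centre is unmoved; corner content is sampled nearer the centre.
print(source_pixel(320, 240, 320, 240))  # (320.0, 240.0)
```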
It is also possible to import SenseCam image sequences into a more sophisticated application. MyLifeBits allows the large number of images generated daily to be easily searched and accessed. Dublin City University has developed a sophisticated SenseCam image browser which assists in splitting sequences of images into different events by automatically analysing the images and sensor data generated by SenseCam.
Early on in the development of SenseCam, we became aware of the work of the Memory Clinic and Memory Aids Clinic at Addenbrooke’s Hospital, Cambridge, UK. This is a centre of excellence in the UK for diagnosing various conditions that affect memory, and for working with patients to try and mitigate their symptoms. While there are established techniques to help people remember to do things (i.e. supplement their prospective memory), there are very few aids that complement autobiographical memory, i.e. support the remembrance of things done or experienced. The Memory Clinic was excited by the potential of SenseCam to help in this regard.
In around 2005 we started a trial with a 63-year-old patient from the clinic with amnesia resulting from a brain infection. The patient, Mrs. B, was given a SenseCam and asked to wear it whenever she anticipated a ‘significant event’ – the sort of event that she would like to remember (i.e. not just something routine or mundane).
After wearing SenseCam for the duration of such an event, Mrs. B would spend around one hour reviewing the images every two days, for a two-week period.
Without any aids to recall, Mrs. B would typically forget everything about an event within five days. However, during the course of this period of assisted recall using SenseCam, Mrs. B’s memory for the event steadily increased, and after two weeks she could recall around 80 percent of the event in question. What is perhaps more remarkable is that following the two-week period of aided recall, Mrs. B appears to have a lasting ability to recall the event even without reviewing the images.
The results of that initial trial with SenseCam are shown here:
Following the success of the first trial and the excitement it generated in both the research and clinical rehabilitation communities, Microsoft Research made SenseCam devices available to a large number of researchers and also initiated additional trials related to SenseCam’s use as a memory aid. Using SenseCam seems to be a very positive experience for most of the patients involved. Many have reported enjoying using it and reviewing images of their experiences, explaining that it makes them feel much more confident and relaxed. This is in stark contrast to the use of a written diary, which patients typically report has the opposite effect. Carers have also reported that they find SenseCam very beneficial. Here are some of the things that patients and their carers have said about SenseCam:
Microsoft has provided over $0.5M of funding, including SenseCam devices, software and support, to facilitate collaborative research projects with academic and clinical memory experts around the world. Some of these projects, which broadly aim to address specific research questions and further our understanding of how SenseCam appears to give such dramatic results in improving memory recall, are listed below:
In addition to the use of SenseCam as an aid for people with memory loss, the device has a number of other potential applications. In 2005, Microsoft provided some of the first SenseCams to a number of academic collaborators interested in the general area of ‘digital memories’, i.e. life-recording or life-logging. These projects applied SenseCam in a variety of ways. For example CLARITY, the Centre for Sensor Web Technologies at Dublin City University, Ireland, is working on systems that will automatically generate ‘landmark images’ through analysis of the large number of images and other logged data recorded by SenseCam. In this way a personalized memory experience of a visit to a museum, national monument, etc. can be automatically generated, based on data collected by SenseCams worn during the visit. The CLARITY Centre has also done a huge range of additional research related to SenseCam.
We are also working with Dr Charlie Foster and his colleagues from the Health Promotion Research Group at Oxford University, UK. This work, funded in part by the British Heart Foundation, looks at the relationship between the environment and physical activity – for example how effective the provision of cycle lanes is in encouraging people to leave their cars at home. SenseCam can be useful as a means to measure various aspects of the environment and the amount of exercise people take. The group is also using SenseCam as a tool to record food choices and eating habits.
We worked with the Universities of Nottingham and Bath, the BBC, BT and two small companies, Blast Theory and ScienceScope, as part of a project called Participate. The purpose of Participate is to design, develop and test the utility of novel, pervasive, lightweight and wearable technologies that support mass participation in science, education, art and community life. SenseCam has been used by a number of school children as part of this project. In a separate piece of work, SenseCam has been used in the classroom to enable teachers to create a log of their day, supporting various aspects of reflective practice and thereby enabling users of the device to analyse their day afterwards. SenseCam has also been used in an office environment to support studies of how office workers spend their day, and in particular how they manage to work simultaneously on different tasks.
Collaborations with a number of other researchers around the world to further explore yet more potential usages for SenseCam include:
How many images does the SenseCam take?
SenseCam typically takes a picture every 30 seconds, although this is user-configurable. The maximum rate of capture is one image every 5 seconds. With a 1 GB storage card fitted inside the device, it is capable of storing over 30,000 images, which in practical terms is a week or two’s worth of pictures. When the internal storage is full, the images must be downloaded to a PC.
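A back-of-the-envelope check of these figures: a 30-second timer alone yields 2,880 images per day, so a 30,000-image store covers roughly ten days of continuous wear (sensor-triggered extra shots shorten this in practice). The 1 GB figure is assumed fully usable for images here.

```python
SECONDS_PER_DAY = 24 * 60 * 60
interval_s = 30            # default timer interval
capacity_images = 30_000   # quoted capacity of the 1 GB card

images_per_day = SECONDS_PER_DAY // interval_s       # timer-triggered shots only
days_of_storage = capacity_images / images_per_day
avg_image_kib = (1 * 1024 * 1024) / capacity_images  # KiB per stored VGA JPEG

print(images_per_day)             # 2880
print(round(days_of_storage, 1))  # 10.4
print(round(avg_image_kib))       # 35
```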
How long does the battery last?
The rechargeable battery in the SenseCam will run continuously for around 24 hours when it’s capturing an image every 30 seconds or so. It takes around 3 hours to recharge using a USB connection to a PC or a mains adapter.
How do you use the sensor data?
Data from the various sensors in the SenseCam is collected continuously and recorded on the internal storage card. SenseCam also uses information from the sensors to trigger additional image capture, beyond the ‘image every 30 seconds’ which is captured in any case. For example, if the SenseCam has been stationary for some time, for example because it has been put down somewhere, the PIR sensor will be used to detect people coming into view and this will trigger additional photos to be taken. In some applications, for example our work with patients who have memory loss conditions, simple timed-triggering may well be sufficient.
The sensor data may also be used after the event to facilitate various types of automatic analysis of a sequence of images. A good example of this is the automatic landmark generation research.
Who invented SenseCam? Who worked on the project?
Whilst working at Microsoft Research, Lyndsay Williams initiated the first prototype of SenseCam in 2003, motivated by the idea of a ‘black box’ accident recorder for people. Since then a large number of people at Microsoft Research have evolved the project very significantly. Steve Hodges designed the SenseCam device and led an initiative to disseminate the devices around the world for research into a number of different aspects of memory, activity and nutrition monitoring, market research, and other topics. The device has also been commercialised by Vicon as the Revue and by the OMG group as the Autographer. Others involved in various aspects of hardware and software development, evaluation and experimentation include: Emma Berry, Georgina Browne, Alex Butler, Rowanne Fleck, Andrew Fogg, Richard Harper, Steve Hodges, Shahram Izadi, Matt Lee, Mike Massimi, Narinder Kapur, Dave Randall, Alban Rrustemi, James Scott, Abigail Sellen, Gavin Smyth, James Srinivasan, Trevor Taylor and Ken Woodberry. SenseCam and all associated intellectual property is owned by Microsoft Research.