SenseCam

Established: February 25, 2004

SenseCam is a wearable camera that takes photos automatically. It was originally conceived as a personal ‘black box’ accident recorder, but it soon became evident that reviewing previously recorded images tends to elicit quite vivid remembering of the original event. This effect has formed the basis of a great deal of research around the world using SenseCam, and the device is now available to buy as the Vicon Revue.

There is a lot of information about SenseCam on this website; highlights of the project are described below.

Introduction to SenseCam

SenseCam is a wearable digital camera that is designed to take photographs passively, without user intervention, while it is being worn. Unlike a regular digital camera or a cameraphone, SenseCam does not have a viewfinder or a display that can be used to frame photos. Instead, it is fitted with a wide-angle (fish-eye) lens that maximizes its field-of-view. This ensures that nearly everything in the wearer’s view is captured, which is important because, with no way to frame shots, a wearable camera with a regular lens would likely produce many badly framed and uninteresting images.

SenseCam also contains a number of different electronic sensors. These include light-intensity and light-color sensors, a passive infrared (body heat) detector, a temperature sensor, and a multiple-axis accelerometer. These sensors are monitored by the camera’s microprocessor, and certain changes in sensor readings can be used to automatically trigger a photograph to be taken.

For example, a significant change in light level or the detection of body heat in front of the camera can cause the camera to take a picture. Alternatively, the user may elect to set SenseCam to operate on a timer, for example taking a picture every 30 seconds. We have also experimented with incorporating audio-level detection, audio recording and GPS location sensing into SenseCam, although these do not feature in the current hardware.
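As a rough sketch of how timed and sensor-based triggering might be combined, the following Python loop fires the camera on a light-level change, a PIR detection, or a timer expiry. The `camera` and `sensors` objects and the threshold values are hypothetical stand-ins for illustration, not the real SenseCam firmware interface.

```python
import time

CAPTURE_INTERVAL_S = 30  # timed capture: one picture every 30 seconds
LIGHT_DELTA = 0.4        # fractional light-level change that triggers a capture

def capture_loop(camera, sensors):
    """Poll the sensors and trigger photos; camera.take_photo(),
    sensors.light_level() and sensors.pir_detected() are assumed,
    illustrative interfaces."""
    last_capture = 0.0
    last_light = sensors.light_level()
    while True:
        now = time.monotonic()
        light = sensors.light_level()
        # Capture on a significant change in light level (e.g. walking
        # through a doorway), on body heat in front of the camera (PIR),
        # or when the timer interval has elapsed.
        light_changed = abs(light - last_light) > LIGHT_DELTA * max(last_light, 1e-6)
        if light_changed or sensors.pir_detected() or now - last_capture >= CAPTURE_INTERVAL_S:
            camera.take_photo()
            last_capture = now
            last_light = light
        time.sleep(0.1)  # poll the sensors at roughly 10 Hz
```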

In our current design (v2.3), users typically wear the camera on a cord around their neck, although it would also be possible to clip it to pockets or belts, or to attach it directly to clothing. There are several advantages to using a neck-cord to wear the camera. First, it is reasonably stable when being worn, as it tends not to swing from side to side when the wearer is walking or sitting. Second, it is relatively comfortable to wear and easy to put on and take off. Third, when worn around the neck, SenseCam is reasonably close to the wearer’s eyeline and generates images taken from the wearer’s point of view – a ‘first-person’ view. Informal observations suggest that this results in images that are more compelling when subsequently replayed.

SenseCam takes pictures at VGA resolution (640×480 pixels) and stores them as compressed .jpg files on internal flash memory. We currently fit 1 GB of flash memory, which can typically store over 30,000 images. Most users seem happy with the relatively low-resolution images, suggesting that the time-lapse, first-person-viewpoint sequences represent a useful media type that sits somewhere between still images and video. This also reflects the fact that the images are used as memory supports rather than as rich media. Along with the images, SenseCam stores a log file, which records the other sensor data with timestamps. Additional user data, such as time-stamped GPS traces, may be used in conjunction with the SenseCam data via time-correlation.
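As an illustration of this kind of time-correlation, the sketch below pairs an image timestamp with the nearest fix in an independently recorded GPS trace. The `(datetime, latitude, longitude)` layout is an assumption for illustration, not a SenseCam file format.

```python
import bisect

def nearest_fix(gps_trace, image_time, max_gap_s=60):
    """Return the GPS fix closest in time to image_time, or None if the
    nearest fix is more than max_gap_s away. gps_trace is a time-ordered
    list of (datetime, latitude, longitude) tuples."""
    if not gps_trace:
        return None
    times = [t for t, _, _ in gps_trace]
    i = bisect.bisect_left(times, image_time)
    # The closest fix is either the one just before or just after image_time.
    candidates = [j for j in (i - 1, i) if 0 <= j < len(gps_trace)]
    best = min(candidates, key=lambda j: abs((times[j] - image_time).total_seconds()))
    if abs((times[best] - image_time).total_seconds()) > max_gap_s:
        return None
    return gps_trace[best]
```

Applied to every image timestamp in the SenseCam log, this would allow an image sequence to be plotted along the wearer’s route.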

Reviewing and Sharing SenseCam Images

The data recorded by the SenseCam can be downloaded onto a desktop or laptop computer, typically at the end of a day or week. Microsoft Research developed a simple viewer application that can be used to transfer the images in this way and then display them. The basis of the viewer, which is designed to be very straightforward to use, is a window in which images are displayed, plus a simple VCR-style control that allows an image sequence to be played slowly (around 2 images/second) or quickly (around 10 images/second), rewound and paused.

The fast-play option creates a kind of ‘flip-book’ movie effect – the entire event represented by the images is replayed as a time-compressed movie. Such rapid serial visual presentation (RSVP) techniques are well studied in the psychological literature and are particularly suited to SenseCam images. It is possible to delete individual images from the sequence if they are badly framed or of poor quality. An additional option is provided to correct for the ‘fish-eye’ lens effect using an algorithm that applies an inverse model of the distortion.
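The viewer’s actual distortion model is not described here, but the same kind of correction can be sketched with OpenCV’s generic fisheye model. The intrinsic matrix `K` and distortion coefficients `D` below are placeholder guesses, not SenseCam’s real calibration.

```python
import cv2
import numpy as np

def defish(image_path):
    """Undo fish-eye distortion using OpenCV's generic fisheye model."""
    img = cv2.imread(image_path)  # SenseCam images are 640x480
    h, w = img.shape[:2]
    # Placeholder intrinsics guessed from the image size; a real deployment
    # would calibrate the camera to recover these values.
    K = np.array([[w / 3.0, 0.0, w / 2.0],
                  [0.0, w / 3.0, h / 2.0],
                  [0.0, 0.0, 1.0]])
    D = np.array([0.1, 0.01, 0.0, 0.0])  # fisheye coefficients k1..k4
    return cv2.fisheye.undistortImage(img, K, D, Knew=K)
```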

It is also possible to import SenseCam image sequences into a more sophisticated application. MyLifeBits, for example, allows the large number of images generated daily to be easily searched and accessed. Dublin City University have developed a sophisticated SenseCam image browser which assists in splitting sequences of images into distinct events by automatically analysing the images and sensor data generated by SenseCam, as sketched below.
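As a loose sketch of the idea behind such event segmentation – not DCU’s published algorithm – one could flag an event boundary wherever consecutive readings in the sensor log change sharply:

```python
def segment_events(log, light_jump=0.5, motion_jump=1.0):
    """log is a time-ordered list of dicts with normalised 'light' and
    'accel' (accelerometer magnitude) readings; returns the indices where
    a new event is taken to begin. Field names and thresholds are
    illustrative assumptions."""
    boundaries = []
    for i in range(1, len(log)):
        d_light = abs(log[i]["light"] - log[i - 1]["light"])
        d_accel = abs(log[i]["accel"] - log[i - 1]["accel"])
        if d_light > light_jump or d_accel > motion_jump:
            boundaries.append(i)
    return boundaries
```

Real systems combine further cues, such as image similarity, but an abrupt change in the sensed environment is the core intuition.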

Using SenseCam to Alleviate Memory Loss

Early on in the development of SenseCam, we became aware of the work of the Memory Clinic and Memory Aids Clinic at Addenbrooke’s Hospital, Cambridge, UK. This is a centre of excellence in the UK for diagnosing conditions that affect memory, and for working with patients to try to mitigate their symptoms. While there are established techniques to help people remember to do things (i.e. to supplement their prospective memory), there are very few aids that complement autobiographical memory, i.e. that support the remembrance of things done or experienced. The Memory Clinic was excited by the potential of SenseCam to help in this regard.

In around 2005 we started a trial with a 63-year-old patient from the clinic with amnesia resulting from a brain infection. The patient, Mrs. B, was given a SenseCam and asked to wear it whenever she anticipated a ‘significant event’ – the sort of event that she would like to remember (i.e. not just something routine or mundane).

After wearing SenseCam for the duration of such an event, Mrs. B would spend around one hour reviewing the images every two days, for a two-week period.

Without any aids to recall, Mrs. B would typically forget everything about an event within five days. However, over the course of this period of assisted recall using SenseCam, Mrs. B’s memory of the event steadily improved, and after two weeks she could recall around 80 percent of it. What is perhaps more remarkable is that following the two-week period of aided recall, Mrs. B appears to have a lasting ability to recall the event even without reviewing the images.

The results of that initial trial with SenseCam are shown here:

[Figure: Mrs. B’s recall of an event over the two-week period of SenseCam-assisted review]

Following the success of the first trial and the excitement it generated in both the research and clinical rehabilitation communities, Microsoft Research made SenseCam devices available to a large number of researchers and also initiated additional trials related to SenseCam’s use as a memory aid. Using SenseCam seems to be a very positive experience for most of the patients involved. Many have reported enjoying using it and reviewing images of their experiences, explaining that it makes them feel much more confident and relaxed. This is in stark contrast to the use of a written diary, which patients typically report has the opposite effect. Carers have also reported that they find SenseCam very beneficial. Here are some of the things that patients and their carers have said about SenseCam:

  • “I am less anxious, because it helps to settle, or verify, what actually happened…”
  • “It has enormous potential as a memory aid and has been a great success for us personally”
  • “Looking at the images is definitely helpful… they cue memories of things I would normally just forget”
  • “SenseCam is a Godsend… everyone should have one!”
  • I am “more relaxed socially and less anxious”
  • “Sharing experiences again is a sheer pleasure”

Microsoft has provided over $500,000 of funding, including SenseCam devices, software and support, to facilitate collaborative research projects with academic and clinical memory experts around the world. Some of these projects, which broadly aim to address specific research questions and to further our understanding of how SenseCam produces such dramatic improvements in memory recall, are listed below:

  • SenseCam in the study and support of memory in Transient Epileptic Amnesia – Professor Adam Zeman, University of Exeter, UK
  • SenseCam-facilitated recollection in patients with dementia – Professor Phil Barnard, Medical Research Council, Cambridge, UK, & Dr Linda Clare, University of Bangor, Wales, UK
  • Why and how are SenseCam movies such a powerful aid to memory? Locating the brain basis of memory improvement – Professor Roberto Cabeza, Duke University, US, and Professor Martin Conway, University of Leeds, UK
  • Enhancing quality of life in Alzheimer’s Disease with automatic SenseCam records of days in one’s life – Professor Ron Baecker, University of Toronto, and Professor Yaakov Stern, Columbia Medical School, US
  • Evaluation of SenseCam as a tool for aiding executive self-monitoring and control of emotion and behaviour after brain injury – Fergus Gracey, The Oliver Zangwill Centre for Neuropsychological Rehabilitation, Ely, UK
  • Evaluation of SenseCam as a retrospective memory compensation aid following acquired brain injury – David Winkelaar, Psychologist, The Halvar Jonson Centre for Brain Injury, Ponoka, Alberta, Canada
  • SenseCam as a Tool to Study Memory Processes in Autobiographical Memory – Professor William F. Brewer, Professor Aaron S. Benjamin, and Jason R. Finley, Department of Psychology, University of Illinois, Urbana-Champaign, US

Other Applications for SenseCam

In addition to its use as an aid for people with memory loss, the device has a number of other potential applications. In 2005, Microsoft provided some of the first SenseCams to a number of academic collaborators interested in the general area of ‘digital memories’, i.e. life-recording or life-logging. These projects applied SenseCam in a variety of ways. For example, CLARITY, the Centre for Sensor Web Technologies at Dublin City University, Ireland, is working on systems that will automatically generate ‘landmark images’ through analysis of the large number of images and other logged data recorded by SenseCam. In this way a personalized memory experience of a visit to a museum, national monument, etc. can be generated automatically, based on data collected by SenseCams worn during the visit. The CLARITY Centre has also carried out a wide range of additional research related to SenseCam.

We are also working with Dr Charlie Foster and his colleagues from the Health Promotion Research Group at Oxford University, UK. This work, funded in part by the British Heart Foundation, looks at the relationship between the environment and physical activity – for example, how effective the provision of cycle lanes is in encouraging people to leave their cars at home. SenseCam is useful as a means of measuring various aspects of the environment and the amount of exercise people take. The group is also using SenseCam as a tool to record food choices and eating habits.

We worked with the Universities of Nottingham and Bath, the BBC, BT and two small companies, Blast Theory and ScienceScope, on a project called Participate. Its purpose was to design, develop and test the utility of novel, pervasive, lightweight and wearable technologies that support mass participation in science, education, art and community life; SenseCam was used by a number of school children as part of this project. In a separate piece of work, SenseCam has been used in the classroom to enable teachers to create a log of their day, supporting various aspects of reflective practice and thereby enabling users of the device to analyse their day afterwards. SenseCam has also been used in an office environment to support studies of how office workers spend their day, and in particular how they manage to work simultaneously on different tasks.

We have also collaborated with a number of other researchers around the world to explore further potential uses for SenseCam, including:

  • As a tool to assess accessibility issues encountered by wheelchair users.
  • To coordinate disaster response by recording visual information encountered by those responding to disasters, people preoccupied with providing hands-on help.
  • As an automatic diary that doesn’t require expensive, intrusive recording equipment or restrict a user’s activities.
  • As a non-intrusive market research tool.
  • To monitor physiological data to help patients understand the sequence of events that precedes a period of intense anxiety or anger.
  • To monitor lighting conditions in schools and to learn how they affect students.
  • To capture personal experiences for sharing with others.

More Information and Downloads

There is an active SenseCam research community, which meets annually at the SenseCam Symposium. For more information please visit the SenseCam wiki; there is also a Wikipedia page on SenseCam. SenseCam is on display at the London Science Museum in their ‘Who am I?’ gallery, and videos describing SenseCam are available.

The SenseCam is available to buy as the Vicon Revue. In addition to the software for viewing images that Vicon supply, the original Microsoft Research Image Viewer is available, as is a more advanced viewer from Dublin City University.

Images of the SenseCam device and sample images taken with a SenseCam are available to download. We also have a selection of quotes from some of our collaborators.

For more information about SenseCam research please contact us at sensecam@microsoft.com

Q&A

How many images does the SenseCam take?

SenseCam typically takes a picture every 30 seconds, although this is user-configurable. The maximum rate of capture is one image every 5 seconds. With 1 GB of storage fitted inside the device, it is capable of storing over 30,000 images, which in practical terms is a week or two’s worth of pictures. When the internal storage is full, the images must be downloaded to a PC.
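As a rough check on those figures: assuming around 16 waking hours of wear per day, one image every 30 seconds yields roughly 1,920 images per day, so 30,000 images corresponds to about two weeks of use. At the maximum rate of one image every 5 seconds (roughly 11,500 images per day), the same storage would fill in two to three days.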

How long does the battery last?

The rechargeable battery in the SenseCam will run continuously for around 24 hours when it’s capturing an image every 30 seconds or so. It takes around 3 hours to recharge using a USB connection to a PC or a mains adapter.

How do you use the sensor data?

Data from the various sensors in the SenseCam is collected continuously and recorded on the internal storage card. SenseCam also uses information from the sensors to trigger additional image capture, beyond the ‘image every 30 seconds’ which is captured in any case. For example, if the SenseCam has been stationary for some time, perhaps because it has been put down somewhere, the PIR sensor will be used to detect people coming into view, and this will trigger additional photos to be taken. In some applications, for example our work with patients who have memory-loss conditions, simple timed triggering may well be sufficient.

The sensor data may also be used after the event to facilitate various types of automatic analysis of a sequence of images. A good example of this is the automatic landmark-generation research described above.

Who invented SenseCam? Who worked on the project?

Whilst working at Microsoft Research, Lyndsay Williams built the first prototype of SenseCam in 2003, motivated by the idea of a ‘black box’ accident recorder for people. Since then, a large number of people at Microsoft Research have developed the project very significantly. Steve Hodges designed the SenseCam device that has been used around the world for research into a number of different aspects of memory, activity and nutrition monitoring, market research, and other topics; this device has also been commercialised by Vicon as the Revue. Others involved in various aspects of hardware and software development, evaluation and experimentation include: Emma Berry, Georgina Browne, Alex Butler, Rowanne Fleck, Andrew Fogg, Richard Harper, Shahram Izadi, Matt Lee, Mike Massimi, Narinder Kapur, Dave Randall, Alban Rrustemi, James Scott, Abigail Sellen, Gavin Smyth, James Srinivasan, Trevor Taylor and Ken Wood. SenseCam and all associated intellectual property is owned by Microsoft Research.


Publications by Researchers outside of Microsoft

  1. Paul Kelly, Aiden Doherty, Emma Berry, Steve Hodges, Alan M Batterham and Charlie Foster. Can we use digital life-log images to investigate active and sedentary travel behaviour? Results from a pilot study. International Journal of Behavioral Nutrition and Physical Activity. May 2011.
  2. St Jacques, PL; Conway, MA; Lowder, MW; Cabeza, R. Watching my mind unfold versus yours: an fMRI study using a novel camera technology to examine neural differences in self-projection of self versus other perspectives. Journal of cognitive neuroscience 2011;23(6):1275-84.
  3. Aiden R. Doherty, Kieron O’Hara, Kiyoharu Aizawa, Niamh Caprani and Alan F. Smeaton (Eds.), Proceedings of the 2nd Workshop on Information Access to Personal Media Archives. 18 April 2011, Dublin, Ireland, ISBN 1872-327-974.
  4. Loveday, C., Conway, M.A., Using SenseCam with an Amnesic Patient: Accessing Inaccessible Everyday Memories. Memory 2011
  5. Fionnuala C. Murphy, Philip J. Barnard, Kayleigh A. M. Terry, Maria Teresa Carthery-Goulart & Emily A. Holmes, SenseCam, imagery and bias in memory for wellbeing, Memory, 2011.
  6. F. Milton, N. Muhlert, C. R. Butler, A. Smith, A. Benattayallah & A. Z. Zeman, An fMRI study of long-term everyday memory using SenseCam, Memory, 2011.
  7. Kiernan Burke, Sue Franklin & Olive Gowan, Passive imaging technology in aphasia therapy, Memory, 2011.
  8. Jason R. Finley, William F. Brewer & Aaron S. Benjamin, The effects of end-of-day picture review and a sensor-based picture capture procedure on autobiographical memory using SenseCam, Memory, 2011.
  9. Philip J. Barnard, Fionnuala C. Murphy, Maria Teresa Carthery-Goulart, Cristina Ramponi & Linda Clare, Exploring the basis and boundary conditions of SenseCam-facilitated recollection, Memory, 2011.
  10. Peggy L. St. Jacques, Martin A. Conway & Roberto Cabeza, Gender differences in autobiographical memory for everyday events: Retrieval elicited by SenseCam images versus verbal cues, Memory, 2011.
  11. Aiden R. Doherty, Chris J. A. Moulin & Alan F. Smeaton, Automatically assisting human memory: A SenseCam browser, Memory, 2011.
  12. Katalin Pauly-Takacs, Chris J. A. Moulin & Edward J. Estlin, SenseCam as a rehabilitation tool in a child with anterograde amnesia, Memory, 2011.
  13. Rob Brindley, Andrew Bateman & Fergus Gracey, Exploration of use of SenseCam to support autobiographical memory retrieval within a cognitive-behavioural therapeutic intervention following acquired brain injury, Memory, 2011.
  14. Byrne, Daragh and Kelliher, Aisling and Jones, Gareth J.F. Life editing: Third-party perspectives on lifelog content. In: The ACM CHI Conference on Human Factors in Computing Systems (CHI 2011), May 2011, Vancouver, Canada.
  15. Qiu, Zhengwei and Doherty, Aiden R. and Gurrin, Cathal and Smeaton, Alan F. Mining user activity as a context source for search and retrieval. In: STAIR’11: International Conference on Semantic Technology and Information Retrieval, 28-29 June 2011, Kuala Lumpur, Malaysia.
  16. Chen, Yi and Jones, Gareth J.F. and Ganguly, Debasis Segmenting and summarizing general events in a long-term lifelog. In: The 2nd Workshop Information Access for Personal Media Archives (IAPMA) at ECIR 2011, 18-21 April 2011, Dublin, Ireland.
  17. Sutton, Jon. Claire’s life, 9:53-10:42. The Psychologist, 01/02/2011.
  18. Rowanne Fleck, Geraldine Fitzpatrick, Reflecting on reflection: framing a design landscape. OZCHI ’10 – Proceedings of the 22nd Conference of the Computer-Human Interaction Special Interest Group of Australia on Computer-Human Interaction.
  19. St. Jacques PL, Conway MA, Lowder M, Cabeza R. Watching my mind unfold vs. yours: An fMRI study using a novel camera technology to examine the neural correlates of self-projection of self vs. other. Journal of Cognitive Neuroscience, 23, 1275-1284. doi:10.1162/jocn.2010.21518.
  20. Eben Harrell, Remains of the Day. Time Magazine, October 2010.
  21. Jones, Gareth J.F. and Byrne, Daragh and Hughes, Mark and O’Connor, Noel E. and Salway, Andrew. Automated annotation of landmark images using community contributed datasets and web resources. In: The 5th International Conference on Semantic and Digital Media Technologies (SAMT 2010), 1-3 Dec. 2010, Saarbrücken, Germany.
  22. Doherty, Aiden R. and Caprani, Niamh and O Conaire, Ciaran and Kalnikaite, Vaiva and Gurrin, Cathal and O’Connor, Noel and Smeaton, Alan F. Passively recognising human activities through lifelogging. Computers in Human Behavior, 27 (5). pp. 1948-1958. ISSN 0747-5632
  23. Doherty, Aiden R. and Kelly, Philip and Smeaton, Alan F. and O’Flynn, Brendan and Curran, Padraig and O’Mathuna, Cian and O’Connor, Noel E. Effects of environmental colour on mood: a wearable life colour capture device. In: ACM Multimedia 2010, 25-29 October 2010, Florence, Italy. ISBN 978-1-60558-933-6
  24. Doherty, Aiden R. and Qiu, Zhengwei and Foley, Colum and Lee, Hyowon and Gurrin, Cathal and Smeaton, Alan F. Green multimedia: informing people of their carbon footprint through two simple sensors. In: ACM Multimedia 2010, 25-29 October 2010, Florence, Italy. ISBN 978-1-60558-933-6
  25. Caprani N, Gurrin, C and O’Connor N.E. I Like to Log: A Questionnaire Study towards Accessible Lifelogging for Older Users. In ASSETS 2010: The 12th International ACM SIGACCESS Conference on Computers and Accessibility, Orlando, Fl, 25-27 October, 2010.
  26. Chen, Yi and Kelly, Liadh and Jones, Gareth J.F. Supporting episodic memory from personal lifelog archives using SenseCam and contextual cues. In: SenseCam Symposium 2010, 16-17 September 2010, Dublin, Ireland.
  27. Kelly, Philip and Kumar, Anil and Doherty, Aiden R. and Lee, Hyowon and Smeaton, Alan F. and Gurrin, Cathal and O’Connor, Noel E. The colour of life: interacting with SenseCam images on large multi-touch display walls. In: SenseCam 2010 – second annual SenseCam symposium, 16-17 September 2010, Dublin, Ireland. ISBN 1872-327-915
  28. Kelly L and Jones G.J.F. An Exploration of the Utility of GSR in Locating Events from Personal Lifelogs for Reflection. In Proceedings of the 4th Irish Human Computer Interaction Conference (iHCI2010), Dublin, Ireland, 2-3 September 2010.
  29. Chen Y and Jones G.J.F. Augmenting Human Memory using Personal Lifelogs. In Proceedings of the First Augmented Human International Conference (AH’10), Megève, France, April 2010.
  30. Byrne D, Kelly L and Jones G.J.F. Multiple Multimodal Mobile Devices: Lessons Learned from Engineering Lifelog Solutions. In: Handbook of Research on Mobile Software Engineering: Design, Implementation and Emergent Applications, IGI Publishing, 2010.
  31. Kelly, P and Foster, C. A new technology for measuring our journeys: Results from a pilot study. Early Career Investigator Prize. 2010 Annual Conference of the International Society of Behavioral Nutrition and Physical Activity, Minneapolis, 9-12 June, 2010.
  32. Marcu, G. and Hayes, G. R. Use of a Wearable Recording Device in Therapeutic Interventions for Children with Autism. In Proc Workshop on Interactive Systems in Healthcare (WISH). Atlanta, GA. Apr 10-11, 2010. pp 113-116. Dealer Analysis Group, ISBN 098262848X.
  33. Caprani, N., Doherty, A., Lee, H., Smeaton, A.F., O’Connor, N. and Gurrin, C. Designing a Touch-Screen SenseCam Browser to Support an Aging Population. CHI 2010 – ACM Conference on Human Factors in Computing Systems (Work-in-Progress), Atlanta, GA, 10-15 April 2010.
  34. Doherty, A.R. and Smeaton, A.F. Automatically augmenting lifelog events using pervasively generated content from millions of people. Sensors, 10 (3). pp. 1423-1446, 2010.
  35. Kelly L and Jones G.J.F. Examining the Utility of Affective Response in Search of Personal Lifelogs. 5th Workshop on Emotion in HCI, British HCI Conference 2009, Cambridge, U.K., 1 September 2009.
  36. Chen Y and Jones G. An Event-Based Interface to Support Personal Lifelog Search. HCI International 2009 – 13th International Conference on Human-Computer Interaction, San Diego, CA, 19-24 July 2009.
  37. Fleck, R Supporting Reflection on Experience with SenseCam, Workshop on Designing for Reflection on Experience, CHI2009, Boston.
  38. Aiden R. Doherty and Cathal Gurrin and Alan F. Smeaton. Utilising contextual memory retrieval cues and the ubiquity of the cell phone to review lifelogged physiological activities. In: EIMM 2009 – 1st ACM International Workshop on Events in Multimedia, pp49-46, 23 October 2009, Beijing, China.
  39. Daragh Byrne, Aiden R. Doherty, Cees G.M. Snoek, Gareth J.F. Jones, Alan F. Smeaton. Everyday Concept Detection in Visual Lifelogs: Validation, Relationships and Trends, Multimedia Tools and Applications, ISSN 1573-7721 (In Press), 2009.
  40. Kelly, L., Byrne, D. and Jones, G.J.F. The role of places and spaces in lifelog retrieval. PIM 2009 – Personal Information Management, Vancouver, Canada, 7-8 November, 2009.
  41. Byrne, D. and Jones, G.J.F. Exploring narrative presentation for large multimodal lifelog collections through card sorting. ICIDS 2009 – Second International Conference on Interactive Digital Storytelling, Guimarães, Portugal, 9-11 December 2009.
  42. Fleck, R. & Fitzpatrick, G., Teachers’ and Tutors’ Social Reflection around SenseCam Images. International Journal of Human Computer Studies 67, pp1027-1036, 2009.
  43. Doherty, Aiden R. and Gurrin, Cathal and Smeaton, Alan F. (2009) An investigation into event decay from large personal media archives. In: EIMM 2009 – 1st ACM International Workshop on Events in Multimedia, 23 October 2009, Beijing, China. (In Press).
  44. Laursen L. NEUROSCIENCE: A Memorable Device. Science 13 March 2009: 1422-1423.
  45. Kelly L. Searching Heterogeneous Human Digital Memory Archives
  46. Ljungblad S. Passive photography from a creative perspective: “If I would just shoot the same thing for seven days, it’s like… What’s the point?” Conference on Human Factors in Computing Systems archive Proceedings of the 27th international conference on Human factors in computing systems.
  47. Blighe, Michael Organising and structuring a visual diary using visual interest point detectors. PhD thesis, Dublin City University. March 2009.
  48. Doherty, Aiden R. Providing effective memory retrieval cues through automatic structuring and augmentation of a lifelog of images. PhD thesis, Dublin City University. March 2009.
  49. O Conaire C, Blighe M and O’Connor N. SenseCam Image Localisation using Hierarchical SURF Trees. MMM 2009 – 15th international Multimedia Modeling Conference, Sophia-Antipolis, France, 7-9 January 2009.
  50. Aiden R. Doherty and Alan F. Smeaton. Utilising Wearable Sensor Technology to Provide Effective Memory Cues. European Research Consortium for Informatics and Mathematics ERCIM NEWS 76 January 2009.
  51. Kumpulainen S, Jarvelin K, Serola S, Doherty A.R, Byrne D, Smeaton A.F, and Jones G. Data Collection Methods for Analyzing Task-Based Information Access in Molecular Medicine. MobiHealthInf 2009 – 1st International Workshop on Mobilizing Health Information to Support Healthcare-related Knowledge Work, Porto, Portugal, 16 January 2009.
  52. Puangpakisiri, W.; Yamasaki, T.; Aizawa, K. High level activity annotation of daily experiences by a combination of a wearable device and Wi-Fi based positioning system. IEEE International Conference on Multimedia and Expo, 2008.
  53. Melissa Bowen, An investigation of the therapeutic efficacy of SenseCam as an autobiographical memory aid in a patient with medial temporal lobe amnesia. MSc Thesis, University of Exeter, September 2008.
  54. Byrne D, Doherty A.R., Snoek C.G.M., Jones G.F., and Smeaton A.F. Validating the Detection of Everyday Concepts in Visual Lifelogs. SAMT 2008 – 3rd International Conference on Semantic and Digital Media Technologies, Koblenz, Germany, 3-5 December 2008.
  55. Lee, M. L. and Dey, A. K. 2008. Lifelogging memory appliance for people with episodic memory impairment. In Proceedings of the 10th international Conference on Ubiquitous Computing (Seoul, Korea, September 21 – 24, 2008). UbiComp ’08, vol. 344. ACM, New York, NY, 44-53.
  56. Byrne, Daragh and Lee, Hyowon and Jones, Gareth J.F. and Smeaton, Alan F. (2008) Guidelines for the presentation and visualisation of lifelog content. In: iHCI 2008 – Irish Human Computer Interaction Conference 2008, 19-20 September 2008, Cork, Ireland.
  57. Matthew L. Lee, Anind K. Dey, Wearable experience capture for episodic memory support. pp.107-108, 2008 12th IEEE International Symposium on Wearable Computers, 2008.
  58. Doherty A.R, Ó Conaire C, Blighe M, Smeaton A.F, and O’Connor N. Combining Image Descriptors to Effectively Retrieve Events from Visual Lifelogs. MIR 2008 – ACM International Conference on Multimedia Information Retrieval, Vancouver, Canada, 30-31 October 2008.
  59. Byrne, D. and Jones, G. J. 2008. Towards computational autobiographical narratives through human digital memories. In Proceeding of the 2nd ACM international Workshop on Story Representation, Mechanism and Context (Vancouver, British Columbia, Canada, October 31 – 31, 2008). SRMC ’08. ACM, New York.
  60. Bowen, M. An investigation of the therapeutic efficacy of SenseCam as an autobiographical memory aid in a patient with temporal lobe amnesia. University of Exeter MSc project. October 2008.
  61. Byrne D, Doherty A.R, Gareth J.F. Jones, Smeaton A.F, Kumpulainen S and Jarvelin K. The SenseCam as a Tool for Task Observation. HCI 2008 – 22nd BCS HCI Group Conference, Liverpool, U.K., 1-5 September 2008.
  62. Blighe M, Doherty A.R, Smeaton A.F and O’Connor N. Keyframe Detection in Visual Lifelogs. PETRA 2008 – 1st International Conference on Pervasive Technologies Related to Assistive Environments, Athens, Greece, 15-19 July 2008.
  63. Doherty A.R., Byrne D, Smeaton A.F., Jones G.J.F. and Hughes M. Investigating Keyframe Selection Methods in the Novel Domain of Passively Captured Visual Lifelogs. CIVR 2008 – ACM International Conference on Image and Video Retrieval, Niagara Falls, Canada, 7-9 July 2008.
  64. Blighe M and O’Connor N. MyPlaces: Detecting Important Settings in a Visual Diary. CIVR 2008 – ACM International Conference on Image and Video Retrieval, Niagara Falls, Canada, 7-9 July 2008.
  65. Blighe M, Sav S, Lee H, and O’Connor N. Mo Músaem Fíorúil: A Web-based Search and Information Service for Museum Visitors. ICIAR 2008 – International Conference on Image Analysis and Recognition, Povoa de Varzim, Portugal, 25-27 June 2008.
  66. Doherty A.R. and Smeaton A.F. Combining Face Detection and Novelty to Identify Important Events in a Visual LifeLog. CIT 2008 – IEEE International Conference on Computer and Information Technology, Workshop on Image- and Video-based Pattern Analysis and Applications, Sydney, Australia, 8-11 July 2008.
  67. Doherty A.R. and Smeaton A.F. Automatically Segmenting Lifelog Data Into Events. WIAMIS 2008 – 9th International Workshop on Image Analysis for Multimedia Interactive Services, Klagenfurt, Austria, 7-9 May 2008.
  68. Blighe M, O’Connor N, Rehatschek H and Kienast G. Identifying Different Settings in a Visual Diary. WIAMIS 2008 – 9th International Workshop on Image Analysis for Multimedia Interactive Services, Klagenfurt, Austria, 7-9 May 2008.
  69. Fuller M, Kelly L and Jones G. Applying Contextual Memory Cues for Retrieval from Personal Information Archives. PIM 2008 – Proceedings of Personal Information Management, Workshop at CHI 2008, Florence, Italy, 5-6 April 2008.
  70. Fleck, R. Exploring the Potential of Passive Image Capture to Support Reflection on Experience, DPhil Thesis and summary of thesis, Department of Psychology, University of Sussex, UK, Jan 2008.
  71. Gurrin C, Smeaton A.F, Byrne D, O’Hare N, Jones G and O’Connor N. An Examination of a Large Visual Lifelog. AIRS 2008 – Asia Information Retrieval Symposium, Harbin, China, 16-18 January 2008.
  72. Matthew L. Lee , Anind K. Dey, Using lifelogging to support recollection for people with episodic memory impairment and their caregivers. Proceedings of the 2nd International Workshop on Systems and Networking Support for Health Care and Assisted Living Environments, 2008.
  73. Lee H, Smeaton A.F, O’Connor N, Jones G, Blighe M, Byrne D, Doherty A.R, and Gurrin C. Constructing a SenseCam Visual Diary as a Media Process. Multimedia Systems Journal, Special Issue on Canonical Processes of Media Production, 2008.
  74. Deborah Barreau, Abe Crystal, Jane Greenberg, Anuj Sharma, Michael Conway, John Oberlin, Michael Shoffner and Stephen Seiberling. Augmenting Memory for Student Learning: Designing a Context-Aware Capture System for Biology Education, Proceedings of the American Society for Information Science and Technology, Volume 43, Issue 1, Pages 251–251, October 2007.
  75. Byrne D, Lavelle B, Doherty A, Jones G and Smeaton A.F. Using Bluetooth and GPS Metadata to Measure Event Similarity in SenseCam Images. Accepted for presentation at IMAI’07 – 5th International Conference on Intelligent Multimedia and Ambient Intelligence, Salt Lake City, Utah, 18-24 July, 2007.
  76. Doherty A, Smeaton A.F, Lee K, and Ellis D. Multimodal Segmentation of Lifelog Data. Accepted for presentation at 8th RIAO Conference – Large-Scale Semantic Access to Content (Text, Image, Video and Sound), Pittsburgh, PA, 30 May – 1 June, 2007.
  77. Kelly L. The Information Retrieval Challenge of Human Digital Memories. BCS IRSG Symposium: Future Directions in Information Access 2007, Glasgow, Scotland, 28-29 August 2007.
  78. Byrne, D. SenseCam Flow Visualisation for LifeLog Image Browsing. BCS IRSG Informer, Spring Issue (No. 22).
  79. O’Conaire C, O’Connor N, Smeaton A.F. and Jones G. Organising a daily Visual Diary Using Multi-Feature Clustering. SPIE Electronic Imaging – Multimedia Content Access: Algorithms and Systems (EI121), San Jose, CA, 28 January – 1 February 2007.
  80. Matthew L. Lee and Anind K. Dey, Providing good memory cues for people with episodic memory impairment, ACM SIGACCESS Conference on Assistive Technologies Archive, Proceedings of the 9th international ACM SIGACCESS Conference on Computers and Accessibility, 2007.
  81. Smeaton A.F, O’Connor N, Jones G, Gaughan G, Lee H and Gurrin C. SenseCam Visual Diaries Generating Memories for Life. Poster presented at the Memories for Life Colloquium 2006, British Library Conference Centre, London, U.K., 12 December 2006.
  82. Smeaton A.F, Diamond D and Smyth B. Computing and Material Sciences for LifeLogging. Presented at the Memories for Life Network Workshop 2006, British Library Conference Centre, London, U.K., 11 December 2006.
  83. Smeaton A.F., Content Vs. Context For Multimedia Semantics: The Case of SenseCam Image Structuring. SAMT 2006 – Proceedings of The First International Conference on Semantics And Digital Media Technology. Lecture Notes in Computer Science (LNCS), Athens, Greece, 6-8 December 2006.
  84. Fleck, R & Fitzpatrick, G., (2006). Supporting reflection with passive image capture. Supplementary proceedings of COOP’06, Carry-le-Rouet, France. pp.41-48.
  85. Hyowon Lee, Alan F. Smeaton, Noel E. O’Connor and Gareth J.F. Jones. Adaptive Visual Summary of LifeLog Photos for Personal Information Management. AIR 2006 – First International Workshop on Adaptive Information Retrieval, Glasgow, U.K., 14 October 2006. (poster)
  86. A. Tjoa, A. Andjomshoaa, S. Karim. Exploiting SenseCam for Helping the Blind in Business Negotiations, Computers Helping People with Special Needs, Springer, p. 1147 – 1154, 2006.
  87. Blighe M, Le Borgne H, O’Connor N, Smeaton A.F and Jones G. Exploiting Context Information to aid Landmark Detection in SenseCam Images. ECHISE 2006 – 2nd International Workshop on Exploiting Context Histories in Smart Environments – Infrastructures and Design, 8th International Conference of Ubiquitous Computing (Ubicomp 2006), Orange County, CA, 17-21 September 2006.
  88. Seungwon Yang, Ben Congleton, George Luc, Manuel A. Pérez-Quiñones, Edward A. Fox, Demonstrating the use of a SenseCam in two domains, International Conference on Digital Libraries archive, Proceedings of the 6th ACM/IEEE-CS joint conference on Digital libraries, 2006.
  89. Ashbrook, D.; Lyons, K.; Clawson, J. Capturing Experiences Anytime, Anywhere. IEEE Pervasive Computing Magazine, Volume 5, Issue 2, April-June 2006.
  90. Fleck, R & Fitzpatrick, G. Supporting reflection with passive image capture. Supplementary proceedings of COOP’06, Carry-le-Rouet, France. pp.41-48. 2006.
  91. Lee, M & Dey, A. Capturing and Reviewing Context in Memory Aids. April 2006.
  92. Cherry, S. Total recall life recording software, IEEE Spectrum, volume 42, pp24-30, November 2005.
  93. Fleck, R. Exploring SenseCam to inform the design of image capture and replay devices for supporting reflection In E. Martinez-Miron and D. Brewster (Eds) Advancing the potential for communication, learning and interaction, 8th Human Centred Technology Postgraduate Workshop, Department of Informatics, University of Sussex, Brighton, UK, 2005.