We propose a method to extract key sound segments from audio recorded over the course of a day, providing segments that can be used to aid memory. To identify the important parts of the sound data, the proposed method exploits human behavior through a multisensing approach. To evaluate its performance, we conducted experiments on sound, acceleration, and global positioning system (GPS) data collected from five participants over approximately two weeks. The experimental results are summarized as follows: (1) diverse sounds can be extracted by dividing a day into scenes using the acceleration data; (2) sound recorded in unusual places is preferred over sound recorded in usual places; and (3) speech is preferred over nonspeech sound.
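The scene division in result (1) can be illustrated with a minimal sketch. This is not the authors' implementation; the window size, activity threshold, and the use of a simple windowed-mean of acceleration magnitude are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code): divide a day-long
# accelerometer trace into "scenes" by thresholding movement energy.
# Window size and threshold are assumed values.

def segment_scenes(magnitudes, window=60, threshold=0.5):
    """Return (start, end, label) scenes over sample indices.

    magnitudes: per-sample acceleration magnitudes (gravity removed).
    A scene boundary is placed wherever the windowed mean of the
    magnitude crosses the activity threshold.
    """
    scenes = []
    start = 0
    prev_label = None
    for i in range(0, len(magnitudes), window):
        win = magnitudes[i:i + window]
        label = "active" if sum(win) / len(win) > threshold else "still"
        if prev_label is None:
            prev_label = label
        elif label != prev_label:
            scenes.append((start, i, prev_label))
            start, prev_label = i, label
    scenes.append((start, len(magnitudes), prev_label))
    return scenes

# Example: 120 low-movement samples followed by 120 high-movement samples
trace = [0.1] * 120 + [1.2] * 120
print(segment_scenes(trace))  # [(0, 120, 'still'), (120, 240, 'active')]
```

Sound recorded within each scene could then be ranked by the preferences found in results (2) and (3), e.g. favoring unusual locations (from GPS) and speech segments.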

Original language: English
Title of host publication: HCI International 2015 – Posters Extended Abstracts - International Conference, HCI International 2015, Proceedings
Editors: Constantine Stephanidis
Publisher: Springer Verlag
Number of pages: 7
ISBN (Print): 9783319213798
Publication status: Published - Jan 1 2015
Event: 17th International Conference on Human Computer Interaction, HCI 2015 - Los Angeles, United States
Duration: Aug 2 2015 – Aug 7 2015

Publication series

Name: Communications in Computer and Information Science
ISSN (Print): 1865-0929




Keywords

  • Acceleration
  • GPS
  • Life-log
  • Multisensing
  • Sound
  • Syllable Count

ASJC Scopus subject areas

  • Computer Science (all)
  • Mathematics (all)


