NavigaTone: Seamlessly embedding navigation cues in mobile music listening

As humans, we have the natural ability to localize the origin of sounds. Spatial audio rendering leverages this skill by applying special filters to recorded audio, creating the impression that a sound emanates from a certain position in physical space. A main application of spatial audio on mobile devices is to provide non-visual navigation cues. Current systems require users either to listen to artificial beacon sounds or to have the entire audio source (e.g., a song) repositioned in space, which degrades the listening experience. We present NavigaTone, a system that takes advantage of multi-track recordings and provides directional cues by moving a single track in the auditory space. This minimizes the impact of the navigation component on the listening experience: in a user study, participants localized sources as well as with stereo panning, while the listening experience was rated as closer to common music listening.
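The paper compares NavigaTone against plain stereo panning as a baseline. As a rough, hypothetical sketch of that baseline idea (not the authors' implementation), the Python snippet below pans a single mono stem of a multi-track mix toward a navigation bearing using constant-power panning while the remaining stems stay centered. The function names (pan_track, mix_with_cue) and the restriction of bearings to the frontal ±90° range are illustrative assumptions.

    import numpy as np

    def pan_track(mono: np.ndarray, angle_deg: float) -> np.ndarray:
        # Constant-power panning: -90 = hard left, 0 = center, +90 = hard right.
        theta = np.deg2rad((angle_deg + 90.0) / 2.0)  # map [-90, 90] deg -> [0, 90] deg
        return np.stack([np.cos(theta) * mono, np.sin(theta) * mono], axis=-1)

    def mix_with_cue(stems: list, cue: np.ndarray, bearing_deg: float) -> np.ndarray:
        # Pan only the cue stem toward the target bearing; keep all other
        # stems centered so the rest of the song is left untouched.
        mix = pan_track(cue, float(np.clip(bearing_deg, -90.0, 90.0)))
        for stem in stems:
            mix = mix + pan_track(stem, 0.0)
        peak = np.max(np.abs(mix))
        return mix / peak if peak > 1.0 else mix  # simple peak normalization to avoid clipping

Here each stem is assumed to be a mono numpy array of equal length at a common sample rate. NavigaTone itself goes further than this sketch: it renders the cue track with spatial audio (e.g., binaural, HRTF-based filtering) rather than simple amplitude panning.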

Metadata
Author: Florian Heller, Johannes Schöning
DOI: https://doi.org/10.1145/3173574.3174211
ISBN: 978-1-4503-5620-6
Parent Title (English): Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
Publisher: ACM
Place of publication: New York, NY
Document Type: Conference Proceeding
Language: English
Year of Completion: 2018
Tags: audio augmented reality; mobile devices; navigation; spatial audio; virtual audio spaces
Article Number: 637
Note: CHI '18: CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, April 21-26, 2018
Link: https://doi.org/10.1145/3173574.3174211
Access type: campus
Institutes: FH Aachen / Faculty of Electrical Engineering and Information Technology
Collections: Publisher / ACM