Psychophysiological Effect of Immersive Audio Experience Enhanced Using Sound Field Synthesis
Recent advances in spatial audio technologies that enhance emotional and immersive experiences are attracting attention. Many studies have examined the neural mechanisms of acoustic spatial perception; however, they are limited to evaluations using basic sound stimuli. It therefore remains challenging to evaluate the experience of actual music content and to verify effects on higher-order neurophysiological responses, including the sense of immersion and realism. To investigate the effects of spatial audio experience, we examined the psychophysiological responses to an immersive spatial audio experience created with sound field synthesis (SFS) technology. Specifically, we measured alpha-band power as an index of central nervous system activity, and heart rate, heart rate variability, and skin conductance as indices of autonomic nervous system activity, while participants listened to actual music content under stereo and SFS conditions. Statistically significant differences (p < 0.05) between conditions were detected in the changes in alpha power, in the high-frequency power of heart rate variability (HF), and in skin conductance level (SCL). In the SFS condition, the changes in alpha power in the frontal and parietal regions were enhanced, suggesting an enhanced emotional experience. The SFS results also suggested that, when multiple sound sources are present, nearby sound objects are perceptually grouped on the basis of their spatial proximity. These findings demonstrate the potential of SFS technology to enhance emotional and immersive experiences through spatial acoustic expression.
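To make the measures concrete, below is a minimal sketch of how band-limited power (e.g., EEG alpha power, or HF power of a resampled RR-interval series) can be estimated and compared between two listening conditions. The sampling rates, segment lengths, synthetic data, and the paired t-test are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

def band_power(signal, fs, band):
    """Power of `signal` within `band` (Hz), estimated from Welch's PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 4 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum() * (freqs[1] - freqs[0])  # integrate PSD over the band

# Hypothetical per-participant EEG segments (60 s at 256 Hz) for each condition;
# real data would come from the recorded experiment.
fs_eeg = 256
rng = np.random.default_rng(0)
eeg_stereo = [rng.standard_normal(60 * fs_eeg) for _ in range(12)]
eeg_sfs    = [rng.standard_normal(60 * fs_eeg) for _ in range(12)]

# Alpha-band (8-13 Hz) power per participant, per condition.
alpha_stereo = [band_power(x, fs_eeg, (8, 13)) for x in eeg_stereo]
alpha_sfs    = [band_power(x, fs_eeg, (8, 13)) for x in eeg_sfs]

# HF power of heart rate variability would use the same routine on an evenly
# resampled RR-interval series (e.g., 4 Hz) with the 0.15-0.40 Hz band.

# Paired comparison between the stereo and SFS conditions (alpha level 0.05).
t_stat, p_value = ttest_rel(alpha_sfs, alpha_stereo)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```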