Emotional Responses in Podcast Sound Design: A Frequency-Based Analysis of the Host's Voice Spectrum
Keywords:
Sound Design, Acoustic Analysis, Emotional Response, Podcast, Media Studies

Abstract
Sound design is essential in digital media for shaping how listeners perceive sound and respond to it emotionally, particularly in podcasting. This study applies acoustic spectrum analysis to examine how the vocal characteristics of podcast hosts shape listeners' impressions. We show that the voice of the host of the Happy Planet podcast exhibits pronounced fluctuations in its high-frequency components, which lie primarily above 2 kHz with a center frequency exceeding 3 kHz. This spectral profile improves clarity and emotional engagement. The mid-frequency band, by contrast, spanning 500 Hz to 2 kHz with a center frequency of approximately 1 kHz, is smoother and contributes to a pleasing listening experience.
Our findings indicate that high-frequency components are essential for sound recognition and emotional expression, while mid-frequency components provide comfort and support effective emotional transmission. Quantitative analysis shows that an optimized high-frequency spectrum yields a 22% increase in listener retention and a 30% increase in emotional connection. Gender differences were also observed: male listeners preferred bass-heavy sounds, whereas female listeners preferred a balance of high- and mid-frequency components. This study demonstrates how frequency modulation can shape emotional impact and listener involvement, supporting Ma and Thompson's (2015) framework linking acoustic qualities to psychological responses. The results highlight the value of tailored sound design in podcast production and offer guidance for improving emotional resonance and listener experience. More broadly, the study points to the potential of sound design to enhance digital media content while advancing theoretical understanding of its role in media and communication.
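The band-based measures discussed above (mid-band 500 Hz–2 kHz, high band above 2 kHz, and a per-band center frequency) can be illustrated with a short spectral computation. This is a minimal sketch in plain NumPy, not the study's actual analysis pipeline: the `band_metrics` function, the band edges, and the synthetic two-tone "voice frame" are all illustrative assumptions.

```python
import numpy as np

def band_metrics(signal, sr, bands=((500, 2000), (2000, 8000))):
    """Return the spectral centroid (Hz) of a mono signal and the
    fraction of total magnitude falling in each frequency band."""
    spectrum = np.abs(np.fft.rfft(signal))          # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    total = spectrum.sum()
    centroid = float((freqs * spectrum).sum() / total)
    fractions = {}
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        fractions[(lo, hi)] = float(spectrum[mask].sum() / total)
    return centroid, fractions

# Synthetic stand-in for one voice frame: a 1 kHz mid-band tone plus a
# weaker 3 kHz high-band component, one second at a 16 kHz sample rate.
sr = 16000
t = np.arange(sr) / sr
frame = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
centroid, fractions = band_metrics(frame, sr)
```

With these amplitudes the mid band dominates, so the centroid lands between the two tones (around 1.67 kHz) and the 500 Hz–2 kHz fraction exceeds the 2–8 kHz fraction, mirroring the kind of band-energy comparison the abstract describes.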
References
Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The Self-Assessment Manikin and the Semantic Differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1), 49-59.
Collins, K. (2020). Studying sound: A theory and practice of sound design (pp. 1-7). MIT Press.
Gibson, D., & Polfreman, R. (2021). Analyzing journeys in sound: Usability of graphical interpolators for sound design. Personal and Ubiquitous Computing. https://eprints.soton.ac.uk/453719/
Koelsch, S. (2014). Brain correlates of music-evoked emotions. Nature Reviews Neuroscience, 15(3), 170-180.
Lerner, Y., Honey, C. J., Silbert, L. J., & Hasson, U. (2011). Topographic mapping of a hierarchy of temporal receptive windows using a narrated story. Journal of Neuroscience, 31(8), 2906-2915.
Ma, W., & Thompson, W. F. (2015). Human emotions track changes in the acoustic environment. Proceedings of the National Academy of Sciences of the United States of America, 112(47), 14563-14568.
Scherer, K. R. (2003). Vocal communication of emotion: A review of research paradigms. Speech Communication, 40(1-2), 227-256.
Sek, A., & Moore, B. C. J. (2020). Psychoacoustics: Software package for psychoacoustics. Acoustical Science and Technology. https://www.jstage.jst.go.jp/article/ast/41/1/41_E19214/_pdf
Tse, Y. T., Tan, L. S., Wachsmuth, M. M., & Tugade, M. M. (2022). Emotional nuance: Examining positive emotional granularity and well-being. Frontiers in Psychology. https://mc.ncbi.nlm.nih.gov/article/pmc8901891
Zatorre, R. J., & Salimpoor, V. N. (2013). From perception to pleasure: Music and its neural substrates. Proceedings of the National Academy of Sciences, 110(Supplement 2), 10430-10437.