High-Level Analysis of Audio Features for Identifying Emotional Valence in Human Singing

Cunningham, Stuart, Weinel, Jonathan and Picking, Rich (2018) High-Level Analysis of Audio Features for Identifying Emotional Valence in Human Singing. In: Audio Mostly 2018 (AM'18), 12-14 Sept 2018, Wrexham Glyndŵr University, UK.

Full text: GURO_391_SteveN_AM_camRdy_FINAL.pdf (Accepted Version, 286 kB)

Abstract

Emotional analysis continues to receive much attention in the audio and music community. The potential to link human affective state with the emotional content or intention of musical audio has applications in fields such as improving the user experience of digital music libraries and music therapy. Comparatively little work has addressed the emotional analysis of a cappella human singing. Recently, the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) was released, which includes emotionally validated human singing samples. In this work, we apply established audio analysis features to determine whether they can detect underlying emotional valence in human singing. Results indicate that the short-term audio features of energy, spectral centroid (mean), spectral centroid (spread), spectral entropy, spectral flux, spectral rolloff, and fundamental frequency can be useful predictors of emotion, although their efficacy is not consistent across positive and negative emotions.
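For illustration, the following is a minimal sketch of how the short-term features named in the abstract might be extracted from a RAVDESS-style sung clip. It uses librosa and numpy as an assumed toolchain; the paper does not specify its implementation here, and the file path, frame sizes, and pitch range below are hypothetical choices, not the authors' parameters.

```python
# Minimal sketch (not the authors' code): per-frame extraction of the
# short-term features listed in the abstract, reduced to clip-level
# statistics. Assumes librosa and numpy; the path is a placeholder.
import numpy as np
import librosa

y, sr = librosa.load("ravdess_song_sample.wav", sr=None)  # hypothetical file

frame, hop = 2048, 512
S = np.abs(librosa.stft(y, n_fft=frame, hop_length=hop))  # magnitude spectrogram

energy = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)[0]
centroid = librosa.feature.spectral_centroid(S=S, sr=sr)[0]  # per-frame centroid (Hz)
rolloff = librosa.feature.spectral_rolloff(S=S, sr=sr)[0]

# Spectral entropy: Shannon entropy of the normalised power spectrum per frame.
P = (S ** 2) / np.maximum((S ** 2).sum(axis=0, keepdims=True), 1e-12)
entropy = -(P * np.log2(P + 1e-12)).sum(axis=0)

# Spectral flux: L2 norm of the frame-to-frame change in the magnitude spectrum.
flux = np.sqrt((np.diff(S, axis=1) ** 2).sum(axis=0))

# Fundamental frequency via the YIN estimator (one possible choice; the
# abstract does not name the F0 method). Pitch range suits singing voice.
f0 = librosa.yin(y, fmin=80, fmax=800, sr=sr, frame_length=frame, hop_length=hop)

# "Centroid (mean)" and "centroid (spread)" are read here as clip-level
# statistics of the per-frame centroid track.
features = {
    "energy_mean": energy.mean(),
    "centroid_mean": centroid.mean(),
    "centroid_spread": centroid.std(),
    "entropy_mean": entropy.mean(),
    "flux_mean": flux.mean(),
    "rolloff_mean": rolloff.mean(),
    "f0_mean": np.nanmean(f0),
}
print(features)
```

Clip-level statistics of this kind would then serve as inputs to a classifier of emotional valence, in line with the predictive use of these features described in the abstract.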

Item Type: Conference or Workshop Item (Paper)
Keywords: Applied computing, Arts and humanities, Sound and music computing, Interactive systems and tools, Interaction techniques, Human computer interaction (HCI), Human-centered computing
Depositing User: Hayley Dennis
Date Deposited: 13 May 2019 14:02
Last Modified: 13 May 2019 14:23
URI: https://glyndwr.repository.guildhe.ac.uk/id/eprint/17412
