Investigating automatic speech emotion recognition for children with autism spectrum disorder in interactive intervention sessions with the social robot Kaspar

(2022)

Abstract
In this contribution, we present analyses of vocalisation data recorded in the first observation round of the European Commission's Erasmus Plus project "EMBOA, Affective loop in Socially Assistive Robotics as an intervention tool for children with autism". In total, the project partners recorded data in 112 robot-supported intervention sessions for children with autism spectrum disorder. Audio data were recorded using the internal and lapel microphones of the H4n Pro recorder. To analyse the data, we first apply a child voice activity detection (VAD) system to extract child vocalisations from the raw audio. For each child, session, and microphone, we report the total time for which child vocalisations were detected. Next, we compare the results of two implementations of valence- and arousal-based speech emotion recognition, processing (1) the child vocalisations detected by the VAD and (2) the total recorded audio material. We report average valence and arousal values for each session and condition. Finally, we discuss challenges and limitations of child voice detection and audio-based emotion recognition in robot-supported intervention settings.
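The pipeline outlined in the abstract (child VAD, then valence/arousal estimation on the detected segments and on the full recording, aggregated per session) could be organised roughly as in the Python sketch below. This is a minimal illustration only: detect_child_speech, predict_valence_arousal, and analyse_session are hypothetical placeholders, not the models or code actually used in the EMBOA project.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Segment:
    start: float  # segment start, in seconds
    end: float    # segment end, in seconds


def detect_child_speech(audio_path: str) -> List[Segment]:
    """Hypothetical child voice activity detector: returns the time spans
    in which child vocalisations were found (placeholder, not the
    project's actual VAD system)."""
    raise NotImplementedError


def predict_valence_arousal(audio_path: str,
                            segment: Optional[Segment] = None) -> Tuple[float, float]:
    """Hypothetical dimensional speech emotion model: returns (valence,
    arousal) for the given segment, or for the whole file if segment is
    None (placeholder, not the project's actual recogniser)."""
    raise NotImplementedError


def mean(values: List[float]) -> float:
    return sum(values) / len(values) if values else float("nan")


def analyse_session(audio_path: str) -> dict:
    """Per-session summary: total detected child-speech time plus average
    valence/arousal for the two conditions described in the abstract."""
    segments = detect_child_speech(audio_path)
    total_child_speech = sum(s.end - s.start for s in segments)

    # Condition (1): emotion recognition on VAD-detected child vocalisations only.
    vad_scores = [predict_valence_arousal(audio_path, s) for s in segments]
    # Condition (2): emotion recognition on the total recorded audio material.
    full_scores = [predict_valence_arousal(audio_path)]

    return {
        "child_speech_seconds": total_child_speech,
        "vad_valence": mean([v for v, _ in vad_scores]),
        "vad_arousal": mean([a for _, a in vad_scores]),
        "full_valence": mean([v for v, _ in full_scores]),
        "full_arousal": mean([a for _, a in full_scores]),
    }
```

Under this sketch, each session recording would yield one row of totals and averages per microphone, which is the granularity at which the abstract reports its results.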
Keywords
automatic speech emotion recognition, autism spectrum disorder, interactive intervention sessions, robot