A deep learning model for depression detection based on MFCC and CNN generated spectrogram features

Biomedical Signal Processing and Control (2024)

Abstract
Depression is one of the most prevalent mental health issues encountered worldwide by individuals of all age groups. Like other mental health concerns, depression poses diagnostic challenges for medical practitioners and clinical experts, given social reservations and a lack of awareness and acceptance in society. Researchers have long sought methods to identify symptoms of depression from individuals' speech and responses using automated systems and computers. In this paper, we propose an audio-based depression detection method that relies on neural networks for audio-spectrogram-based feature extraction as well as classification between the speech/response patterns of depressed and non-depressed persons. We adopt a multi-modal approach, combining Mel-Frequency Cepstral Coefficient (MFCC) features and spectrogram features extracted from an audio file through a novel CNN network. Our CNN model incorporates optimized residual blocks and the "glorot uniform" kernel initializer. The proposed method's performance is assessed in both multi-modal and multi-feature trials. We report results on the standard benchmark datasets DAIC-WOZ and MODMA, which provide repositories of questionnaires and patient responses relevant to the identification of depressive symptoms. We have also tested our model on the standard emotion recognition audio dataset RAVDESS. The proposed model achieves detection accuracy of over 90% on DAIC-WOZ and MODMA, and over 85% on RAVDESS, surpassing the present state of the art.
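To make the described pipeline concrete, the following is a minimal sketch of what a two-branch MFCC + spectrogram residual CNN with Glorot-uniform initialization might look like in Keras. The layer counts, filter sizes, feature dimensions, and fusion strategy are illustrative assumptions and not the authors' exact architecture.

```python
# Hypothetical sketch: MFCC + log-mel-spectrogram features feeding a small
# two-branch residual CNN with Glorot-uniform kernel initialization.
# All hyperparameters below are assumptions, not the paper's reported settings.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, Model

def extract_features(path, sr=16000, n_mfcc=40, n_mels=128, frames=128):
    """Load one audio file and return fixed-size MFCC and log-mel-spectrogram maps."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels))
    # Pad or truncate along the time axis so every clip yields the same shape.
    def fix(x):
        return np.pad(x, ((0, 0), (0, max(0, frames - x.shape[1]))))[:, :frames]
    return fix(mfcc)[..., None], fix(mel)[..., None]

def residual_block(x, filters):
    """A basic 2-D residual block; every conv uses the Glorot-uniform initializer."""
    init = "glorot_uniform"
    shortcut = layers.Conv2D(filters, 1, padding="same", kernel_initializer=init)(x)
    h = layers.Conv2D(filters, 3, padding="same", activation="relu",
                      kernel_initializer=init)(x)
    h = layers.Conv2D(filters, 3, padding="same", kernel_initializer=init)(h)
    return layers.ReLU()(layers.Add()([shortcut, h]))

def build_model(mfcc_shape=(40, 128, 1), mel_shape=(128, 128, 1)):
    """Two-branch CNN: one branch per feature type, fused before classification."""
    mfcc_in = layers.Input(mfcc_shape)
    mel_in = layers.Input(mel_shape)
    branches = []
    for inp in (mfcc_in, mel_in):
        h = residual_block(inp, 32)
        h = layers.MaxPooling2D()(h)
        h = residual_block(h, 64)
        branches.append(layers.GlobalAveragePooling2D()(h))
    h = layers.Concatenate()(branches)
    h = layers.Dense(64, activation="relu", kernel_initializer="glorot_uniform")(h)
    out = layers.Dense(1, activation="sigmoid")(h)  # depressed vs. non-depressed
    model = Model([mfcc_in, mel_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

In such a setup, each audio response would be converted once into the two feature maps and the model trained on the paired inputs with a binary depression label; the actual feature dimensions and fusion point in the published model may differ.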
Keywords
Audio signals,Convolutional neural network,Depression detection,Deep learning,Spectrogram,Glorot uniform