A Comprehensive Multimodal Humanoid System for Personality Assessment Based on the Big Five Model

IEEE Access (2024)

Abstract
Personality analysis allows experts to gain insight into an individual's conduct, vulnerabilities, and prospective capabilities. Common methods for personality prediction include text analysis, social media data, facial expressions, and emotional speech extraction. Recently, some studies have used the Big Five model to predict personality traits from non-verbal cues (gaze score, body motion, head motion). However, these studies mostly target only three aspects of the Big Five model; none so far has used non-verbal cues to target all five traits (extraversion, openness, neuroticism, agreeableness, and conscientiousness). In this paper, we propose a multimodal system that predicts all five traits of the Big Five model using non-verbal cues (facial expressions, head poses, body poses), the 44-item Big Five Inventory (BFI) questionnaire, and expert analysis. The facial expression module uses a Convolutional Neural Network (CNN) trained on the Face Emotion Recognition Plus (FER+) dataset, achieving 95.14% accuracy. Evaluating 16 subjects in verbal interaction with the humanoid robot NAO, we combined questionnaire feedback, human-robot interaction data, and expert perspectives to deduce their Big Five traits. Findings reveal 100% accuracy in personality prediction via expert insights and the proposed system, and 75% for the questionnaire-based approach.
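The abstract states that the facial expression module is a CNN trained on FER+; the actual architecture and hyperparameters are not given here. Below is a minimal illustrative sketch (PyTorch) of a CNN classifier for FER+-style inputs, assuming 48x48 grayscale face crops and the eight FER+ emotion classes — the layer sizes are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Hypothetical stand-in for the paper's facial expression CNN.

    Assumes 48x48 grayscale inputs and 8 output classes (FER+);
    the published model's real architecture is not described in
    the abstract.
    """

    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 48x48 -> 48x48
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                  # emotion logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = EmotionCNN()
logits = model(torch.randn(4, 1, 48, 48))  # batch of 4 face crops
print(tuple(logits.shape))
```

In a full pipeline, the per-frame emotion logits from such a module would be one non-verbal cue stream, fused with head-pose and body-pose features before trait prediction.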
Keywords
Big Five Model, Human-Robot Interaction, Non-Verbal Cues, Personality Prediction, Personality Traits