MetaHumans help to evaluate deepfake generators

Sahar Husseini, Jean-Luc Dugelay

2023 IEEE 25th International Workshop on Multimedia Signal Processing (MMSP), 2023

Abstract
The progress achieved in deepfake technology has been remarkable; however, evaluating the resulting videos and comparing different generators remains challenging. A primary concern is the lack of ground-truth data, except in self-reenactment scenarios. Additionally, available datasets may have inherent limitations, such as missing expected animations or inadequate subject diversity. Furthermore, using real individuals' faces in such applications raises ethical and privacy concerns. This paper goes beyond the state of the art in evaluating deepfake generators by introducing an innovative dataset featuring MetaHumans. Our dataset ensures the availability of ground-truth data and encompasses diverse facial expressions, variations in pose and illumination conditions, and combinations of these factors. Additionally, we meticulously control and verify the expected animations within the dataset. The proposed dataset enables accurate evaluation of cross-reenactment generated images. Using various established metrics, we demonstrate a high degree of correlation between the generator scores obtained from deepfake videos of MetaHumans and those obtained from deepfake videos of real persons. The synthesized MetaHuman dataset can be accessed at: https://github.com/SaharHusseini/MMSP_2023
Keywords
Deepfake, Face Reenactment, Face Animation, Evaluation, MetaHumans