
A Novel Feature via Color Quantisation for Fake Audio Detection

Zhiyong Wang, Xiaopeng Wang, Yuankun Xie, Ruibo Fu, Zhengqi Wen, Jianhua Tao, Yukun Liu, Guanjun Li, Xin Qi, Yi Lu, Xuefei Liu, Yongwei Li

arXiv (2024)

Abstract
In the field of deepfake detection, previous studies focus on training pre-trained models with reconstruction or mask-and-predict objectives and then transferring them to fake audio detection, where the encoder is used to extract features, as in wav2vec 2.0 and the Masked Autoencoder. These methods have shown that reconstruction pre-training on real audio helps the model distinguish fake audio. Their disadvantage, however, is poor interpretability: it is hard to present intuitively how deepfake audio differs from real audio. This paper proposes a novel feature extraction method based on color quantisation, which constrains the reconstruction to use a limited number of colors for the spectral image-like input. Because the reconstructed input is guaranteed to differ from the original, the regions the model focuses on during spectral reconstruction can be observed intuitively. Experiments on the ASVspoof2019 dataset demonstrate that the proposed method achieves better classification performance than using the original spectrogram as input, and that pre-training the recoloring network also benefits fake audio detection.
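The abstract does not spell out implementation details, but the core idea can be illustrated with a minimal sketch: render a log-mel spectrogram as a color image and constrain it to a small palette via color quantisation. The function name colour_quantise_spectrogram, the use of a matplotlib colormap, k-means as the quantiser, and the palette size of 8 are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: producing a color-quantised spectrogram image (assumptions noted above).
import numpy as np
import librosa
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans


def colour_quantise_spectrogram(y, sr=16000, n_mels=128, n_colors=8, cmap_name="magma"):
    """Render a mel-spectrogram as an RGB image and quantise it to n_colors colors."""
    # Log-mel spectrogram, normalised to [0, 1] so it can be mapped through a colormap.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    norm = (log_mel - log_mel.min()) / (log_mel.max() - log_mel.min() + 1e-8)

    # Map the spectrogram to an RGB "image-like" input (colormap choice is an assumption).
    rgb = plt.get_cmap(cmap_name)(norm)[..., :3]  # shape: (n_mels, frames, 3)

    # Constrain the image to a limited color palette via k-means color quantisation.
    pixels = rgb.reshape(-1, 3)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    quantised = km.cluster_centers_[km.labels_].reshape(rgb.shape)
    return rgb, quantised


if __name__ == "__main__":
    # Synthetic 1-second tone as a stand-in for a real utterance.
    sr = 16000
    t = np.linspace(0, 1, sr, endpoint=False)
    y = 0.5 * np.sin(2 * np.pi * 440 * t)
    original, recolored = colour_quantise_spectrogram(y, sr=sr, n_colors=8)
    print(original.shape, recolored.shape)
```

In the paper's framing, the limited-palette image is what forces the reconstruction to differ from the original input; a downstream detector would consume the quantised image (or features derived from it) rather than the raw spectrogram. The sketch above only shows how such an input could be produced.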