FaceSigns: Semi-Fragile Watermarks for Media Authentication (Just Accepted)

ACM Transactions on Multimedia Computing, Communications, and Applications (2023)

Abstract
Manipulated media is becoming a prominent threat due to recent advances in realistic image and video synthesis techniques. There have been several attempts at detecting synthetically tampered media using machine learning classifiers. However, such classifiers do not generalize well to black-box image synthesis techniques and have been shown to be vulnerable to adversarial examples. To address these challenges, we introduce FaceSigns, a deep-learning-based semi-fragile watermarking technique that allows media authentication by verifying an invisible secret message embedded in the image pixels. Instead of identifying manipulated media through visual artifacts, we propose to proactively embed a semi-fragile watermark into a real image or video so that its authenticity can be proven when needed. FaceSigns is designed to be fragile to malicious manipulations or tampering while being robust to benign operations such as image/video compression, scaling, saturation, and contrast adjustments. This allows images and videos shared over the internet to retain the verifiable watermark as long as no malicious modification technique is applied. We demonstrate that our framework can embed a 128-bit secret as an imperceptible image watermark that can be recovered with high bit recovery accuracy at several compression levels, while being non-recoverable when unseen malicious manipulations are applied. For a set of unseen benign and malicious manipulations studied in our work, our framework reliably detects manipulated content with an AUC score of 0.996, which is significantly higher than prior image watermarking and steganography techniques.
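To make the verification idea concrete, below is a minimal sketch of how a bit-recovery-based authenticity check could work. The encoder/decoder networks from the paper are not reproduced here; `extract_watermark`, the toy decoder, and the 0.9 decision threshold are hypothetical stand-ins used purely for illustration.

```python
import numpy as np

# Hypothetical interface: `extract_watermark(image)` stands in for the paper's
# decoder network, which recovers the embedded secret bits from an image.

def bit_recovery_accuracy(original_bits: np.ndarray, recovered_bits: np.ndarray) -> float:
    """Fraction of watermark bits recovered correctly (1.0 = perfect recovery)."""
    return float(np.mean(original_bits == recovered_bits))

def verify_image(image: np.ndarray,
                 secret_bits: np.ndarray,
                 extract_watermark,
                 threshold: float = 0.9) -> bool:
    """Declare an image authentic if the embedded secret survives.
    A low recovery rate suggests malicious manipulation (semi-fragile behaviour).
    The 0.9 threshold is illustrative, not taken from the paper."""
    recovered = extract_watermark(image)  # decoder network (assumed interface)
    return bit_recovery_accuracy(secret_bits, recovered) >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    secret = rng.integers(0, 2, size=128)  # 128-bit secret, as in the paper

    # Toy stand-in decoder: recovers the secret with a few bit flips,
    # mimicking a benign transformation such as compression.
    def fake_decoder(_image):
        noisy = secret.copy()
        noisy[:5] ^= 1  # 5 flipped bits out of 128
        return noisy

    dummy_image = np.zeros((256, 256, 3), dtype=np.uint8)
    print(verify_image(dummy_image, secret, fake_decoder))  # True: benign image passes
```

Under this decision rule, a benign operation that preserves most of the 128 embedded bits passes verification, whereas a malicious edit that destroys the watermark drops the recovery rate below the threshold and is flagged as tampered.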
Keywords
media forensics, Deepfakes, watermarking, semi-fragile watermarking, video watermarking