Research
Research projects, papers, books, etc.
- Paper
How Close are Other Computer Vision Tasks to Deepfake Detection?
- #ImageProcessing
- #DeepfakeDetection
IEEE International Joint Conference on Biometrics (IJCB 2023)
In this paper, we challenge the conventional belief that supervised ImageNet-trained backbones have strong generalizability and are suitable for use as feature extractors in deepfake detection models. We present a new measurement, “backbone separability,” for visually and quantitatively assessing a backbone’s raw capacity to separate data in an unsupervised manner. We also present a systematic benchmark for determining the correlation between deepfake detection and other computer vision tasks using backbones from pre-trained models. Our analysis shows that before fine-tuning, face recognition backbones are more closely related to deepfake detection than other backbones. Additionally, backbones trained using self-supervised methods are more effective in separating deepfakes than those trained using supervised methods. After fine-tuning all backbones on a small deepfake dataset, we found that self-supervised backbones deliver the best results, but there is a risk of overfitting. Our results provide valuable insights that should help researchers and practitioners develop more effective deepfake detection models.
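The abstract does not spell out how "backbone separability" is computed, so the following is only an illustrative sketch, not the paper's method. It assumes PyTorch/torchvision and scikit-learn, uses a frozen ImageNet-pretrained ResNet-50 as a stand-in backbone, and uses the silhouette score of the embeddings against a real/fake partition as a hypothetical separability proxy; `real_images` and `fake_images` are placeholder lists of PIL images.

```python
# Hypothetical sketch: probe how well a frozen backbone's features separate
# real from fake faces *before* any fine-tuning. The silhouette score here is
# only an assumed stand-in for the paper's "backbone separability" measure.
import torch
import torchvision.models as models
from torchvision import transforms
from sklearn.metrics import silhouette_score

# Frozen, ImageNet-pretrained backbone used purely as a feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()   # drop the classification head
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(images):
    """Embed a batch of PIL images with the frozen backbone."""
    batch = torch.stack([preprocess(img) for img in images])
    return backbone(batch)          # (N, 2048) feature vectors

def separability(real_images, fake_images):
    """Proxy separability: silhouette score of the frozen embeddings with
    respect to the real/fake partition (higher = more cleanly separated)."""
    feats = torch.cat([extract_features(real_images),
                       extract_features(fake_images)]).numpy()
    labels = [0] * len(real_images) + [1] * len(fake_images)
    return silhouette_score(feats, labels)
```

Under this reading, comparing such a score across face-recognition, self-supervised, and supervised ImageNet backbones would mirror the kind of before-fine-tuning comparison the abstract describes, with the labels used only to score the embedding layout, not to train the backbone.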