Participation Instructions - Media Analytics
General Rules
- Participants are required to train their models exclusively on the provided training dataset, and evaluation must be conducted on the provided test datasets. The use of pre-trained models is permitted, as long as the pre-training originates from a domain other than deepfake detection (for example, a ResNet pre-trained on ImageNet, CLIP, etc.).
- If required, participants must provide all the code necessary for training, with pre-set random seeds to ensure exact reproducibility. All submitted models and results are subject to re-evaluation.
- Previously submitted models can be re-evaluated in subsequent rounds by submitting their predicted labels and scores for evaluation.
Track 1 - Classification of totally generated images
- The task involves the recognition of deepfakes in which the entire image is generated at once.
- The binary classification task involves distinguishing between fake images and real images using machine learning and deep learning-based approaches.
- Submissions file format.
- The submission must be uploaded as a JSON file, where the keys are the file names of the test set and the values are the predicted class labels (either 0 or 1). The class label 0 identifies the image as real, and the class label 1 identifies the image as fake. The JSON should be in the following format: Submission JSON template
- Metrics:
- F1 Score: this metric is computed as the harmonic mean of Precision and Recall; it is particularly well suited for summarizing binary classification performance on unbalanced datasets in a single number.
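As a sketch of how a submission file and the F1 metric could be produced (the file names and predictions below are purely illustrative, and the F1 implementation shown is a generic one, not the official evaluation code):

```python
import json

# Hypothetical predictions: test-set file name -> predicted label
# (0 = real, 1 = fake). File names are illustrative only.
predictions = {"img_001.png": 1, "img_002.png": 0, "img_003.png": 1}

# Sanity-check the labels, then write the submission JSON.
assert all(label in (0, 1) for label in predictions.values())
with open("submission.json", "w") as f:
    json.dump(predictions, f, indent=2)

def f1_score(y_true, y_pred):
    """F1 for the 'fake' class (label 1): harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

The same result can be obtained with `sklearn.metrics.f1_score`; the hand-rolled version is shown only to make the definition explicit.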
(NEW) Track 2 - Classification of totally generated images on multiple generators
- The task involves the recognition of deepfakes in which the entire image is generated at once.
- The binary classification task involves distinguishing between fake images and real images using machine learning and deep learning-based approaches.
- Release of a new training set involving 4 different generators for a total of 9.2M generated images.
- Submissions file format.
- The submission must be uploaded as a JSON file, where the keys are the file names of the test set and the values are the predicted class labels (either 0 or 1). The class label 0 identifies the image as real, and the class label 1 identifies the image as fake. The JSON should be in the following format: Submission JSON template
- Metrics:
- Accuracy on diffusion models: this metric is computed as the arithmetic mean of the accuracy on real images and on diffusion-generated images.
- Accuracy on GAN models: this metric is computed as the arithmetic mean of the accuracy on real images and on GAN-generated images from [1].
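The per-family metrics above (an arithmetic mean of the accuracy on real images and on generated images) can be sketched as follows; the label vectors are illustrative, not challenge data:

```python
def mean_class_accuracy(y_true, y_pred):
    """Arithmetic mean of the per-class accuracy on real (0) and fake (1) images,
    i.e. balanced accuracy over the two classes."""
    accs = []
    for cls in (0, 1):
        idx = [i for i, t in enumerate(y_true) if t == cls]
        correct = sum(1 for i in idx if y_pred[i] == y_true[i])
        accs.append(correct / len(idx))
    return sum(accs) / len(accs)

# Illustrative evaluation: pool real images with one generator family at a
# time (e.g. diffusion fakes here, GAN fakes from [1] for the other metric).
real_true, real_pred = [0, 0, 0, 0], [0, 0, 1, 0]  # 75% accuracy on real images
diff_true, diff_pred = [1, 1, 1, 1], [1, 1, 1, 0]  # 75% accuracy on diffusion fakes
acc_diffusion = mean_class_accuracy(real_true + diff_true, real_pred + diff_pred)
```

Averaging the two per-class accuracies (rather than pooling all predictions) keeps the metric insensitive to the real/fake ratio in the test set.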
References
[1] Wang et al.: "CNN-generated images are surprisingly easy to spot... for now", CVPR 2020.
Challenge News
- 03/31/2024: Track 2: Evaluation opened on ELSA Benchmark
- 01/02/2024: Release of Diffusion-generated Deepfake Detection dataset (D3)
- 10/02/2023: Workshop and Challenge on DeepFake Analysis and Detection (ICCV)
- 09/18/2023: Submissions selection for ICCV workshop (DFAD2023)
- 07/18/2023: Track 1: release of the first version of the dataset.
Important Dates
March 31, 2024: Registration for the competition opens on Track 2.
September 18, 2023: Track 1 submission deadline for the ICCV Workshop
July 17, 2023: Registration for the competition opens on Track 1.
July 15, 2023: Release of the dataset.