Participation Instructions - Media Analytics

General Rules

  1. Participants are required to train their models exclusively on the provided training dataset. The evaluation must be conducted on the provided test datasets. The use of pre-trained models is permitted, as long as the pre-training comes from a domain other than deepfake detection (for example, a ResNet pre-trained on ImageNet, CLIP, etc.).
  2. If required, participants must provide all the code necessary for training, with pre-set random seeds to ensure exact reproducibility. All submitted models and results are subject to re-evaluation.
  3. Previously submitted models can be re-evaluated in subsequent rounds by submitting their predicted labels and scores for evaluation.

Track 1 - Classification of totally generated images 

  1. The task involves the recognition of deepfakes in which the entire image is generated at once.
  2. The binary classification task involves distinguishing between fake images and real images using machine learning and deep learning-based approaches.
  3. Submission file format:
    • The submission must be uploaded as a JSON file whose keys are the file names of the test set and whose values are the predicted class labels (either 0 or 1). The class label 0 identifies the image as real and the class label 1 identifies the image as fake. The JSON should be in the following format: Submission JSON template
  4. Metrics:
    • F1 Score: this metric is computed as the harmonic mean of Precision and Recall; it is particularly well suited for summarizing binary classification performance on unbalanced datasets in a single value.
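A submission can be produced with a few lines of Python. The file names below are hypothetical placeholders; in practice the keys must be the actual file names of the test set, and the values the model's predicted labels (0 = real, 1 = fake):

```python
import json

# Hypothetical predictions: test-set file names mapped to predicted
# class labels (0 = real, 1 = fake). Replace with your model's output.
predictions = {
    "image_0001.png": 0,
    "image_0002.png": 1,
    "image_0003.png": 1,
}

# Write the submission JSON in the required key -> label format.
with open("submission.json", "w") as f:
    json.dump(predictions, f)
```

The resulting file is a flat JSON object, one entry per test image, matching the Submission JSON template above.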

(NEW) Track 2 - Classification of totally generated images on multiple generators

  1. The task involves the recognition of deepfakes in which the entire image is generated at once.
  2. The binary classification task involves distinguishing between fake images and real images using machine learning and deep learning-based approaches.
  3. Release of a new training set involving 4 different generators, for a total of 9.2M generated images.
  4. Submission file format:
    • The submission must be uploaded as a JSON file whose keys are the file names of the test set and whose values are the predicted class labels (either 0 or 1). The class label 0 identifies the image as real and the class label 1 identifies the image as fake. The JSON should be in the following format: Submission JSON template
  5. Metrics:
    • Accuracy on diffusion models: this metric is computed as the arithmetic mean of the accuracy on real images and the accuracy on diffusion-generated images.
    • Accuracy on GAN models: this metric is computed as the arithmetic mean of the accuracy on real images and the accuracy on GAN-generated images from [2].
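Both Track 2 metrics are a balanced accuracy: the accuracy on real images and the accuracy on generated images are computed separately and then averaged. A minimal sketch, with hypothetical labels and predictions (0 = real, 1 = fake):

```python
def balanced_accuracy(y_true, y_pred):
    """Arithmetic mean of the accuracy on real images (label 0)
    and the accuracy on generated images (label 1)."""
    real = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    fake = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    acc_real = sum(t == p for t, p in real) / len(real)
    acc_fake = sum(t == p for t, p in fake) / len(fake)
    return (acc_real + acc_fake) / 2

# Hypothetical example: 3 real images (2 correct), 2 generated (1 correct)
y_true = [0, 0, 0, 1, 1]
y_pred = [0, 0, 1, 1, 0]
print(balanced_accuracy(y_true, y_pred))  # (2/3 + 1/2) / 2 = 0.58333...
```

Averaging the two per-class accuracies prevents a detector that always predicts one class from scoring well when real and generated images are unequally represented.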

Track 3 - XAI evaluation of deepfake detectors (starting soon) 

  1. The task will involve the evaluation of deepfake detector saliency maps.
  2. Evaluation will be conducted through both human supervision and automatic metrics like ADCC[1].

Track 4 - Classification of partially altered or generated images (starting soon) 

  1. The task involves the recognition of deepfakes in which images, either real or generated, have been partially edited.
  2. The binary classification task involves distinguishing between edited images and real images using machine learning and deep learning-based approaches.

 

References

[1] Poppi, et al: “Revisiting the Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis”, 2021 CVPRW;

[2] Wang, et al: "CNN-generated images are surprisingly easy to spot... for now", 2020 CVPR;

Important Dates

March 31, 2024: Registration for Track 2 of the competition is now open.

September 18, 2023: Track 1 submission deadline for the ICCV Workshop 

July 17, 2023: Registration for Track 1 of the competition is now open.

July 15, 2023: Release of the dataset.