About AI FORCE

The AI Forensics Open Research Challenge Evaluations (AI FORCE) are a series of publicly available challenges related to generative AI capabilities. Aligned with DARPA's goal of developing techniques to mitigate the threats posed by state-of-the-art AI systems, the AI FORCE challenges will help DARPA and the broader community collaborate on research toward safe, secure, and trustworthy AI systems.

We will publish a series of tasks in 4-week increments. Tasks will involve the detection and attribution of generative AI media, including images, audio, and video.

This effort is being funded by DARPA’s Semantic Forensics (SemaFor) program. Additional information about SemaFor is available on DARPA's website.

Challenge 3

Upcoming

Avatar Attribution: Synthetic Talking-Head Videos

Overview

While unauthorized and malicious synthetically-generated deepfake content of persons is becoming increasingly common, the development and use of tools to create legitimate, authorized synthetically-generated content is also on the rise. These tools and the content they produce are used for everything from touching up a person's appearance in applications such as video conferencing and virtual meetings, to creating synthetically-generated content of actors for products such as online training materials and other publications. This task differs from general deepfake video detection, which typically asks only whether a video is real or a synthetically-generated deepfake; here, the premise is that some synthetically-generated videos are created by, or with the authorization of, the individual shown in the video.

Challenge Description

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately classify self-reenactment vs. cross-reenactment, i.e., attribute whether the video is being driven by the person shown in the video (the target identity) or by someone else (a puppeteer).
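One way to picture this task (a minimal sketch, not the challenge's prescribed method): extract an identity embedding from the rendered video with a face-recognition model, compare it against an enrollment embedding of the claimed target identity, and threshold the similarity. The embedding extractor, the toy vectors, and the threshold value below are all hypothetical stand-ins.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_reenactment(video_embedding, reference_embedding, threshold=0.8):
    """Label a talking-head video as self- or cross-reenactment.

    video_embedding: identity embedding extracted from the rendered video
    reference_embedding: enrollment embedding of the claimed target identity
    threshold: hypothetical decision boundary; a real system would calibrate it
    """
    score = cosine_similarity(video_embedding, reference_embedding)
    label = "self-reenactment" if score >= threshold else "cross-reenactment"
    return label, score

# Toy vectors standing in for face-recognition embeddings.
target = np.array([0.9, 0.1, 0.3])
same_person = np.array([0.88, 0.12, 0.28])   # nearly identical direction
other_person = np.array([0.1, 0.9, 0.2])     # different direction

label_self, _ = classify_reenactment(same_person, target)
label_cross, _ = classify_reenactment(other_person, target)
```

In practice the hard part is that a cross-reenacted video still renders the target's face, so the identity signal must come from subtler behavioral cues (motion dynamics, mannerisms) rather than appearance alone; this sketch only illustrates the decision framing.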

Research Questions

  • Is it possible to develop and train analytics and capabilities to determine if a synthetically-generated video of a person was generated in an authorized manner?
  • Is it possible to attribute a synthetic talking-head video or “Avatar”?
  • Is it possible to attribute a given Avatar to an identity?

Schedule

  • Registration Opens: June 2024
  • Start Date: July 2024

Note: All deadlines are at 11:59 PM EST on the corresponding day unless otherwise noted. The challenge organizers reserve the right to update the contest timeline if they deem it necessary.

All Challenges

Challenge 1

5/6/24 - 5/31/24

AI-Generated Image Detection

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately detect synthetic AI-generated images. Participants will be challenged to build models that can discern between real images, including ones that may have been manipulated or edited using non-AI methods, and fully synthetic AI-generated images.
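At its core this is a binary classification problem. The sketch below shows that framing only, with randomly generated stand-ins for feature vectors; the feature extractor, feature dimensionality, and class separation are all assumptions, and a real submission would compute features (e.g. frequency-domain artifact statistics) from actual images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-image feature vectors; real entries would come from
# a feature extractor run on authentic and AI-generated images.
real_feats = rng.normal(loc=0.0, scale=1.0, size=(200, 8))
fake_feats = rng.normal(loc=1.5, scale=1.0, size=(200, 8))

X = np.vstack([real_feats, fake_feats])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = real, 1 = AI-generated

# Fit a linear classifier and check training accuracy on the toy data.
clf = LogisticRegression().fit(X, y)
accuracy = clf.score(X, y)
```

The separation here is artificial; the challenge's difficulty lies in finding features that generalize across generators and survive benign, non-AI edits.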

Registration Open

Challenge 2

6/3/24 - 6/26/24

AI-Generated Image Model Attribution

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately attribute and classify synthetic AI-generated images to a specific base model. Participants will be challenged to build capabilities that can attribute AI-generated images to known available models.

Upcoming

Challenge 3

July 2024

Avatar Attribution: Synthetic Talking-Head Videos

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately classify self-reenactment vs. cross-reenactment, i.e., attribute whether the video is being driven by the person shown in the video (the target identity) or by someone else (a puppeteer).

Upcoming

Challenge 4

August 2024

Detection of AI-Generated Synthetic Audio

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately distinguish AI-generated audio from real audio. Participants will be challenged to build models that can discern between real audio and audio created by generative AI, regardless of whether that audio is of a real person.

Upcoming

This website is not a Department of Defense (DoD) or Defense Advanced Research Projects Agency (DARPA) website and is hosted by a third-party non-government entity. The appearance of the DARPA logo does not constitute endorsement by the DoD/DARPA of non-U.S. government information, products, or services. Although the host may or may not use this site as additional distribution channels for information, the DoD/DARPA does not exercise editorial control over all information you may encounter.