About AI FORCE

The AI Forensics Open Research Challenge Evaluations (AI FORCE) are a series of publicly available challenges related to generative AI capabilities. In keeping with DARPA's goal of developing techniques to mitigate the threats posed by state-of-the-art AI systems, the AI FORCE challenges will help DARPA and the broader community collaborate on research that results in safe, secure, and trustworthy AI systems.

We will publish a series of tasks in 4-week increments. Tasks will involve the detection and attribution of AI-generated media, including images, audio, and video.

This effort is being funded by DARPA’s Semantic Forensics (SemaFor) program. Additional information about SemaFor is available on DARPA's website.

Challenge 4

Upcoming

Detection of AI-Generated Synthetic Audio

Overview

As with most generative AI, this technology offers numerous benefits to its users, and the use of generative AI to produce synthetic audio and voice clones of people is no different. It allows users to create synthetic audio of non-existent people, of themselves, and of others. However, when users create synthetic audio for malicious purposes, or voice clones of people without their consent, that represents a significant threat and a misuse of the technology. The ability to detect whether audio was generated by AI, or whether it is a voice clone of a person, is becoming a necessary step in differentiating what is real from what is fake in the media landscape.

Challenge Description

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately distinguish AI-generated audio from real audio. Participants will be challenged to build models that can discern between real audio and audio created by generative AI, regardless of whether that audio is of a real person.
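
As an illustration only (not part of the official challenge materials), a minimal baseline for this task might summarize each clip as mel-spectrogram statistics and train a binary real-vs-synthetic classifier. The sketch below assumes a hypothetical local layout of data/real/*.wav and data/synthetic/*.wav and uses librosa and scikit-learn; participants would substitute their own data handling and models.

# Minimal baseline sketch: classify real vs. AI-generated audio clips
# using average log-mel features and logistic regression.
# The data/real and data/synthetic folders are illustrative assumptions.
from pathlib import Path

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score


def clip_features(path, sr=16000, n_mels=64):
    """Load a clip and summarize it as the mean log-mel energy per band."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    return log_mel.mean(axis=1)  # one fixed-length vector per clip


def load_dataset(root):
    """Label real clips 0 and synthetic clips 1."""
    X, y = [], []
    for label, sub in enumerate(["real", "synthetic"]):
        for wav in sorted((root / sub).glob("*.wav")):
            X.append(clip_features(wav))
            y.append(label)
    return np.array(X), np.array(y)


if __name__ == "__main__":
    X, y = load_dataset(Path("data"))
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]
    print("ROC AUC:", roc_auc_score(y_te, scores))

Printing the ROC AUC on held-out clips gives a rough sense of how separable the two classes are before moving to stronger deep-learning approaches.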

Research Questions

  • Is it possible to differentiate between AI-generated audio of a person (speech) and real human speech?
  • Does audio post-processing of real human speech impact the ability to differentiate between real and AI-generated speech?

Schedule

  • Registration Opens: July 2024
  • Start Date: August 2024

Note: All deadlines are at 11:59 PM EST on the corresponding day unless otherwise noted. The challenge organizers reserve the right to update the contest timeline if they deem it necessary.

All Challenges

Challenge 1

5/6/24 - 6/30/24

AI-Generated Image Detection

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately detect synthetic, AI-generated images. Participants will be challenged to build models that can discern between real images, including ones that may have been manipulated or edited using non-AI methods, and fully synthetic AI-generated images.

Registration Open

Challenge 2

6/3/24 - 6/26/24

AI-Generated Image Model Attribution

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately attribute and classify synthetic, AI-generated images to a specific base model. Participants will be challenged to build capabilities that can attribute AI-generated images to known, available models.

Upcoming

Challenge 3

July 2024

Avatar Attribution for Synthetic Talking-Head Videos

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately detect and classify self-reenactment vs. cross-reenactment, i.e., attribute whether the video is being driven by the person shown in the video (the target identity) or by someone else (a puppeteer).

Upcoming

Challenge 4

August 2024

Detection of AI-Generated Synthetic Audio

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately distinguish AI-generated audio from real audio. Participants will be challenged to build models that can discern between real audio and audio created by generative AI, regardless of whether that audio is of a real person.

Upcoming

This website is not a Department of Defense (DoD) or Defense Advanced Research Projects Agency (DARPA) website and is hosted by a third-party non-government entity. The appearance of the DARPA logo does not constitute endorsement by the DoD/DARPA of non-U.S. government information, products, or services. Although the host may or may not use this site as an additional distribution channel for information, the DoD/DARPA does not exercise editorial control over all information you may encounter.