About AI FORCE

The AI Forensics Open Research Challenge Evaluations (AI FORCE) are a series of publicly available challenges related to generative AI capabilities. In line with DARPA's goal of developing techniques to mitigate the threats posed by state-of-the-art AI systems, the AI FORCE challenges will help DARPA and the broader community collaborate on research toward safe, secure, and trustworthy AI systems.

We will publish a series of tasks in 4-week increments. Tasks will involve the detection and attribution of generative AI media including images, audio, and video.

This effort is being funded by DARPA’s Semantic Forensics (SemaFor) program. Additional information about SemaFor is available on DARPA's website.

Challenge 2

Upcoming

AI-Generated Image Model Attribution

Overview

Generative AI technology continues to become easier to use and more widely adopted, and new models, including fine-tuned models, are appearing at a staggering rate. Many image-sharing outlets, such as social media sites, have seen a significant increase in the amount of AI-generated content being shared. One report from August 2023 notes that “since the launch of DALLE-2, people are creating an average of 34 million images per day” (Valyaeva 2023). Though many of these images are benign, threats related to disinformation, privacy, and offensive or illegal content have surfaced. Much of this threat stems from these models' ability to create photorealistic images that mimic photography. Beyond the development and adoption of capabilities to differentiate AI-generated images from real photography, the ability to attribute AI-generated images to specific models is essential for ensuring transparency and trustworthy digital media.

Challenge Description

The objective of this competition is to develop innovative and robust machine learning or deep learning models that can accurately attribute and classify synthetic AI-generated images to the base model that created them. Participants will be challenged to build capabilities that can attribute AI-generated images to known, publicly available models.
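To make the attribution task concrete, here is a minimal, purely illustrative sketch. It assumes, hypothetically, that each generator leaves a faint but consistent "fingerprint" in an image's feature statistics (simulated below with a fixed per-model offset on random feature vectors), and attributes held-out samples to the nearest per-model centroid. Real entries would of course extract learned features from actual images; every name and number here is invented for illustration.

```python
import numpy as np

# Toy sketch: model attribution framed as multi-class classification.
# Assumption (hypothetical): each generator imprints a consistent bias
# ("fingerprint") on extracted image features; we simulate that here.
rng = np.random.default_rng(0)
n_models, n_feats = 3, 16
fingerprints = rng.normal(0.0, 1.0, size=(n_models, n_feats))  # hidden per-model bias

def make_samples(model_id, n):
    # Simulated feature vectors for images produced by one generator
    return fingerprints[model_id] + rng.normal(0.0, 0.3, size=(n, n_feats))

# Training set: labeled images from each known, publicly available model
X_train = np.vstack([make_samples(m, 50) for m in range(n_models)])
y_train = np.repeat(np.arange(n_models), 50)

# Nearest-centroid attribution: average the features per model,
# then assign a new image to the closest centroid.
centroids = np.stack([X_train[y_train == m].mean(axis=0) for m in range(n_models)])

def attribute(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# Held-out images should attribute back to their source model
X_test = np.vstack([make_samples(m, 20) for m in range(n_models)])
y_test = np.repeat(np.arange(n_models), 20)
preds = np.array([attribute(x) for x in X_test])
accuracy = (preds == y_test).mean()
```

The nearest-centroid rule stands in for whatever classifier a participant actually trains; the point is only the task shape: N known models, one label per image.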

Research Questions

  • Is it possible to attribute AI-generated images to the specific models or tools that were used to generate them?
  • How reliably can different architectures or models be distinguished from one another?

Schedule

  • Registration Opens: May 2024
  • Start Date: June 3, 2024
  • Final Submission Deadline: June 26, 2024

Note: All deadlines are at 11:59 PM EST on the corresponding day unless otherwise noted. The challenge organizers reserve the right to update the contest timeline if they deem it necessary.

Submission Requirements and Instructions

See the AI FORCE Analytic Template on GitLab.

All Challenges

Challenge 1

5/6/24 - 6/30/24

AI-Generated Image Detection

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately detect synthetic AI-generated images. Participants will be challenged to build models that can discern between real images, including ones that may have been manipulated or edited using non-AI methods, and fully synthetic AI-generated images.

Registration Open

Challenge 2

6/3/24 - 6/26/24

AI-Generated Image Model Attribution

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately attribute and classify synthetic AI-generated images to a specific base model. Participants will be challenged to build capabilities that can attribute AI-generated images to known, publicly available models.

Upcoming

Challenge 3

July 2024

Avatar Attribution: Synthetic Talking-Head Videos

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately classify self-reenactment vs. cross-reenactment in synthetic talking-head videos, i.e., attribute whether the video is being driven by the person shown in the video (the target identity) or by someone else (a puppeteer).

Upcoming

Challenge 4

August 2024

Detection of AI-Generated Synthetic Audio

The objective of this challenge is to develop innovative and robust machine learning or deep learning models that can accurately distinguish AI-generated audio from real audio. Participants will be challenged to build models that can discern whether audio is real or was created by generative AI, regardless of whether that audio is of a real person.

Upcoming

This website is not a Department of Defense (DoD) or Defense Advanced Research Projects Agency (DARPA) website and is hosted by a third-party non-government entity. The appearance of the DARPA logo does not constitute endorsement by the DoD/DARPA of non-U.S. government information, products, or services. Although the host may or may not use this site as additional distribution channels for information, the DoD/DARPA does not exercise editorial control over all information you may encounter.