The rapid development of digital technologies in recent years has opened up both fascinating and unsettling possibilities, particularly in the area of synthetic content and deepfakes. Sophisticated AI-generated audio, images and video are now so realistic that they are often almost indistinguishable from authentic recordings.
With the exponential increase in the performance of systems and language models based on artificial intelligence (AI) and machine learning (ML), not only the quantity but also the quality of deepfakes is improving rapidly. This poses dangers for trust, security and the perception of reality in our society.
Efforts to combat deepfakes currently focus on two major areas: reliably detecting synthetic content and authenticating genuine content.
The prototype must demonstrate how deepfake images can be reliably detected and authenticated. It may be AI-supported and should be able to continuously adapt to new deepfake techniques. At least three different use cases (e.g. social media, news portals, video conferencing systems) will be demonstrated by the end of the process. Scalability and adaptability to different digital platforms are essential.
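The requirement to "continuously adapt to new deepfake techniques" is essentially an online-learning problem: the detector must be updatable as newly labelled samples of unseen generation methods arrive. The sketch below is purely illustrative and not the official SPRIND prototype; the class name, the feature vectors and the perceptron-style update rule are all assumptions chosen to keep the example self-contained, whereas a real system would use a deep model over image inputs.

```python
# Illustrative sketch only (hypothetical names, not the SPRIND prototype):
# a toy linear detector over precomputed image features that supports
# online updates, modelling continuous adaptation to new deepfake styles.

class DeepfakeDetector:
    """Toy linear classifier with perceptron-style online updates."""

    def __init__(self, n_features: int, lr: float = 0.1):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.lr = lr

    def score(self, features) -> float:
        # Linear decision function: w . x + b
        return sum(w * x for w, x in zip(self.weights, features)) + self.bias

    def predict(self, features) -> bool:
        """True if the sample is classified as a deepfake."""
        return self.score(features) > 0.0

    def adapt(self, features, is_fake: bool) -> None:
        """Update the model from one newly labelled sample,
        e.g. an image produced by a previously unseen generator."""
        target = 1.0 if is_fake else -1.0
        if self.score(features) * target <= 0.0:  # misclassified or on boundary
            for i, x in enumerate(features):
                self.weights[i] += self.lr * target * x
            self.bias += self.lr * target


# Usage: feed labelled samples as new deepfake techniques emerge.
det = DeepfakeDetector(n_features=3)
det.adapt([1.0, 0.5, 0.2], is_fake=True)   # hypothetical fake-image features
det.adapt([0.1, 0.9, 0.8], is_fake=False)  # hypothetical real-image features
```

The same update loop could run per-platform (social media, news portals, video conferencing), which is one way the scalability requirement might be addressed.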
The SPRIND Funke runs over a period of 13 months. At the end of October 2024, our expert jury selected twelve teams as participants for the first stage of the Funke.
SPRIND provides intensive and individualized support, which includes funding of up to €350,000 for each team in the first phase of the Funke. After seven months, the jury reconvenes to evaluate the progress and decide which approaches have the greatest potential for breakthrough innovation. The selected teams will then have the opportunity to prove themselves in a second phase of the Funke, which provides up to €375,000 per team in additional funding.
Multi-modal Deepfake Detection
ImVerif
FAU/secunet-solution
ReaLGuard
Content Transparency Archive (CTA): Verifiable Metadata
AI Robotic (VeriDeep)
Neuraforge
Valid - Trusted Information
Cinematic Context Aware AI Image Detection
DeepShield - The Disruptive Preventer
DeepFOCAS: DeepFake detection using Observable, Contextual, Accessible, and Semantic information
ClyraVision
Do you have further questions? Please feel free to contact us at challenge@sprind.org.