Christian Riess and Sandra Bergmann are working together with secunet AG to develop a tool that automatically and reliably detects AI-generated fake images.
At first glance, the case seems clear: A long crack runs through the conservatory window. Damage: around 3,000 euros. The claimant sent a photo of the evidence by email. As a precaution, an insurance claims handler verifies the image and looks for traces indicating that it was AI-generated. If such traces are found, it is likely a blatant case of insurance fraud. “It might sound like science fiction, but image verification is already used by insurance companies,” says Christian Riess. “A few years ago, we started a collaboration with Nürnberger Versicherung and developed such a program, which we are continuously refining.” The fundamental problem, however, is that developers are always one step behind: almost every month, new image generators come onto the market that they have to respond to.
Fakes are difficult to detect
Riess is head of the Multimedia Security research group at the Chair of IT Security Infrastructures at FAU. He is one of the top experts in Germany when it comes to image forensics, that is, examining images that have been manipulated for criminal purposes. Even as a doctoral candidate in Erlangen, he researched new technologies to improve the detection of counterfeit banknotes. Christian Riess has been investigating image manipulation for many years. In the vast majority of cases, there is no criminal intent behind using AI to generate images: They are created to illustrate journalistic and scientific articles, but even more often to entertain the social media community. “Photos and videos with manipulated content are spreading rapidly, and they look increasingly authentic,” says Sandra Bergmann, a doctoral candidate in the Riess group. “Often, fakes are no longer recognizable as such.” Images of the Pope as a DJ might be harmless, but it is far more problematic when politicians or celebrities are placed in compromising contexts. In a project launched in 2024, Bergmann and Riess are working on a solution to this problem: In conjunction with secunet Security Networks AG, the IT specialists in Erlangen are developing a universal prototype that should reliably detect deepfakes created by various AI generators. The project is funded with 725,000 euros by SPRIN-D, an initiative of the Federal Ministry of Education and Research that serves as an incubator for breakthrough innovations.

Telltale traces
The tool will recognize characteristic signatures of AI image processing. “Most generators currently on the market use diffusion models,” explains Bergmann. “They gradually transform random noise into realistic-looking images after learning from large amounts of data what certain objects and scenes look like.” With text-to-image generators like Stable Diffusion, this process is guided by text input; a new image is created based on this prompt. In the process, generators leave telltale traces in the image frequencies, which can be made visible in two-dimensional spectrograms. However, not all image generators work according to this principle, and it is also impossible to predict which image manipulation technologies will be developed in the coming years. That is why the detection tool the FAU researchers are developing is trained on vast numbers of real and AI-generated photos. Large pre-trained neural networks are also used to extract relevant image features. The goal of the FAU researchers is to combine as many detectors and data traces as possible into a robust prototype. Performance and reliability are the most important aspects of the project. The program is also designed so that it can be easily integrated into existing IT infrastructures. When everything is up and running, fake DJs could be quickly unmasked, as could fake cracks in conservatory windows.
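To make the idea of frequency traces concrete, here is a minimal sketch in Python of how such a two-dimensional spectrogram can be computed. The file name image.png is a placeholder assumed for illustration, and the actual FAU/secunet detector is a trained classifier, not a hand-written check like this.

```python
# Minimal sketch: making frequency traces visible in a 2D spectrogram.
# "image.png" is a placeholder file name, assumed for illustration only.
import numpy as np
from PIL import Image

def log_spectrum(path: str) -> np.ndarray:
    """Return the log-magnitude 2D frequency spectrum of an image."""
    img = np.array(Image.open(path).convert("L"), dtype=np.float64)
    img -= img.mean()                         # remove the DC offset for readability
    spec = np.fft.fftshift(np.fft.fft2(img))  # move zero frequency to the center
    return np.log1p(np.abs(spec))             # compress the huge dynamic range

spectrum = log_spectrum("image.png")
# The spectrum can now be inspected, for example plotted with matplotlib.
```

In such log-magnitude spectra, images from generative models often show periodic peaks or grid-like patterns that natural photos lack; a learned detector picks up on these and many subtler cues at once, rather than relying on a single fixed rule.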
AI image generators create images from prompts
AI image generators can establish connections between a text description, known as the prompt, and an image. The AI programs are first trained with a vast number of sample images from databases or the internet and learn to recognize shapes, colors, and patterns. When a prompt is sent, the AI begins to generate a new image from a random set of data, known as noise. To gradually improve the generated images, such tools use feedback mechanisms, for example “Generative Adversarial Networks,” or GANs for short. Here, two neural networks compete with each other: the generator and the discriminator. The discriminator assesses whether the generated images look real or artificial and in doing so pushes the generator to create increasingly realistic images.
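As an illustration of this generator-discriminator loop, here is a minimal GAN training step in PyTorch. The toy dimensions, architectures, and learning rates are illustrative assumptions and do not correspond to any particular production generator.

```python
# Minimal GAN sketch (toy dimensions, assumed for illustration only).
import torch
import torch.nn as nn

NOISE_DIM, IMG_DIM = 64, 784  # e.g. a 28x28 image, flattened

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),       # pixel values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                        # real/fake logit
)

loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake = generator(noise)

    # Discriminator: label real images 1, generated images 0.
    d_loss = loss(discriminator(real_batch), torch.ones(batch, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call the fakes "real".
    g_loss = loss(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Example usage with random stand-in data in place of real photos:
train_step(torch.randn(32, IMG_DIM))
```

Each step first sharpens the discriminator on real versus generated samples and then updates the generator against the discriminator's feedback; this adversarial loop is what gradually pushes the generated images toward realism.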
Mathias Münch

This article is part of the FAU Magazine
The third issue of the FAU Magazine #People is once again all about the people who make our FAU one of the best universities in the world. The examples in this issue show how lively and diverse our research is, the commitment of our students, and the work done in the areas supporting science.
One highlight is certainly the new research cluster “Transforming Human Rights.” You can also follow our researchers into laboratories and workshops, where they make potatoes climate-resistant, teach robots social behavior, or reconstruct ancient ships and cannons. At FAU, students are developing vertical take-off aircraft or impressing with outstanding performances at the Paralympics. And let’s not forget the people who work at our university or remain closely connected as FAU alumni. Visit the Children’s University with them, or watch a TV series with an FAU alumna and Grimme Award winner.
