How it works

Step One

Add a social media post containing video, audio, or an image

Or upload your own files

Step Two

Quickly get aggregated results from dozens of AI detectors

"TrueMedia.org is a timely and much needed solution to this problem.”
Charles Salter, President & CEO of the News Literacy Project
“This should be built into every browser and social media site.”
Cliff Steinhauer, Director of National Cybersecurity Alliance

Our cause

The cost of creating and distributing deepfakes has plunged during one of the most important election years in history.

AI-based forgery will grow explosively due to the increased availability of generative AI and associated tools that facilitate manipulating and forging video, audio, images, and text.

Deepfakes

"Today is today" AI generated speech by Vice President Harris.
More
Appeared in
Ukrainian president's face was swapped onto the body of Russian prisoner following Moscow terrorist attack.
More
Appeared in
Swift is falsely portrayed holding a plate of "Vote Trump" cookies
More
Appeared in
AI used to fake an image of Trump's bodyguards smiling after he was shot.
More
Appeared in
Fake video of Will Smith on Biden's broken promises with the African American community.
More
Appeared in
Karine Jean-Pierre's briefing starts out real, but audio over Biden images is fake.
More
Appeared in
AI created news broadcast out of India
More
Appeared in
Biden voice clone urging voters not to turnout during primary election
More
Appeared in
Fake video of US diplomat on Ukraine War
More
Appeared in
A deepfake of Manhattan District Attorney Alvin Bragg clearing Trump of all charges and resigning
More
Appeared in
A generated image posted during the Hamas attack
More
Appeared in
The Pope sports a designer makeover courtesy of Midjourney
More
Appeared in
Manipulated Mother's Day family photo heightens health speculation
More
Appeared in
Indian Prime Minister Narendra Modi voice cloned into 30 regional languages
More
Appeared in
False arrest photos over indictment about alleged hush money payments
More
Appeared in
Fake video of Ukraine’s top security official claiming responsibility for the Crocus Center terrorist attack broadcasted by a Russian TV channel
More
Appeared in
Altered video showing Taylor Swift displaying flag saying 'Trump won'
More
Appeared in
A deepfake of the Kennedy family endorsing Joe Biden.
More
Appeared in
Deepfake army marching in formation with Palestinian keffiyeh scarf
More
Appeared in
Deepfake video impersonating the late Indian politician H Vasanthakumar
More
Appeared in

We integrate AI technology from partners, academia, and our own research & development.

Check for deepfakes directly on X

Submit media by tagging @truemediabot and we’ll reply with an analysis

Try a demo

Test a video, audio, or image now

See analysis

FAQ

What is a “deepfake”?
A deepfake is a video, photo, or audio recording that seems real but has been generated or manipulated with AI. The underlying technology can replace faces, manipulate facial expressions, and synthesize faces and speech. Deepfakes can depict someone appearing to say or do something they never said or did.
Do you share your database of deepfakes?
No, we do not share our database, for two reasons: first, to prevent leaks to bad actors; second, to avoid compromising the accuracy of our evaluations (if our evaluation set were used to train other detectors, we would lose the ability to measure their quality).
What file formats can you analyze?
Our technology can analyze suspicious media across audio, images, and videos. The maximum file size is 100MB. You can also upload your own files in the following formats (a minimal pre-upload check is sketched after this list):
- Video: mp4, webm, avi, mkv, wmv, mov
- Image: gif, jpg, png
- Audio: mp3, wav
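
For illustration only, here is a minimal Python sketch of that pre-upload check. The function name, the extension-based type detection, and the reading of "100MB" as binary megabytes are assumptions, not part of TrueMedia.org's actual upload pipeline.

```python
import os

# Accepted extensions per media type, mirroring the formats listed above.
ACCEPTED_FORMATS = {
    "video": {".mp4", ".webm", ".avi", ".mkv", ".wmv", ".mov"},
    "image": {".gif", ".jpg", ".png"},
    "audio": {".mp3", ".wav"},
}
MAX_FILE_SIZE_BYTES = 100 * 1024 * 1024  # assuming "100MB" means 100 MiB

def check_upload(path: str) -> str:
    """Return the media type if the file looks uploadable, else raise ValueError."""
    ext = os.path.splitext(path)[1].lower()
    media_type = next(
        (kind for kind, exts in ACCEPTED_FORMATS.items() if ext in exts), None
    )
    if media_type is None:
        raise ValueError(f"Unsupported format: {ext or 'missing extension'}")
    if os.path.getsize(path) > MAX_FILE_SIZE_BYTES:
        raise ValueError("File exceeds the 100MB limit")
    return media_type
```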
What social media platforms can you analyze?
You can copy and paste links directly from social media posts to analyze the images, audio, and video on those posts.

Accepted direct links: TikTok, X, Mastodon, YouTube, Reddit, Facebook, Instagram, and Google Drive.
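
As a rough illustration, the Python sketch below checks whether a pasted link points at one of those platforms before submitting it. The hostnames are assumptions (Mastodon in particular is federated across many domains), so treat this as a sketch rather than the service's actual validation.

```python
from urllib.parse import urlparse

# Assumed hostnames for the accepted platforms listed above.
ACCEPTED_HOSTS = {
    "tiktok.com", "x.com", "twitter.com", "mastodon.social", "youtube.com",
    "youtu.be", "reddit.com", "facebook.com", "instagram.com", "drive.google.com",
}

def is_accepted_link(url: str) -> bool:
    """True if the URL's host matches, or is a subdomain of, an accepted platform."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == accepted or host.endswith("." + accepted)
               for accepted in ACCEPTED_HOSTS)

print(is_accepted_link("https://www.tiktok.com/@user/video/123"))  # True
print(is_accepted_link("https://example.com/fake-video"))          # False
```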
How do you detect if media is a deepfake?
We use a set of different AI detectors, some developed in-house but most from technical partners. We partner with the best detection companies in the world to bring their cutting-edge technologies to our users. Our detectors look at different aspects of the media, broken down into four categories (a rough aggregation sketch follows this list):

1. Face Manipulation - Distinguishes deepfake faces from real ones and detects whether techniques such as face blending, swapping, or re-enactment were used.

2. Generated AI - Detects whether the image was created with popular tools, specifically DALL-E, Stable Cascade, Stable Diffusion XL, CQD Diffusion, Kandinsky, Wuerstchen, Titan, Midjourney, Adobe Firefly, Pixart, Glide, Imagen, Bing Image Creator, LCM, Hive, Deepfloyd, or any Generative Adversarial Network (GAN).

3. Visual Noise - Detects whether artifacts from manipulation or generation are present in an image, including pixel-level and color variation. When an AI tool creates or modifies an image, certain types of visual noise can remain.

4. Audio - Detects traces that audio has been manipulated or cloned.
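
As a rough illustration of what aggregating several detectors into these categories could look like, this Python sketch averages per-category scores. The detector names, the score scale, and the averaging rule are assumptions made for illustration, not TrueMedia.org's actual aggregation method.

```python
from statistics import mean

# Hypothetical detector output: each detector reports a category and a
# confidence (0.0-1.0) that the media is fake or manipulated.
detector_results = [
    {"detector": "partner_a", "category": "face_manipulation", "score": 0.91},
    {"detector": "in_house_b", "category": "generated_ai", "score": 0.12},
    {"detector": "partner_c", "category": "visual_noise", "score": 0.78},
    {"detector": "partner_d", "category": "audio", "score": 0.05},
]

def aggregate_by_category(results):
    """Average detector scores within each of the four categories."""
    by_category = {}
    for result in results:
        by_category.setdefault(result["category"], []).append(result["score"])
    return {category: mean(scores) for category, scores in by_category.items()}

print(aggregate_by_category(detector_results))
# {'face_manipulation': 0.91, 'generated_ai': 0.12, 'visual_noise': 0.78, 'audio': 0.05}
```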
How do you analyze edited videos?
Heavily edited videos with many quick cuts can trigger a suspicious verdict from our detectors, which look for discontinuities. Short-form, multi-clip videos like those on TikTok, Instagram Reels, or YouTube Shorts can therefore make analysis less reliable.
Can you test non-political deepfakes? Specifically, can you help with all the “nudefakes” in schools?
No. We understand there is an important need for tools that protect children from the dangers of deepfakes, but it is against our terms of service to add nude or child imagery to our tool. Our focus is on the political landscape and elections.
Are you English-only?
No. Our tool works in all languages, though results for non-English audio can vary.
Who is funding TrueMedia.org?
TrueMedia.org is a non-partisan, non-profit organization. The lead financial backer is Camp.org, and we accept contributions from individuals and other organizations. Consider a direct donation.

Donate to the cause