Has it become mission impossible to distinguish the truth from fake news in the sea of content published every day? Aude Favre, journalist, specialist in fake news and conspiracy theories, and creator of the YouTube channel WTFake has taken up the challenge. She shares her thoughts and tools to detect AI-generated content designed to manipulate the truth.
journalist and content creator
How do you view automated AI-generated content?
This practice raises many questions within the profession. Is it acceptable for articles written by journalists to coexist with AI-generated text and photos? Or will AI eventually replace journalists? Personally, I don't feel threatened, because AI is not capable of doing what I do today: calling a source, cross-checking information, or verifying the facts. I feel the industry is not being undermined by AI, simply because investigative journalism requires humans. This emerging technology will revolutionize our industry in the long term, but perhaps in a positive way, by helping journalists focus on their core business: investigation.
How do you tackle AI-based deepfake content?
This is a huge challenge because manipulation is becoming more professional, and the techniques used for deception and fake news far outnumber those available to journalists to counter them. What we can achieve is a drop in the ocean of misinformation. Nevertheless, we must do something. Everyone can come together to fight those spreading misinformation, because yes, this is a war!
From my point of view, the need for media literacy has become more urgent and should be integrated into the national curriculum at scale, even if a full deployment is complex. Far too many students leave middle school without a single minute of training on this subject, especially since they are so exposed to false rumors and covert advertising. It’s irresponsible to deprive them of the tools to decipher this on their own, although I think they are harder to fool than older audiences.
How can journalists verify that content is authentic?
The core of journalistic work is to research whether the source that first posted the content is trustworthy. If we see content that has never been published before or authenticated by a journalist, then we must legitimately question whether it’s true. The good news is that, in most cases, information can be verified using simple tools accessible to everyone. Until recently, deepfakes were harder to detect because there were no reliable tools to do so. However, this can change very quickly, as progress is extremely rapid.
Is it possible to combat fake content or are we fighting a losing battle?
All is not lost yet, but it will take a collective effort, from public authorities to journalists, to raise as much awareness as possible. I can see that many people don’t want to live in a world intoxicated by false or misleading content, which was the motivation behind creating a large citizen-powered newsroom where anyone can investigate with me via Discord. That work became a documentary series broadcast on arte.tv called Citizen Facts. My goal is to bring together as many people as possible, in France and abroad, who refuse to be manipulated and will therefore work as a network, sharing tools to detect fake news and develop critical thinking.
With AI becoming more widespread, how can citizens ensure they’re getting secure and reliable information?
I have my own idea: by trusting journalists who are information professionals, in the same way that if someone wants to repair their car, they should go to a good mechanic. Above all, journalists are there to enlighten, investigate, and inform as best as possible. That’s not to say they don’t make mistakes, but I see them at work, and I can tell you that they’re working hard to produce quality information.
Developed by Aude Favre, the STAR method makes it easy for her and the members of her citizen editorial team to check that the information in front of them is accurate and reliable.
To do this, they need to tick four boxes:
S is for source. Who are the fact’s sources and are they trustworthy?
T is for travail (work). How was the investigation conducted? Which specialists were contacted, and can they vouch for the facts?
A is for author. Does the author of the content know the topic and are they credible?
R is for rigor. Is the writing accurate and are the terms used correct?
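As a rough illustration (my own sketch, not part of Favre's method or tooling), the four STAR checks can be modeled as a simple checklist, where a claim is considered reliable only if all four boxes are ticked:

```python
from dataclasses import dataclass

@dataclass
class StarCheck:
    """One STAR evaluation of a piece of content (illustrative field names)."""
    source_trustworthy: bool   # S: are the fact's sources trustworthy?
    work_documented: bool      # T (travail): was the investigation properly conducted?
    author_credible: bool      # A: does the author know the topic?
    writing_rigorous: bool     # R: is the writing accurate and precise?

    def passes(self) -> bool:
        # All four criteria must hold for the information to be considered reliable.
        return all((self.source_trustworthy, self.work_documented,
                    self.author_credible, self.writing_rigorous))

claim = StarCheck(source_trustworthy=True, work_documented=True,
                  author_credible=True, writing_rigorous=False)
print(claim.passes())  # False: the claim fails because rigor is missing
```

The point of the all-or-nothing rule is that a single weak link, such as an anonymous source or sloppy wording, is enough to withhold trust.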
InVID WeVerify: By detecting image manipulation (filters, edits, etc.), this tool allows you to check if a video or photo is authentic.
Reverse image search on Google: This is a basic tool for journalists. Simply upload a photo and Google will search for where it appears on the web, letting you trace its original publication.
Tineye: This site highlights the differences between two images so you can see any modifications made to the original such as retouching, cropping, etc.
Originality.ai: This authenticity control tool tells you if a text has been generated by an AI. Each scanned text is assigned a score. 100/100 means it was written by a human.