How AI is being used to spread misinformation
A new research paper from ShadowDragon examines how AI tools such as ChatGPT are being used to spread hate and misinformation via fake reviews and deepfakes.
Written by Nico Dekens, director of intelligence, collection innovation at ShadowDragon, the paper looks at how to identify AI-generated material online that is being used to intentionally spread false information, or worse.
By searching for the boilerplate error messages ChatGPT produces when a prompt violates its terms of service or asks for something it is not capable of doing, ShadowDragon has been able to find AI-generated fake reviews, social media posts, hate speech, fake blogs and more.
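To make the idea concrete, here is a minimal sketch of that detection approach in Python. The phrases below are widely reported examples of ChatGPT refusal boilerplate, not ShadowDragon's actual search strings, and the function name is hypothetical:

```python
# Illustrative only: a few boilerplate phrases ChatGPT commonly emits when it
# refuses a request or hits a limitation. These are commonly cited examples,
# not the specific indicators from the ShadowDragon paper.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill that request",
    "i'm sorry, but i cannot",
    "violates openai's content policy",
]

def looks_ai_generated(text: str) -> bool:
    """Return True if the text contains a known ChatGPT boilerplate phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

# Example: a fake review where the refusal text leaked into the output.
review = ("Great product! As an AI language model, I cannot express personal "
          "opinions, but this blender exceeded my expectations.")
print(looks_ai_generated(review))  # True
```

The technique works because careless operators paste model output verbatim, leaving these refusal strings behind as a fingerprint.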
The research also found that ChatGPT often lies about the tasks it is given; in other words, it makes mistakes and then misrepresents them.
One issue the paper highlights is the use of ChatGPT to generate fake reviews on shopping sites. Because it has been trained on large datasets of reviews, it can create new ones that appear to be written by real people. These fake reviews can be used to manipulate consumer opinion, harm competitors, and deceive customers.
Dekens' personal research, along with conversations with other open source intelligence (OSINT) investigators, has helped formulate specific searches that allow deeper investigation into the misuse of ChatGPT for malicious purposes, along the lines of the sketch below.
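As a hedged illustration of that kind of query building, the snippet below composes search-engine queries that pair telltale phrases with specific platforms. The phrases and site list are assumptions for demonstration, not the queries from the paper:

```python
# Hypothetical example of building OSINT-style search queries that look for
# leaked ChatGPT boilerplate on particular platforms. Phrases and sites are
# illustrative stand-ins, not taken from the ShadowDragon research.
PHRASES = [
    '"as an AI language model"',
    '"I cannot fulfill that request"',
]
SITES = ["amazon.com", "twitter.com", "reddit.com"]

queries = [f"{phrase} site:{site}" for phrase in PHRASES for site in SITES]
for query in queries:
    print(query)
# e.g. "as an AI language model" site:amazon.com
```

Hits on queries like these can then be pivoted into the accounts and platforms hosting the content, which is the workflow Dekens describes.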
Dekens concludes, "It has been proven that ChatGPT lies, and extra fact-checking and validation is always needed. We can use OSINT tradecraft search techniques to find and expose ChatGPT generated false, fake or offensive content, and we can use that information to pivot into the user accounts and platforms that are hosting and spreading these wrong pieces of AI generated content."
You can get the full research paper, with advice on spotting fakes, from the ShadowDragon blog.
Image credit: Skorzewiak/depositphotos.com