American Researchers: Social Media Platforms Struggle to Differentiate Between Real and Fake.
Determining what is real has become increasingly difficult on social media platforms. If you have spent time on Facebook in the past six months, you may have noticed strikingly lifelike images that are hard to believe: children holding paintings that look like the work of professional artists, or stunning wooden-cabin interiors that seem pulled straight from a fantasy. You may also have come across more bizarre creations, such as the artificially generated image of the Pope in a puffy jacket that went viral in March 2023.
Artificially Generated Images
Images generated by artificial intelligence (AI) are becoming more widespread and popular on social media platforms. Although many verge on the surreal, they are often used to lure ordinary users into engagement.
In this report, Renee DiResta, director of research at the Stanford Internet Observatory at Stanford University; Abhiram Reddy, research assistant at the Center for Security and Emerging Technology at Georgetown University; and Josh A. Goldstein, a research fellow at the same center, write: "Our team of researchers from the Stanford Internet Observatory and the Center for Security and Emerging Technology at Georgetown University investigated more than 100 Facebook pages that post large volumes of AI-generated content. We published our findings in March 2024 as a preprint, meaning the results have yet to undergo peer review."
Analyzing Image Patterns
The researchers identified patterns across the images, found evidence of coordination among some of the pages, and tried to discern the likely goals of the posters. Page managers appear to publish AI-generated images of children, kitchens, or birthday cakes for several reasons.
Discussing those reasons, the researchers point out that some content creators use synthetic content maliciously to grow their followings; fraudsters use pages stolen from small businesses to advertise products that do not appear to exist; and spammers share AI-generated images of animals while steering users to ad-filled websites, allowing the owners to collect advertising revenue without producing high-quality content.
The findings suggest that these AI-generated images do attract users, and that Facebook's recommendation algorithm may be organically promoting the posts.
Generative AI Employed for Fraud
How does generative AI intersect with fraud and spam? Spammers and online fraudsters have been around for more than two decades, using unsolicited email to promote dubious financial schemes and preying on the elderly by posing as health-care representatives or computer technicians. On social media, profiteers have long used clickbait articles to drive users to ad-filled websites and make money.
In the early 2010s, spammers grabbed attention with ads promising to melt belly fat or teach a new language with "one weird trick." Now, AI-generated content has become the new "weird trick": it is visually appealing and cheap to produce, allowing fraudsters and spammers to churn out large volumes of engaging posts.
Some of the pages observed uploaded dozens of unique images a day, following Meta's advice to page creators. The company suggests that frequent posting helps creators gain algorithmic traction, so that their content surfaces in users' Feeds (formerly known as the News Feed).
Much of this content is simply clickbait: an odd image makes people pause and stare, and it gets shared precisely because it is unusual. Many users engage by liking the post or leaving a comment. Some of the more established spammers we observed appear to have realized this and boosted their engagement by shifting to AI-generated images. More conventional creators also benefit from the engagement these images attract, without clearly violating platform policies.
Plans to Monitor AI-Produced Content
"Meta" is aware of the potential issues if AI-generated content merges into the information environment without warning. The company announced several plans to address AI-generated content. By May 2024,
Facebook will start applying a "Made with AI" tag to content it can reliably detect as synthetic.
The devil is in the details, though. How accurate are the detection models? What AI-generated content will slip through? What content will be incorrectly labeled? And how will the public react to such labels?
While our work focused on spam and fraud on Facebook, the implications are broader, including AI-generated videos targeting children on YouTube and TikTok influencers using generative AI for profit, as the press has previously reported.
Social media platforms will have to consider how to handle AI-generated content; user engagement could suffer if the online world becomes flooded with artificially generated posts, images, and videos. Thus, the challenge of determining what is real is only heating up.