Hundreds of civil society groups have called on the leaders of major technology companies to intensify their battle against misinformation driven by artificial intelligence.
Fighting Fake News
More than 200 civil rights advocacy groups are urging leading tech firms to step up their efforts to counteract AI-fueled misinformation as billions of voters head to the polls this year for elections around the world.
In a letter sent Tuesday to the CEOs of Meta, Reddit, Google, and X, along with eight other tech executives, a coalition of activists pressed the companies to adopt stronger policies to counter the dangerous wave of political propaganda.
Demand for Deepfake Bans
The letter argues that these additional measures are especially crucial in 2024, when more than 60 countries are set to hold national elections. It was highlighted by technology reporter Naomi Nix in The Technology 202.
Marking AI-Generated Posts
Nora Benavidez, senior counsel at the digital rights group Free Press, noted, “A significant number of elections are happening around the world this year, and social media platforms are among the primary ways people usually engage with information.” That, she emphasized, is why companies need to “increase platform safety measures at this moment.”
The groups also called on tech giants to strengthen their policies on political advertisements, including banning deepfakes and labeling any content generated by artificial intelligence. For months, civil rights advocates have warned that the proliferation of AI-generated audio clips and videos is already sowing election confusion worldwide.
Experts warn that AI-fueled misinformation could inflict real harm on politically volatile democracies.
Watermarking Initiatives
Tech companies such as Meta, Google, and Midjourney say they are developing systems to identify AI-generated content using watermarks. Just last week, Meta announced that it would extend its AI labeling policy to cover a wider range of video, audio, and images.
However, experts say it is unlikely that tech companies will catch all misleading AI-generated content on their networks, or address the underlying algorithms that allow some of these posts to spread widely in the first place.
The groups also urged tech companies to be more transparent about the data underpinning their AI models, and criticized them for having weakened their policies and systems against political misinformation over the past two years.
Risks of Harmful Propaganda
The groups warn that if tech companies do not bolster their efforts, harmful propaganda on social media could lead to extremism or political violence. Frances Haugen, the former Facebook whistleblower whose group Beyond the Screen signed the letter, commented, “It’s not outside the realm of possibility that we will see more misinformation masquerading as deepfakes.” She added that countries with more fragile democracies than the United States are equally susceptible to these manipulations.