Meta’s methods for detecting deepfakes are “not robust or comprehensive enough” to handle how quickly disinformation spreads during armed conflicts such as the Iran war. That’s according to the Meta Oversight Board, a semi-autonomous body that guides the company’s content moderation practices, which is now demanding that Meta change how it handles and labels AI-generated content on Facebook, Instagram and Threads.
The call for action stems from an investigation into a fake AI video of alleged damage to buildings in Israel that was shared on Meta’s social platforms last year, but the board says its recommendations are particularly relevant now, given the “massive military escalation” across the Middle East this week. In its announcement, the board says access to accurate, reliable information is critical to protecting people amid the growing threat of AI tools being used to spread misinformation.
“The Board’s findings highlight that Meta’s current system for accurately labeling AI content relies too heavily on automated detection and user self-disclosure, and does not meet the realities of today’s online environment,” said the Meta Oversight Board. “This case also highlights the challenges of cross-platform dissemination of such content, with the video appearing to have originated on TikTok before spreading to Facebook, Instagram and X.”
Recommended actions released by the board include pushing Meta to improve its existing rules on misinformation to address misleading deepfakes, and establishing a new, separate community standard for AI-generated content. Meta is also being asked to develop better AI detection tools, be transparent about penalties for AI policy violations, and scale up its AI content labeling efforts. The latter includes ensuring that AI-generated images and videos carry “high-risk AI” labels more consistently, and improving adoption of the C2PA standard (also known as Content Credentials) so that provenance information on AI-generated content is “clearly visible and accessible to users.”
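To make the C2PA recommendation concrete: Content Credentials are signed provenance metadata embedded in the media file itself, so a platform can check for them at upload time without relying on AI-based detection at all. The sketch below is purely illustrative, not drawn from Meta’s systems or the board’s decision; it assumes a JPEG input and uses the publicly documented fact that C2PA manifests travel in JPEG APP11 marker segments as JUMBF boxes labeled "c2pa". It only detects that a manifest is present; validating its signatures would require a full C2PA toolkit.

```python
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Heuristic presence check for a C2PA manifest in a JPEG.

    C2PA content credentials are carried in JPEG APP11 (0xFFEB) marker
    segments as JUMBF boxes whose description box is labeled "c2pa".
    This checks for that signature only; it does NOT verify signatures
    or parse the manifest itself.
    """
    with open(path, "rb") as f:
        data = f.read()

    if data[:2] != b"\xff\xd8":  # SOI marker missing: not a JPEG
        return False

    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost sync with the marker structure; stop scanning
        marker = data[i + 1]
        if marker == 0xFF:  # fill byte padding between markers
            i += 1
            continue
        if marker == 0xD9:  # EOI: end of image
            break
        if 0xD0 <= marker <= 0xD7 or marker == 0x01:  # RST/TEM: no length field
            i += 2
            continue
        (seg_len,) = struct.unpack(">H", data[i + 2 : i + 4])
        if seg_len < 2:  # corrupt length field; bail out
            break
        segment = data[i + 4 : i + 2 + seg_len]
        # APP11 segments carrying JUMBF start with the "JP" common
        # identifier; a C2PA manifest store is a box labeled "c2pa".
        if marker == 0xEB and segment[:2] == b"JP" and b"c2pa" in segment:
            return True
        if marker == 0xDA:  # SOS: entropy-coded data follows; APP segments are done
            break
        i += 2 + seg_len

    return False

if __name__ == "__main__":
    for p in sys.argv[1:]:
        print(p, "->", "C2PA manifest found" if has_c2pa_manifest(p) else "no manifest")
```

In production, a platform would use a complete C2PA implementation to validate the signing certificate chain and surface the credential to users. The point of the sketch is simply that provenance checks of this kind are cheap and deterministic, which is why the board treats wider C2PA adoption as complementary to, rather than a replacement for, better AI detection tools.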
