Meta disrupts Russia’s AI-driven misinformation campaigns

Russian efforts to use AI to create misleading content have yielded only minor productivity gains, and Meta has successfully disrupted the resulting influence campaigns.

According to a Meta security report, Russia’s use of generative AI in online deception campaigns has so far been largely ineffective. Meta, the parent company of Facebook and Instagram, reported that while AI-powered tactics offer malicious actors some gains in productivity and content generation, they have not significantly advanced these influence operations. Despite growing concerns about generative AI being used to manipulate elections, Meta has successfully disrupted such operations.

The report highlights that Russia remains a leading source of ‘coordinated inauthentic behaviour’ on social media, particularly since its invasion of Ukraine in 2022. These operations have primarily targeted Ukraine and its allies, and Meta expects that as the US election nears, Russia-backed campaigns will increasingly attack candidates who support Ukraine. Meta’s approach to detecting these campaigns focuses on account behaviour rather than content alone, since influence operations often span multiple online platforms.

Meta has observed that posts on X are sometimes used to bolster fabricated content. While Meta shares its findings with other internet companies, it notes that X has significantly scaled back its content moderation, making the platform a haven for disinformation. Researchers have also raised concerns about X, now owned by Elon Musk, serving as a conduit for political misinformation. Musk, who supports Donald Trump, has been criticised for using his influence on the platform to spread falsehoods, including sharing an AI-generated deepfake video of Vice President Kamala Harris.