OpenAI disrupts influence operations linked to Russia, China, Iran, and Israel


OpenAI disrupts covert influence operations: Russian and Chinese networks tried to exploit generative artificial intelligence to manipulate political discourse in the US and China

OpenAI released a threat report today detailing how foreign influence operations from Russia, China, Iran, and Israel have attempted to use its technology. The report names five networks that the company shut down. Networks such as Russia’s Doppelganger and China’s Spamouflage have been experimenting with how to use generative artificial intelligence to automate their operations, according to the report. They’re also not very good at it.

“You can generate the content, but if you don’t have the distribution systems to land it in front of people in a way that seems credible, then you’re going to struggle getting it across,” said Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team. “And really what we’re seeing here is that dynamic playing out.”

He said that while AI does offer threat actors some benefits, including boosting the volume of what they can produce and improving translations across languages, it doesn’t help them overcome the main challenge of distribution.

None of the operations OpenAI disrupted relied exclusively on AI-generated content. “This wasn’t a case of giving up on human generation and shifting to AI, but of mixing the two,” Nimmo said.

In the past three months, OpenAI banned accounts linked to five covert influence operations, which it defines as “attempt[s] to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.”

Among them was a previously unreported Russian network, which OpenAI dubbed “Bad Grammar,” that operated mainly on Telegram. It used OpenAI’s tools to debug code for a program that posted automatically on Telegram, and used AI to generate the comments its accounts posted on the app. The operation sought to undermine support for Ukraine through posts weighing in on politics in the US and Moldova.

Spamouflage’s accounts, meanwhile, used artificial intelligence to debug code for a website targeting Chinese dissidents, to analyze social media posts, and to research news and current events. Some posts from the network’s fake accounts received replies only from other fake accounts in the same network.

Both Spamouflage and Doppelganger used OpenAI’s tools to generate comments in multiple languages, which were posted across social media sites. The Russian network also used AI to translate articles into English and French and to turn website articles into Facebook posts.

Both operations were already well known to social media companies and researchers before OpenAI’s report: Russia’s Doppelganger and the sprawling Chinese network dubbed Spamouflage have been tracked across platforms for years.

The global electorate must contend with this new technology. Deepfakes can be used for many different purposes, from sabotage to satire to the seemingly mundane. Artificial intelligence has been used to make world leaders appear to promote the joys of passive-income scams. AI has also been used to deploy bots and even to tailor automated texts to voters.

One researcher who has tracked Doppelganger found the network using real-seeming Facebook profiles to post articles, often around divisive political topics. “The actual articles are written by generative AI,” she says. The goal, she says, is to test what will fly and what Meta’s systems won’t catch.

The report says the AI-generated content failed to break out of the influence networks themselves and reach mainstream audiences, even when it was shared on popular platforms. That was the case with a campaign run by an Israeli company apparently working on a for-hire basis, which posted content ranging from anti-India to anti-Iran and anti-Qatar.

The report is OpenAI’s first of its kind. The company has become one of the leading players in artificial intelligence, and its ChatGPT chatbot has gained more than 100 million users.

Bad actors have used OpenAI’s tools, which include ChatGPT, to generate social media comments in multiple languages, make up names and bios for fake accounts, create cartoons and other images, and debug code.

Online influence operations based in Russia, China, Iran, and Israel are using artificial intelligence in their efforts to manipulate the public, according to a new report from OpenAI.

And while it’s a modest relief that these actors haven’t mastered generative AI to become unstoppable forces for disinformation, it’s clear that they’re experimenting, and that alone should be worrying.

Spamouflage’s social media attacks on the Chinese diaspora

In other cases, OpenAI’s tools were used to create code and content for websites. Spamouflage, for instance, used them to build a site publishing stories that attacked members of the Chinese diaspora who were critical of the country’s government.

Taken together, the report paints a picture of several relatively ineffective campaigns peddling crude propaganda, seemingly allaying fears that many experts have had about this new technology’s potential to spread mis- and disinformation, particularly during a crucial election year.