OpenAI Bans Accounts Misusing ChatGPT for Surveillance and Influence Campaigns

OpenAI announced on Friday that it had banned a set of accounts found using its ChatGPT tool to develop a suspected AI-powered surveillance system.

The surveillance tool is believed to originate in China and is powered by one of Meta’s Llama models. The banned accounts used OpenAI’s models to generate detailed descriptions and analyze documents for a system capable of collecting real-time data and producing reports on anti-China protests in Western countries, which were then shared with Chinese authorities.

The operation has been dubbed Peer Review, a nod to the network’s habit of promoting and reviewing its surveillance tooling. OpenAI researchers Ben Nimmo, Albert Zhang, Matthew Richard, and Nathaniel Hartley noted that the tool is designed to ingest and analyze content from platforms such as X (formerly Twitter), Facebook, YouTube, Instagram, Telegram, and Reddit.

In one instance flagged by OpenAI, the perpetrators used ChatGPT to debug and modify the source code that runs the surveillance software, known as “Qianyue Overseas Public Opinion AI Assistant.”

The network also used OpenAI’s model as a research tool to gather publicly available information on think tanks in the United States and on officials and politicians in countries such as the United States, Australia, and Cambodia. In addition, it exploited ChatGPT’s ability to read, translate, and analyze screenshots of English-language documents.

Some of these images contained announcements about Uyghur rights protests in various Western cities, potentially sourced from social media. It remains unclear whether these images were genuine.

OpenAI also reported disrupting several other groups involved in malicious activities using its tool:

Deceptive Employment Scheme: A North Korean network linked to a fraudulent IT worker scheme. This operation involved creating fake documents for non-existent job applicants, including resumes, cover letters, and job profiles, and crafting convincing explanations for unusual behavior like avoiding video calls or working odd hours. Some of these fake applications were shared on LinkedIn.

Sponsored Discontent: A likely Chinese-origin network involved in creating anti-U.S. social media content in English, and long-form articles in Spanish criticizing the U.S., which were later published by news outlets in Peru, Mexico, and Ecuador. Some of this activity overlaps with an existing cluster of operations known as Spamouflage.

Romance-baiting Scam: A group of accounts involved in translating and generating comments in Japanese, Chinese, and English for social media platforms such as Facebook, X, and Instagram, in connection with suspected romance and investment scams originating from Cambodia.

Iranian Influence Nexus: A network of five accounts that generated X posts and articles that were pro-Palestinian, pro-Hamas, and pro-Iran, as well as anti-Israel and anti-U.S. The content was later shared on websites associated with Iranian influence operations tracked as the International Union of Virtual Media (IUVM) and Storm-2035. One banned account was used to create content for both operations, suggesting a previously unreported connection between them.

Kimsuky and BlueNoroff: A network of North Korean-linked accounts that gathered information on cyber intrusion tools and cryptocurrency-related topics, and debugged code used in Remote Desktop Protocol (RDP) brute-force attacks.

Youth Initiative Covert Influence Operation: A network of accounts creating English-language articles for a website named “Empowering Ghana” and social media comments related to the Ghanaian presidential election.

Task Scam: A network likely originating from Cambodia, involved in translating comments between Urdu and English as part of a scam that tricks people into completing simple tasks (such as liking videos or writing reviews) in exchange for non-existent commissions. Victims are required to pay money to access these tasks.

These actions highlight the increasing use of AI tools by malicious actors to facilitate cyber-enabled disinformation campaigns and other harmful operations.

Last month, Google’s Threat Intelligence Group (GTIG) revealed that more than 57 distinct threat actors from China, Iran, North Korea, and Russia had been utilizing the Gemini AI chatbot to enhance various stages of the attack cycle. These groups used it for research, content creation, translation, and localization.

OpenAI stated, “The unique insights that AI companies can glean from threat actors are especially valuable when shared with upstream providers, such as hosting services and software developers, downstream platforms like social media companies, and open-source researchers.”

“Likewise, insights from upstream and downstream providers, as well as researchers, provide AI companies with new ways to detect and enforce actions against these threat actors.”
