OpenAI has announced that it blocked several accounts linked to Iran for spreading disinformation about the US elections and other politically significant events.
Detection and Blocking
OpenAI recently stated that it had identified and blocked several accounts belonging to an Iranian operation known as 'Storm-2035.' These accounts were using ChatGPT to generate and spread false information about the US election, the conflict in Gaza, and Israel's participation in the Olympic Games.
Methods and Scale of Operations
OpenAI's investigation revealed that the accounts were producing both long-form articles and short social media posts, though most of the content attracted minimal engagement. The operation was assessed as a low-level threat on the Brookings Breakout Scale, which measures the impact of covert influence operations.
Response and Consequences
OpenAI found that the operation targeted both liberal and conservative voters. Some accounts even reposted comments from real users in an attempt to make their activity appear more genuine. Even so, the misinformation campaign had little measurable effect. The timing of OpenAI's announcement is notable: it came just a week after Trump's campaign claimed it had been attacked by Iranian hackers.
OpenAI's decisive action against the Iran-linked accounts underscores its commitment to safeguarding information integrity and countering foreign interference in critical political processes.