OpenAI announced it will not yet integrate its powerful deep research model into its API, citing the need to explore and mitigate AI risks, particularly the risk of AI being used to manipulate people's beliefs.
AI Manipulation Risks: OpenAI's Stance
OpenAI is withholding its deep research model from API integration while it analyzes the risks of AI-driven persuasion. In a whitepaper, the company said it needs to refine its methods for assessing how likely its models are to be misused to change people's beliefs in real-world scenarios. The primary goal is to curb the spread of misinformation at scale.
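In practice, "not integrated into the API" means the model has no publicly exposed model ID that developers can call. The sketch below, assuming the official openai Python SDK (v1.x), shows how a developer could check which model IDs are currently available; the ID "deep-research" is hypothetical, since OpenAI has not published an API identifier for this model.

```python
# Minimal sketch using the openai Python SDK (v1.x): list the model IDs
# exposed via the API and check for a hypothetical "deep-research" ID.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# client.models.list() returns an iterable page of Model objects, each with an .id
available_ids = {model.id for model in client.models.list()}
print("deep-research exposed via API:", "deep-research" in available_ids)
```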
Misinformation Threat: Real-World Examples
Fears about AI's potential to spread misinformation are backed by real-world examples. During Taiwan's recent election, AI-generated audio was circulated to mislead voters about candidates' positions. Criminals have also used AI in social engineering attacks, deploying fake celebrity likenesses to defraud consumers and corporations.
Performance and Persuasion Tests of the Deep Research Model
OpenAI's whitepaper reports the results of persuasiveness tests on the deep research model, a version of the company's o3 'reasoning' model. It outperformed OpenAI's other models at writing persuasive arguments but did not exceed the human baseline. While it excelled at the MakeMePay task, it fell short of GPT-4o on others, indicating room for further development.
OpenAI's decision to hold its deep research model back from the API reflects a cautious approach to AI development, underscoring the importance of safety and ethics as models grow more capable.