The recent chess tournament between AI models from OpenAI and xAI, organized by Google, highlighted the contrasting approaches of two major players in the field of artificial intelligence.
One-Sided Final
The final, held on August 7, was a triumph for OpenAI’s o3 model, which decisively defeated xAI’s Grok 4 with a score of 4–0. The tournament, known as the “Kaggle Game Arena AI Chess Exhibition,” prohibited the use of chess engines or specialized training, requiring the models to rely solely on general knowledge gathered from the internet. The differences were apparent from the first games: former world champion Magnus Carlsen estimated the level of play at roughly 800 Elo, far below competitive standards.
Limits of General AIs
The tournament exposed the structural difficulties general-purpose AIs face when confronted with strictly rule-bound domains like chess. Several models were disqualified in the preliminary phase for attempting impossible actions, such as “teleporting” pieces or making other illegal moves. Even in the final, the models’ grasp of the rules seemed fragile, alternating between brilliant moves and absurd decisions. As Carlsen pointed out, “these AIs know how to count captured pieces, but not how to conclude a winning game.”
Conclusion of the Tournament
For Elon Musk, this defeat marked his second direct competitive loss this year, at a moment when xAI had just raised $10 billion and was seeking to position itself as a credible contender in the race toward general AI. The Google exhibition reminded the entire sector that today’s large models excel at natural language processing but struggle with the strict application of complex rules. AI may one day rival the best chess players, but to get there it will need to demonstrate capabilities beyond the black and white squares.
The tournament thus served as a useful milestone in gauging the current state of artificial intelligence, demonstrating that even the most advanced language models remain far from mastering strict rule-based tasks.