China is making significant strides in artificial intelligence with the development of its first AI models based on the Mixture-of-Experts (MoE) architecture. The effort, led by China Telecom, reflects the country's growing capabilities in AI technology, built in particular on advanced chips from Huawei Technologies.
Introduction of TeleChat3 Models
The newly introduced TeleChat3 models, whose parameter counts range from 105 billion into the trillions, were trained on Huawei's Ascend 910B chips using the MindSpore deep learning framework. A technical paper released last month describes how the Huawei hardware and software stack addressed the significant bottlenecks involved in training large-scale MoE models.
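The MoE design mentioned above replaces a single large feed-forward block with many smaller "expert" networks, of which only a few are activated per token by a learned router. The following is a minimal, illustrative sketch of that routing idea in NumPy; the dimensions, weight initialization, and top-2 gating scheme here are assumptions for demonstration, not details of the TeleChat3 models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- illustrative only, far smaller than any production model.
d_model, d_ff, n_experts, top_k = 8, 16, 4, 2

# Each expert is a small two-layer feed-forward network (assumed structure).
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.1,
     rng.standard_normal((d_ff, d_model)) * 0.1)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.1  # gating weights

def moe_forward(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                            # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of top-k experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        gates = np.exp(sel - sel.max())
        gates /= gates.sum()                       # softmax over selected experts
        for g, e in zip(gates, top[t]):
            w1, w2 = experts[e]
            out[t] += g * (np.maximum(x[t] @ w1, 0) @ w2)  # ReLU feed-forward
    return out

tokens = rng.standard_normal((5, d_model))
y = moe_forward(tokens)
print(y.shape)  # (5, 8)
```

Because only top_k of the n_experts run per token, total parameter count can grow far beyond what any single forward pass touches, which is what makes trillion-parameter models tractable and also what creates the training bottlenecks (load balancing, expert communication) that the technical paper discusses.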
Performance Comparison with OpenAI's GPT-3
Despite these advancements, researchers have noted that the TeleChat3 models do not yet match the performance of OpenAI's GPT-3 120B, underscoring how quickly performance benchmarks in AI development continue to evolve.
Implications for the Tech Industry
As China Telecom pushes forward with its AI initiatives, the work signals that large MoE models can be trained end-to-end on domestic hardware, with substantial implications for the broader tech industry and for AI research.
DeepSeek, another prominent AI model developer from China, has emerged as a significant player in the global tech landscape, offering a point of comparison for the advances made by China Telecom.