China has just released Yuan 3.0 Ultra, a one-trillion-parameter AI model built on a Mixture-of-Experts (MoE) architecture. Unusually, the model pruned roughly 33% of its own parameters during training and became faster as a result, boosting efficiency by about 49%. The resulting trillion-parameter system competes with models such as GPT-5.2, Gemini 3.1 Pro, Claude Opus 4.6, DeepSeek V3, and Kimi K2.5 across reasoning, coding, retrieval, and enterprise AI tasks.
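The core idea, routing tokens to a few experts and dropping the least-used ones, can be sketched with a toy MoE layer. Everything below (layer sizes, the usage-based pruning rule, the function names) is an illustrative assumption, not a description of Yuan 3.0 Ultra's actual training method.

```python
# Toy Mixture-of-Experts layer with usage-based expert pruning.
# Hypothetical sketch: sizes and pruning rule are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, D_MODEL, TOP_K = 8, 16, 2

# Each expert is a simple linear map; the router scores experts per token.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1 for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.1

def moe_forward(x, experts, router_w):
    """Route each token to its top-k experts; return output and per-expert usage."""
    logits = x @ router_w                      # (tokens, n_experts)
    top_k = np.argsort(logits, axis=-1)[:, -TOP_K:]
    usage = np.zeros(router_w.shape[1])
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = top_k[t]
        gates = np.exp(logits[t, sel])
        gates /= gates.sum()                   # softmax over the selected experts
        for g, e in zip(gates, sel):
            out[t] += g * (x[t] @ experts[e])
            usage[e] += 1
    return out, usage

tokens = rng.standard_normal((64, D_MODEL))
_, usage = moe_forward(tokens, experts, router_w)

# Prune the least-used ~1/3 of experts and keep routing over the remainder,
# mirroring the "remove a third of the parameters" claim at toy scale.
n_drop = N_EXPERTS // 3
keep = np.argsort(usage)[n_drop:]
pruned_experts = [experts[i] for i in keep]
pruned_router = router_w[:, keep]

out, _ = moe_forward(tokens, pruned_experts, pruned_router)
print(out.shape, len(pruned_experts))          # (64, 16) 6
```

Because only TOP_K experts run per token, dropping whole experts shrinks memory and routing cost without touching the per-token compute path, which is why MoE pruning can improve speed rather than degrade it.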
