29 Apr, 2025

Chinese tech giant Alibaba has announced a new family of AI models — Qwen3 — which, according to the company, can compete directly with top systems from Google and OpenAI.

The Qwen3 models range from 0.6 billion to 235 billion parameters and are available under an open license on Hugging Face and GitHub. Alibaba describes them as "hybrid" models, capable of either quickly answering simple queries or allocating more resources for complex reasoning — similar to OpenAI’s latest systems.

Some models use a Mixture of Experts (MoE) architecture, allowing tasks to be split into subtasks handled by specialized "experts" to improve efficiency.
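The routing idea behind MoE can be sketched in a few lines. The toy below is illustrative only, not Alibaba's implementation: the gating here is a simple dot-product score, the experts are stand-in functions, and names like `moe_forward` are invented for this example. The efficiency gain comes from running only the top-scoring experts per token instead of the full network.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token_vec, experts, gate_weights, top_k=2):
    """Toy MoE layer: score every expert for this token, run only the
    top_k best-scoring ones, and mix their outputs by gate probability."""
    # Gate: one score per expert (dot product of token with gate weights)
    scores = [sum(t * w for t, w in zip(token_vec, gw)) for gw in gate_weights]
    probs = softmax(scores)
    # Keep only top_k experts -- the rest are never evaluated
    chosen = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in chosen)
    out = [0.0] * len(token_vec)
    for i in chosen:
        expert_out = experts[i](token_vec)  # only selected experts run
        for d in range(len(out)):
            out[d] += (probs[i] / norm) * expert_out[d]
    return out, chosen
```

With, say, four experts and `top_k=2`, each token activates only half the experts, which is why a 235B-parameter MoE model can run with far fewer active parameters per token.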

Qwen3 models support 119 languages and were trained on an extensive dataset of nearly 36 trillion tokens, including textbooks, code, dialogues, and AI-generated content. Compared to the previous Qwen2 generation, the new models show notable improvement: the flagship Qwen3-235B-A22B outperformed OpenAI's o3-mini and Google's Gemini 2.5 Pro on Codeforces, and led benchmarks such as AIME (mathematical reasoning) and BFCL (tool and function calling).

However, the Qwen3-235B-A22B model is not yet publicly available. The largest open model, Qwen3-32B, also delivers competitive results, outperforming OpenAI's o1 model in coding benchmarks.

Alibaba emphasizes that Qwen3 offers stronger tool-use capabilities, better instruction following, and more advanced structured data generation. The models are already integrated into cloud services like Fireworks AI and Hyperbolic.

Experts note that the development of models like Qwen3 highlights growing competition amid U.S. export restrictions on AI chips to China — and shows that Chinese AI is rapidly closing the gap with closed-source Western systems.