- Published on January 29, 2025
- In AI News
Users can access it by registering for an Alibaba Cloud account and activating the Model Studio service.

Alibaba Cloud has released Qwen2.5-Max, a large-scale Mixture-of-Experts (MoE) language model, and made its API available through its cloud platform. The model outperforms DeepSeek V3, which was released last year and made headlines for its reported training budget of $5.5 million.
“Today marks the Chinese New Year, and while fireworks light up the sky outside, here I am, sitting in front of my computer, writing this post. We’ve finally released Qwen2.5-Max, an MoE model on par with Deepseek-V3, now available on Qwen Chat and via API,” said Binyuan Hui, a researcher at Alibaba Qwen Team.
The model has been pretrained on 20 trillion tokens and further refined using Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) methodologies.
The model’s performance was evaluated against leading proprietary and open-weight models across various benchmarks, including MMLU-Pro, LiveCodeBench, LiveBench, and Arena-Hard. These benchmarks assess knowledge, coding capabilities, general abilities, and human preferences, respectively.
“Qwen2.5-Max outperforms DeepSeek V3 in benchmarks such as Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond, while also demonstrating competitive results in other assessments, including MMLU-Pro,” the company stated in its blog post.
The model’s API, named qwen-max-2025-01-25, is now available on Alibaba Cloud through the Model Studio service. The API is compatible with OpenAI’s API, which makes it easy for developers to integrate the model into their applications.
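Because the API follows the OpenAI chat-completions format, calling it is a matter of pointing an OpenAI-style request at Alibaba Cloud's endpoint. The sketch below builds such a request with only the Python standard library; the base URL is an assumption here (check the Model Studio documentation for the endpoint in your region), while the model name `qwen-max-2025-01-25` comes from the announcement.

```python
import json
import urllib.request

# Assumed endpoint -- verify the actual compatible-mode base URL for your
# region in the Alibaba Cloud Model Studio documentation.
BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"

def build_chat_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for Qwen2.5-Max."""
    body = {
        "model": "qwen-max-2025-01-25",  # API name from the announcement
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("YOUR_API_KEY", "Hello, Qwen!")
# urllib.request.urlopen(req) would send it; omitted here because it
# requires a valid Model Studio API key.
```

An existing OpenAI SDK client can typically be reused by swapping in the base URL and key, which is what "OpenAI-compatible" amounts to in practice.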
Looking ahead, the Qwen Team aims to further enhance Qwen2.5-Max’s capabilities through advanced post-training techniques.
“We are dedicated to enhancing the thinking and reasoning capabilities of large language models through the innovative application of scaled reinforcement learning. This endeavour holds the promise of enabling our models to transcend human intelligence, unlocking the potential to explore uncharted territories of knowledge and understanding,” the company said.
Alibaba recently launched its latest vision-language model, Qwen2.5-VL, which succeeds Qwen2-VL. This model is built to “understand things visually,” including recognising objects, analysing texts, charts, and graphics within images, and acting as a visual agent capable of directing tools.
One of its key features is that the model can also control mobile and computer screens, similar to Anthropic’s Computer Use and OpenAI’s Operator agent.
Siddharth Jindal
Siddharth is a media graduate who loves to explore tech through journalism, putting forward ideas worth pondering in the era of artificial intelligence.