Tongyi Qianwen (Qwen)

Top-performing foundation models from Alibaba Cloud

About Qwen

Alibaba Cloud provides the Tongyi Qianwen (Qwen) model series to the open-source community. This series includes Qwen, the large language model (LLM); Qwen-VL, the large vision language model; Qwen-Audio, the large audio language model; Qwen-Coder, the coding model; Qwen-Math, the mathematical model; and QwQ-32B, the reasoning model. You can try Qwen models and easily customize and deploy them in Alibaba Cloud Model Studio.

The latest Qwen2.5 models are pre-trained on our latest large-scale dataset of up to 18 trillion tokens. Compared to Qwen2, Qwen2.5 has acquired significantly more knowledge (MMLU: 85+) and has greatly improved capabilities in coding (HumanEval 85+) and mathematics (MATH 80+). The new models are also significantly better at following instructions, generating long texts, understanding structured data, and generating structured outputs. Qwen2.5 models are generally more resilient to diverse system prompts, enhancing role-play implementation and condition-setting for chatbots.

Qwen2.5-Max, the large-scale MoE model, has been pretrained on over 20 trillion tokens and further post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). Qwen2.5-Coder has been trained on 5.5 trillion tokens of code-related data, delivering competitive performance against larger language models on coding benchmarks. Qwen2.5-Math supports both Chinese and English and incorporates several reasoning methods, including Chain-of-Thought (CoT), Program-of-Thought (PoT), and Tool-Integrated Reasoning (TIR). QwQ-32B leverages reinforcement learning to excel at complex problem-solving tasks such as mathematical reasoning and coding, achieving performance comparable to much larger models.

Leading Performance in Multiple Dimensions

Qwen outperforms other open-source baseline models of similar sizes on a series of benchmark datasets evaluating natural language understanding, mathematical problem-solving, coding, and more.

Easy and Low-Cost Customization

You can deploy Qwen models in PAI-EAS with a few clicks and fine-tune them with data stored on Alibaba Cloud or from external sources to perform industry- or enterprise-specific tasks.

Applications for Generative AI Era

You can leverage Qwen APIs to build generative AI applications for a broad range of scenarios, such as writing, image generation, and audio analysis, to improve work efficiency in your organization and transform the customer experience.
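As a minimal sketch of what such an API call looks like, the snippet below assembles a chat-completion payload for an OpenAI-compatible endpoint. The base URL, model name, and header values are assumptions for illustration; check the Model Studio console for the values that apply to your account and region.

```python
# Sketch: calling a Qwen model through an OpenAI-compatible chat endpoint.
# The base URL and model name are assumptions -- verify them in Model Studio.
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "qwen-plus") -> dict:
    """Assemble the JSON payload for a chat completion call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Draft a product announcement for a new API.")
body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    "https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer $DASHSCOPE_API_KEY",  # replace with your key
    },
)
# urllib.request.urlopen(req) would send the request; omitted in this sketch.
```

The payload shape follows the widely used chat-messages convention, so existing OpenAI-style client libraries can usually be pointed at the compatible endpoint instead of hand-building requests.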

Qwen Model Family

Qwen

Our latest Qwen2.5 models have been pre-trained with high-quality data from a wide range of domains and languages, supporting a context length of up to 128K tokens. These models offer enhanced performance in coding, mathematics, human preference, and other core capabilities such as following instructions and understanding or generating structured data.

Qwen2.5-Max

Qwen2.5-Max is a large-scale Mixture-of-Experts (MoE) model that has been pretrained on over 20 trillion tokens. It demonstrates leading performance on benchmarks such as Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond, compared to models such as DeepSeek V3 and Llama 3.1.

Qwen2.5-Coder

Qwen2.5-Coder is an open-source coding model. It supports a context of up to 128K tokens, covers 92 programming languages, and has achieved remarkable improvements across code-related evaluation tasks, including code generation in single and multiple programming languages, code completion, and code repair.

Qwen2.5-Math

Qwen2.5-Math is our mathematical LLM pre-trained and fine-tuned with synthesized mathematical data. It supports bilingual queries in English and Chinese and excels in Chain-of-Thought (CoT) and Tool-Integrated Reasoning (TIR). Qwen2.5-Math outperforms most 70B math models in various tasks.

Qwen-VL

Qwen-VL is the large vision language model of the Qwen series. It generates content from images, text, and bounding boxes as input. With leading performance verified on multiple evaluation benchmarks, Qwen-VL can perform fine-grained text recognition in both Chinese and English, compare and analyze multiple images, and then create stories, solve math problems, or answer questions.

Qwen-Audio

Qwen-Audio is the large audio language model of the Qwen series. Qwen-Audio accepts text and diverse audio files (human speech, natural sound, music, and songs) as inputs and provides text-based output. Qwen-Audio achieves impressive performance without any task-specific fine-tuning on the test sets of AISHELL-1, CochlScene, ClothoAQA, and VocalSound.

QwQ-32B

QwQ-32B scales Reinforcement Learning (RL) to enhance performance and integrates agent capabilities for critical thinking and adaptive reasoning. With only 32 billion parameters, it matches the performance of DeepSeek-R1 (671B parameters). QwQ-32B is open-weight on Hugging Face and ModelScope under the Apache 2.0 license.

Customer Success Stories
Xin Zhong, IT head of AstraZeneca China
"Working closely with Alibaba Cloud, we managed to harness the benefits of the Qwen LLM and Dedicated Model Studio and vastly improved the efficiency of generating adverse event reports from huge amounts of medical literature. We’re proud that we have pioneered this innovation in the industry. We expect to explore more AI-based innovations with Alibaba Cloud."
Shunichi Taniguchi, Director | Senior Researcher, Lightblue Co., Ltd.
"Regarding language capabilities, we found that Alibaba Cloud’s Tongyi Qianwen (Qwen) not only performed well in English, but also proved to be the best publicly available option for supporting Japanese. We chose Qwen because our LLM’s accuracy significantly improved when fine-tuned with a base model capable of understanding Japanese."
Tina Chen, Chief Digital Officer, Shiseido China
"Selecting Alibaba Cloud Services as our partner was a wise decision rooted in their demonstrated expertise in data analysis, AI services, and language models. Their understanding of applications and services has been pivotal in enhancing our consumer experience and technology innovation."
Kazuya Hodatsu, Representative Director of Axcxept
"Qwen2.5 has enhanced its performance in base Japanese processing, providing it with an edge over other models. Axcxept's proprietary training process has led to the development of a Japanese LLM with the highest level of accuracy."
Susan Gu, General Manager of Haleon Mainland China and Hong Kong
"We are honored to collaborate with Alibaba Cloud in this venture. Haleon's personalized nutrition platform, the good deposit Keyijia, will be able to provide consumers with efficient and personalized health support services empowered by Haleon's proprietary Knowledge Graph and the AI nutrition assistant enabled by Qwen."

What Qwen Can Do

Understand Multimodal Data
You can use Qwen models to build a chat assistant that interacts with users intelligently and understands multimodal data, including text, audio, video, and more.


Chatbot based on Qwen, Qwen-Audio, and Qwen-VL answering questions containing multimodal data
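To show what a multimodal request can look like, the sketch below builds a chat message that combines an image and a question using the OpenAI-style content-list format. Whether your endpoint accepts exactly this shape is an assumption; the image URL is a placeholder. Verify the message schema against the Model Studio API reference.

```python
# Sketch: a multimodal chat message pairing an image with a text question,
# in the OpenAI-style content-list format (assumed; verify for your endpoint).
def build_vision_message(image_url: str, question: str) -> dict:
    """Return a single user message containing an image part and a text part."""
    return {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": question},
        ],
    }

# The URL below is a placeholder for illustration.
msg = build_vision_message(
    "https://example.com/photo.jpg",
    "What objects appear in this picture?",
)
```

A vision-capable model such as Qwen-VL would receive this message in the `messages` array of a chat request alongside any plain-text turns.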
Generate Images
Based on text prompts and input images, Qwen-VL can produce high-quality images in various styles and genres for different industry-specific scenarios.

Qwen-VL producing a cartoon-style human portrait based on prompts
Analyze Images
Qwen-VL learns and analyzes objects and texts in images, and creates new content based on its learning.


Qwen-VL recognizing the objects (the woman and the dog) in the image and their gestures (high five)
Understand and Analyze Audio
Qwen-Audio can accept diverse types of audio (such as human speech, natural sounds, instrumental music, and songs) and text as inputs, understand the audio content, and summarize information such as music genres and emotions of the speaker. It can also use tools to edit the audio files.

Qwen-Audio analyzing the identity and emotions of the speaker and recommending replies
Understand Structured Data
Qwen2.5 understands structured data (such as tables) better. This helps extract insights from structured data, answer user queries, and generate new datasets.


Qwen2.5-72B providing formatted output based on the requirement and input data (table in JSON format)
Generate JSON Code
Qwen2.5 offers improved and more reliable generation of structured outputs, especially in JSON format.

Qwen2.5-72B generating JSON code step by step with explanations as requested
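Even with reliable JSON generation, it is good practice to validate a model's reply before using it downstream. The sketch below shows one common pattern: strip a Markdown code fence if the model wrapped its output in one, then parse. The reply text here is a stand-in, not real model output.

```python
# Sketch: validating a model's JSON reply before consuming it. Models
# sometimes wrap JSON in a Markdown fence, so strip that first.
import json

def parse_json_reply(reply: str) -> dict:
    """Parse a JSON reply, tolerating an optional Markdown code fence."""
    text = reply.strip()
    if text.startswith("```"):
        # Drop an opening fence like ```json and the trailing closing fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(text)

# Stand-in reply for illustration; real output comes from the model.
reply = '```json\n{"name": "Qwen2.5", "context_tokens": 131072}\n```'
data = parse_json_reply(reply)
```

If `json.loads` raises an error, the application can re-prompt the model with the parse failure, which in practice resolves most malformed outputs.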
Generate Long Text
Qwen2.5 significantly improves long text generation, increasing the supported output length from 1K to over 8K tokens.


Qwen2.5-72B writing a report of over 5,000 Chinese characters on the requested subject

Try Qwen Models on Alibaba Cloud Model Studio

Qwen-Agent: Developing AI Agents and Applications in Simple Steps
Qwen-Agent is a framework for developing LLM applications built on the instruction-following, tool-usage, planning, and memory capabilities of Qwen models. It provides various components for LLMs, prompts, and agents. Follow this tutorial to learn how to use the Assistant component to add customized tools and quickly develop an agent that uses them.
Learn More on GitHub
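As a conceptual illustration of what a framework like Qwen-Agent automates, the sketch below shows the core tool-dispatch step: the model proposes a tool call as structured output, the runtime executes the matching tool, and the result would be fed back to the model. All names here are hypothetical; see the Qwen-Agent repository for the real Assistant API.

```python
# Conceptual sketch of the tool-dispatch step in an agent loop. All names
# are hypothetical illustrations, not the Qwen-Agent API.
import json

# Registry of tools the agent may call; keys are the names the model uses.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def run_tool_call(model_reply: str):
    """Dispatch a JSON-encoded tool call like {"tool": ..., "args": {...}}."""
    call = json.loads(model_reply)
    tool = TOOLS[call["tool"]]
    return tool(call["args"])

# Stand-in for a model reply requesting the "add" tool.
result = run_tool_call('{"tool": "add", "args": {"a": 2, "b": 3}}')  # -> 5
```

In a full agent loop, `result` would be appended to the conversation as a tool message so the model can decide whether to call another tool or answer the user; the framework handles that bookkeeping for you.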

Contact Us

Contact Alibaba Cloud AI experts to learn more about the Qwen model family.
