The Assistant API provides developers with capabilities for managing conversation messages and invoking tools. This topic walks you through the basic coding techniques of the Assistant API by building a painting assistant from the ground up.
Typical process
A typical process for developing an assistant is:
Create an assistant: Select a model, provide instructions, and add tools such as code interpreter and function calling.
Create a thread: Initiate a session thread to maintain the continuity of the conversation when a user starts a dialogue.
Send message to thread: Add user messages to the session thread one by one.
Initiate a run: Execute the assistant on the session thread. It will interpret the messages, use appropriate tools or services, generate responses, and deliver the responses to the user.
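The entire flow maps to a small number of SDK calls. The following minimal sketch, assuming the DashScope Python SDK used later in this topic, previews the sequence; the assistant name, instructions, and message content are placeholders:
import dashscope

assistant = dashscope.Assistants.create(model='qwen-max', name='demo-assistant',
                                         instructions='You are a helpful assistant.')  # 1. Create an assistant
thread = dashscope.Threads.create()                                                    # 2. Create a thread
dashscope.Messages.create(thread.id, content='Hello')                                  # 3. Send a message to the thread
run = dashscope.Runs.create(thread.id, assistant_id=assistant.id)                      # 4. Initiate a run
run = dashscope.Runs.wait(run.id, thread_id=thread.id)                                 # Wait for the assistant's response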
Example scenario
Text generation models alone cannot produce images. Instead, a text-to-image model is required to transform text into images. An assistant built with the Assistant API can automatically improve user-provided prompts and use a text-to-image tool to create high-quality images.
Procedure
The following section is a step-by-step guide in Python for the non-streaming output mode. The complete sample code for both streaming and non-streaming outputs in Python and Java is provided at the end of this topic. See Complete sample code.
Step 1: Prepare the development environment
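A minimal preparation, assuming the Python SDK: install the DashScope SDK and configure your API key. DASHSCOPE_API_KEY is the environment variable name conventionally read by the SDK, and the endpoint matches the complete samples at the end of this topic:
# Run in a terminal: pip install -U dashscope
import os
import dashscope

# Read the API key from an environment variable instead of hard-coding it.
dashscope.api_key = os.getenv('DASHSCOPE_API_KEY')
# Use the international endpoint, as in the complete samples below.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'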
Step 2: Create an assistant
After importing the DashScope SDK, create an assistant by calling the create method of the Assistants class. Specify the following key parameters:
Because the text-to-image tool requires strong language understanding, Qwen-Max is selected as the text inference model to enhance the assistant's semantic understanding and text generation capabilities. The details of the assistant, including its name, function description, and instructions, are shown in the sample code. The official Image Generation plug-in is configured for the assistant so that it can automatically generate images based on text descriptions.
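The corresponding call from the complete non-streaming Python sample at the end of this topic is shown below; the name, description, and instructions are illustrative and can be adapted:
painting_assistant = dashscope.Assistants.create(
    model='qwen-max',  # text inference model with strong semantic understanding
    name='Art Maestro',
    description='AI assistant for painting and art knowledge',
    instructions='Provide information on painting techniques, art history, and creative guidance. Use tools for image generation.',
    tools=[
        {'type': 'text_to_image', 'description': 'For creating visual examples'}  # official Image Generation plug-in
    ]
)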
Step 3: Create a thread
In the Assistant API, a thread represents a continuous conversation context. The assistant uses the thread to understand the conversation history and provide more coherent and relevant responses. In the painting assistant scenario, the thread tracks the entire process, including prompt improvement and image generation. This ensures that the process is coherent and traceable.
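In the Python SDK, creating a thread is a single call, as in the complete sample at the end of this topic:
thread = dashscope.Threads.create()
print(thread.id)  # the thread ID is required when adding messages and creating runs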
Step 4: Add message to thread
Input messages from the user are passed through the message object. The Assistant API supports sending multiple messages to a single thread. When creating a message, specify the ID of the target thread and the message content.
The number of tokens a thread can receive is not limited. However, the tokens passed to the model are limited by the maximum input length of the model. For more information, see Commercial Qwen models. In the painting assistant scenario, create a message that sends "Draw a picture of a ragdoll cat." to the thread.
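The following line, taken from the complete sample at the end of this topic, adds the user message to the thread by its ID:
message = dashscope.Messages.create(thread.id, content='Draw a picture of a ragdoll cat.')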
Step 5: Create and execute a run
After a message is added to a specific thread, initiate a run to activate the assistant. The assistant intelligently uses the specified model and plug-ins to answer the question based on the context. Then, the assistant inserts the responses into the message sequence of the thread. Create the run, wait for it to complete, and then retrieve the assistant's response from the thread, as shown below.
Note: Because multiple users may use the model simultaneously, processing time may be extended. We recommend that you wait until the run status is completed before proceeding.
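In the non-streaming mode, create the run, wait for it to finish, and then list the messages in the thread to read the response. This excerpt follows the complete Python sample at the end of this topic; the terminal status string 'completed' is assumed from the Java sample's Run.Status.COMPLETED:
run = dashscope.Runs.create(thread.id, assistant_id=painting_assistant.id)
run = dashscope.Runs.wait(run.id, thread_id=thread.id)  # blocks until the run reaches a terminal status
if run.status == 'completed':
    messages = dashscope.Messages.list(thread.id)
    print(messages.data[0])  # the most recent message is the assistant's response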
Complete sample code
Non-streaming output
Python
import dashscope
from http import HTTPStatus
import json
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
def check_status(component, operation):
if component.status_code == HTTPStatus.OK:
print(f"{operation} succeeded.")
return True
else:
print(f"{operation} failed. Status code: {component.status_code}, Error code: {component.code}, Error message: {component.message}")
return False
# 1. Create a painting assistant
painting_assistant = dashscope.Assistants.create(
model='qwen-max',
name='Art Maestro',
description='AI assistant for painting and art knowledge',
instructions='''Provide information on painting techniques, art history, and creative guidance.
Use tools for image generation.''',
tools=[
{'type': 'text_to_image', 'description': 'For creating visual examples'}
]
)
if not check_status(painting_assistant, "Assistant creation"):
exit()
# 2. Create a new thread
thread = dashscope.Threads.create()
if not check_status(thread, "Thread creation"):
exit()
# 3. Send a message to the thread
message = dashscope.Messages.create(thread.id, content='Draw a picture of a ragdoll cat.')
if not check_status(message, "Message creation"):
exit()
# 4. Run the assistant on the thread
run = dashscope.Runs.create(thread.id, assistant_id=painting_assistant.id)
if not check_status(run, "Run creation"):
exit()
# 5. Wait for the run to complete
print("Waiting for the assistant to process the request...")
run = dashscope.Runs.wait(run.id, thread_id=thread.id)
if check_status(run, "Run completion"):
print(f"Run completed, status: {run.status}")
else:
print("Run not completed.")
exit()
# 6. Retrieve and display the assistant's response
messages = dashscope.Messages.list(thread.id)
if check_status(messages, "Message retrieval"):
if messages.data:
# Display the content of the last message (assistant's response)
last_message = messages.data[0]
print("\nAssistant's response:")
print(json.dumps(last_message, ensure_ascii=False, default=lambda o: o.__dict__, sort_keys=True, indent=4))
else:
print("No messages found in the thread.")
else:
print("Failed to retrieve the assistant's response.")
# Note: This code creates a painting assistant, initiates a conversation about drawing a ragdoll cat,
# and displays the assistant's response.
Java
package com.example;
import java.util.Arrays;
import com.alibaba.dashscope.protocol.Protocol;
import com.alibaba.dashscope.assistants.Assistant;
import com.alibaba.dashscope.assistants.AssistantParam;
import com.alibaba.dashscope.assistants.Assistants;
import com.alibaba.dashscope.common.GeneralListParam;
import com.alibaba.dashscope.common.ListResult;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.InvalidateParameter;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.threads.AssistantThread;
import com.alibaba.dashscope.threads.ThreadParam;
import com.alibaba.dashscope.threads.Threads;
import com.alibaba.dashscope.threads.messages.Messages;
import com.alibaba.dashscope.threads.messages.TextMessageParam;
import com.alibaba.dashscope.threads.messages.ThreadMessage;
import com.alibaba.dashscope.threads.runs.Run;
import com.alibaba.dashscope.threads.runs.RunParam;
import com.alibaba.dashscope.threads.runs.Runs;
import com.alibaba.dashscope.tools.T2Image.Text2Image;
import com.alibaba.dashscope.tools.search.ToolQuarkSearch;
import com.alibaba.dashscope.utils.Constants;
public class PaintingAssistant {
static {
Constants.baseHttpApiUrl="https://dashscope-intl.aliyuncs.com/api/v1";
}
private static boolean checkStatus(Object response, String operation) {
if (response != null) {
System.out.println(operation + " succeeded.");
return true;
} else {
System.out.println(operation + " failed.");
return false;
}
}
public static void main(String[] args) {
try {
// 1. Create a painting assistant
Assistants assistants = new Assistants();
AssistantParam assistantParam = AssistantParam.builder()
.model("qwen-max")
.name("Art Maestro")
.description("AI assistant for painting and art knowledge")
.instructions("Provide information on painting techniques, art history, and creative guidance. Use tools for research and image generation.")
.tools(Arrays.asList(ToolQuarkSearch.builder().build(),Text2Image.builder().build()))
.build();
Assistant paintingAssistant = assistants.create(assistantParam);
if (!checkStatus(paintingAssistant, "Assistant creation")) {
System.exit(1);
}
// 2. Create a new thread
Threads threads = new Threads();
AssistantThread thread = threads.create(ThreadParam.builder().build());
if (!checkStatus(thread, "Thread creation")) {
System.exit(1);
}
// 3. Send a message to the thread
Messages messages = new Messages();
ThreadMessage message = messages.create(thread.getId(),
TextMessageParam.builder()
.role("user")
.content("Draw a picture of a ragdoll cat.")
.build());
if (!checkStatus(message, "Message creation")) {
System.exit(1);
}
// 4. Run the assistant on the thread
Runs runs = new Runs();
RunParam runParam = RunParam.builder().assistantId(paintingAssistant.getId()).build();
Run run = runs.create(thread.getId(), runParam);
if (!checkStatus(run, "Run creation")) {
System.exit(1);
}
// 5. Wait for the run to complete
System.out.println("Waiting for the assistant to process the request...");
while (true) {
if (run.getStatus().equals(Run.Status.COMPLETED) ||
run.getStatus().equals(Run.Status.FAILED) ||
run.getStatus().equals(Run.Status.CANCELLED) ||
run.getStatus().equals(Run.Status.REQUIRES_ACTION) ||
run.getStatus().equals(Run.Status.EXPIRED)) {
break;
}
Thread.sleep(1000);
run = runs.retrieve(thread.getId(), run.getId());
}
if (checkStatus(run, "Run completion")) {
System.out.println("Run completed, status: " + run.getStatus());
} else {
System.out.println("Run not completed.");
System.exit(1);
}
// 6. Retrieve and display the assistant's response
ListResult<ThreadMessage> messagesList = messages.list(thread.getId(), GeneralListParam.builder().build());
if (checkStatus(messagesList, "Message retrieval")) {
if (!messagesList.getData().isEmpty()) {
// Display the last message (assistant's response)
ThreadMessage lastMessage = messagesList.getData().get(0);
System.out.println("\nAssistant's response:");
System.out.println(lastMessage.getContent());
} else {
System.out.println("No messages found in the thread.");
}
} else {
System.out.println("Failed to retrieve the assistant's response.");
}
} catch (ApiException | NoApiKeyException | InputRequiredException | InvalidateParameter | InterruptedException e) {
e.printStackTrace();
}
}
}
Streaming output
Currently, the Java SDK does not support image generation tools in streaming output mode.
Python
import dashscope
from http import HTTPStatus
import json
import sys
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
def check_status(response, operation):
if response.status_code == HTTPStatus.OK:
print(f"{operation} succeeded.")
return True
else:
print(f"{operation} failed. Status code: {response.status_code}, Error code: {response.code}, Error message: {response.message}")
sys.exit(response.status_code)
# 1. Create a painting assistant
def create_painting_assistant():
return dashscope.Assistants.create(
model='qwen-max',
name='Art Maestro',
description='AI assistant for painting and art knowledge',
instructions='''Provide information on painting techniques, art history, and creative guidance.
Use tools for research and image generation.''',
tools=[
{'type': 'text_to_image', 'description': 'For creating visual examples'}
]
)
if __name__ == '__main__':
# Create a painting assistant
painting_assistant = create_painting_assistant()
print(painting_assistant)
check_status(painting_assistant, "Assistant creation")
# Create a new thread with an initial message
thread = dashscope.Threads.create(
messages=[{
'role': 'user',
'content': 'Draw a picture of a ragdoll cat.'
}]
)
print(thread)
check_status(thread, "Thread creation")
# Create a streaming run
run_iterator = dashscope.Runs.create(
thread.id,
assistant_id=painting_assistant.id,
stream=True
)
# Iterate over events and messages
print("Processing request...")
for event, msg in run_iterator:
print(event)
print(msg)
# Retrieve and display the assistant's response
messages = dashscope.Messages.list(thread.id)
check_status(messages, "Message retrieval")
print("\nAssistant's response:")
print(json.dumps(messages, ensure_ascii=False, default=lambda o: o.__dict__, sort_keys=True, indent=4))
# Note: This script creates a streaming painting assistant, initiates a conversation about drawing a ragdoll cat,
# and displays the assistant's response in real-time.
What to do next
For more information about how to configure the parameters of the Assistant API, see Assistant API.