Community Blog

Deploying Alibaba Cloud Large Language Model (Tongyi Qianwen) with Graphical and Command Line Interfaces

In this blog, we will deploy the Alibaba Cloud large language model (Tongyi Qianwen-7B) with graphical and command line interfaces.

Tongyi Qianwen-7B (Qwen-7B) is a 7-billion-parameter model in the Tongyi Qianwen large model series developed by Alibaba Cloud.

Qwen-7B is a Transformer-based large language model (LLM) trained on an extremely large pre-training corpus. The pre-training data is diverse and covers a wide range of domains, including large amounts of web text, professional books, and code.

For a detailed description of the model, please visit:

https://www.modelscope.cn/models/qwen/Qwen-7B-Chat/files

In this article, we will explore two approaches to interacting with the Tongyi Qianwen-7B model: one using a Graphical User Interface (GUI) and the other a Command Line Interface (CLI).

Method-1: GUI Based Implementation of the Model on Alibaba Cloud:

Please note:

  1. As of the time of writing this article, the PAI service is not available in all regions; we will use the Singapore region.
  2. An Alibaba Cloud account with the PAI service activated is a prerequisite.
  3. It is necessary to create a workspace within PAI. In this guide, the workspace is referred to as "JawadML," but your workspace name might differ.

Steps:

1.  On the PAI platform, select your workspace, and under Model Deployment, select Elastic Algorithm Service (EAS), then click Deploy Service.


2.  We will use the pre-trained model from ModelScope, with the following configurations for the image and environment variables.

ModelScope is an open-source Model-as-a-Service (MaaS) platform, developed by Alibaba Cloud, that offers hundreds of AI models, including large pre-trained models, to global developers and researchers.


Alibaba Cloud PAI provides various options for hardware that can run the machine learning models for inference. In this case, we may use a GPU with sufficient memory. Select the following configurations and click Deploy.


This will start deploying the pre-trained Qwen-7B model on the selected hardware. The process may take some time, so it is advisable to monitor the deployment events as the service comes up.

3.  Checking the creation process

To check the deployment events and make sure that everything is fine, click on the Service ID/Name (in our case it is qwenmodel) and then select Deployment Events.


Similarly, Service Logs can be used to view the package-installation logs, as shown below:


Once the model is successfully deployed, the service status will change to Running.


To use the model for inference, click View Web App. This opens the model's GUI, which can be used to test text generation.
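Besides the web app, an EAS service can also be invoked over HTTP using the endpoint URL and token shown in the EAS console. The sketch below only assembles such a request; the URL, token, and payload schema are placeholder assumptions and may differ depending on the service image:

```python
import json

def build_eas_request(service_url: str, token: str, prompt: str):
    """Assemble the headers and JSON body for an HTTP call to an EAS service.

    service_url and token are placeholders; copy the real values from the
    EAS console. The {"prompt": ...} payload schema is an assumption and
    may vary between service images.
    """
    headers = {
        "Authorization": token,
        "Content-Type": "application/json",
    }
    body = json.dumps({"prompt": prompt})
    return service_url, headers, body

# Example with placeholder values (replace with the real endpoint and token):
url, headers, body = build_eas_request(
    "https://<your-endpoint>.pai-eas.aliyuncs.com/api/predict/qwenmodel",
    "<your-token>",
    "Write a Python function that reverses a string",
)
```

The returned pieces can then be sent with any HTTP client (for example `requests.post(url, headers=headers, data=body)`).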

4.  Testing: using Qwen-7B to generate Python code


It is important to note that in GUI mode, the model processes the full input text and then returns the entire response at once.
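The difference between returning the whole answer at once (GUI mode) and streaming it piece by piece (as the CLI demo later does) can be illustrated with a toy generator. No model is involved here; `fake_model` is purely illustrative:

```python
def fake_model(prompt):
    # Stand-in for the LLM: yields one word at a time, like a token stream
    for word in ("You", "asked:", prompt):
        yield word

# Streaming (CLI-style): consume and display each piece as it arrives
for token in fake_model("hello"):
    print(token, end=" ", flush=True)
print()

# Full-text (GUI-style): wait for everything, then show it once
answer = " ".join(fake_model("hello"))
print(answer)  # → You asked: hello
```

Streaming lets the user start reading while generation is still in progress, which is why the CLI demo feels more responsive for long answers.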

5.  Upgrading the hardware to improve performance (Optional)

The hardware configuration can be changed if needed. Clicking Update Service opens the configuration window.


The following figure shows how to change the hardware configuration of the inference engine. Once the configuration is changed, click Deploy.


The configuration change takes some time. Once complete, the service status will change from Upgrading to Running.


Method-2: CLI Based Implementation of the Model on Alibaba Cloud:

We can run Qwen-7B in CLI mode, which streams the response as running text. For this purpose, we can use either PAI Interactive Modeling (DSW) or an ECS instance. We will discuss the DSW-based implementation.

1.  On PAI, select Interactive Modeling (DSW) and then click Create Instance. This opens the configuration window.


For the configuration, select the proper GPU specifications and image type as shown below.


Click Next to go to the final page, then click Create Instance to start spinning up the DSW instance.


The initial status of the instance will be Preparing Resources.

2.  After the instance status changes to Running, click the Turn on option on the right side. It will launch JupyterLab.


3.  We will not use JupyterLab here. Instead, we will go to the terminal to download the model code from GitHub for deployment.


It is best to create a Python virtual environment first. Once the virtual environment is created, activate it and run the following commands in the terminal:
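That environment-creation step can also be sketched with Python's standard-library `venv` module (the directory name `qwen-env` is an arbitrary choice; in the terminal you would typically just run `python3 -m venv qwen-env` instead, which also installs pip):

```python
import venv
from pathlib import Path

# Create an isolated environment for the Qwen demo.
# with_pip=False keeps this sketch fast; `python3 -m venv` includes pip by default.
env_dir = Path("qwen-env")
venv.create(env_dir, with_pip=False)

# The activate script lives in bin/ on Linux/macOS and Scripts/ on Windows
activate = env_dir / "bin" / "activate"
if not activate.exists():
    activate = env_dir / "Scripts" / "activate"
print(activate.exists())  # → True
```

After sourcing the activate script in the terminal, `pip install` targets only this environment, keeping the Qwen dependencies isolated from the system Python.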

a) git clone https://github.com/QwenLM/Qwen
b) cd Qwen/
c) pip install -r requirements.txt
d) python3 cli_demo.py


Please grab a cup of coffee and wait patiently for the installation to complete.


Once completed, you will see the command-line prompt:


4.  Testing the model:

To test the model, type the text directly at the CLI interface. In my case, I used the prompt: how to create python environment for installation.

As shown below, it produces the Python code for creating and activating a virtual environment.


Try different questions and enjoy chatting with Qwen. 😀

JwdShah

