A large language model (LLM) is a neural network language model with billions or even hundreds of billions of parameters, such as Generative Pre-Trained Transformer 3 (GPT-3), GPT-4, Pathways Language Model (PaLM), and PaLM 2. When you need to process large amounts of natural language data or build complex language understanding systems, you can expose an LLM as an inference service and call its APIs to integrate advanced natural language processing (NLP) capabilities such as text classification, sentiment analysis, and machine translation into your applications. Because the LLM runs in the cloud, the LLM-as-a-service mode frees you from high infrastructure costs, lets you respond quickly to market changes, and lets you scale the service at any time to cope with spikes in user requests, which improves operational efficiency.
Prerequisites
The Model Service Mesh (hereinafter referred to as ModelMesh) feature is enabled in your Service Mesh (ASM) instance and the ASM environment is configured. For more information, see Step 1 and Step 2 in Use ModelMesh to roll out a multi-model inference service.
You have learned how to use ModelMesh to create custom model serving runtimes. For more information, see Use ModelMesh to create a custom model serving runtime.
Step 1: Build a custom runtime
Build a custom runtime to serve the Hugging Face LLM with the prompt tuning configuration. In this example, the default values point to a pre-built custom runtime image and a pre-built prompt tuning configuration.
Implement a class that inherits from the MLModel class of MLServer.
The peft_model_server.py file contains all the code for serving the Hugging Face LLM with the prompt tuning configuration. The _load_model function in the file loads the pretrained LLM together with the trained PEFT prompt tuning configuration. The _load_model function also defines a tokenizer that encodes and decodes raw string inputs from inference requests, so that users do not need to preprocess their inputs into tensor bytes.
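The following is a minimal sketch of what the model class in peft_model_server.py can look like; it is not the exact code in the pre-built image. The PEFT checkpoint identifier and the max_new_tokens value are illustrative assumptions, and the decode_args decorator from MLServer is used here to map the raw string input named content to the predict method.

from typing import List

from mlserver import MLModel
from mlserver.codecs import decode_args
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

class PeftModelServer(MLModel):
    async def load(self) -> bool:
        self._load_model()
        self.ready = True
        return self.ready

    def _load_model(self):
        # Assumed identifier: replace it with your own trained PEFT prompt tuning checkpoint.
        peft_model_id = "your-org/your-prompt-tuning-checkpoint"
        config = PeftConfig.from_pretrained(peft_model_id)
        # The tokenizer encodes raw request strings and decodes generated token IDs,
        # so callers do not need to preprocess their inputs into tensor bytes.
        self.tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
        base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
        self.model = PeftModel.from_pretrained(base_model, peft_model_id)

    @decode_args
    async def predict(self, content: List[str]) -> List[str]:
        # Tokenize the raw input string, generate a completion, and decode it back to text.
        inputs = self.tokenizer(content[0], return_tensors="pt")
        outputs = self.model.generate(input_ids=inputs["input_ids"], max_new_tokens=16)
        return [self.tokenizer.decode(outputs[0], skip_special_tokens=True)]

The input name content matches the name field used in the inference request later in this topic.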
Build a Docker image.
After the model class is implemented, you need to package its dependencies, including MLServer, into an image that is supported as a ServingRuntime resource. You can refer to a Dockerfile such as the following to build the image:
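The exact Dockerfile depends on your build environment. The following is a minimal sketch under the assumption that peft_model_server.py sits next to the Dockerfile; the base image, the dependency list, and the paths are placeholders to adapt.

FROM python:3.10-slim

# Install MLServer and the libraries used by peft_model_server.py.
RUN pip install --no-cache-dir mlserver peft transformers torch

# Copy the model class implemented in the previous step into the image.
COPY peft_model_server.py /opt/peft_model_server.py
# Make the model class importable by MLServer.
ENV PYTHONPATH=/opt

# Start MLServer. MLSERVER_MODELS_DIR and the other runtime settings are
# provided as environment variables by the ServingRuntime resource below.
CMD mlserver start ${MLSERVER_MODELS_DIR}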
Create a new ServingRuntime resource.
Create a new ServingRuntime resource by using content similar to the following and point it to the image you created.
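The following sample-runtime.yaml is a minimal sketch that follows the custom MLServer runtime pattern; it is not a definitive specification. The namespace, image name, ports, environment variables, and resource settings are assumptions that you should adapt to your environment.

apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: peft-model-server
  namespace: modelmesh-serving
spec:
  supportedModelFormats:
    - name: peft-model
      version: "1"
      autoSelect: true
  multiModel: true
  grpcDataEndpoint: "port:8001"
  grpcEndpoint: "port:8085"
  containers:
    - name: mlserver
      # The image built in the previous step.
      image: "<your-registry>/peft-model-server:latest"
      env:
        - name: MLSERVER_MODELS_DIR
          value: "/models/_mlserver_models/"
        - name: MLSERVER_GRPC_PORT
          value: "8001"
        - name: MLSERVER_HTTP_PORT
          value: "8002"
        - name: MLSERVER_LOAD_MODELS_AT_STARTUP
          value: "false"
      resources:
        requests:
          cpu: 500m
          memory: 4Gi
        limits:
          cpu: "2"
          memory: 8Gi
  builtInAdapter:
    serverType: mlserver
    runtimeManagementPort: 8001
    memBufferBytes: 134217728
    modelLoadingTimeoutMillis: 90000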
Run the following command to deploy the ServingRuntime resource:
kubectl apply -f sample-runtime.yaml
After you create the ServingRuntime resource, you can see the new custom runtime in your ModelMesh deployment.
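For example, you can run the following command to list ServingRuntime resources and confirm that the new runtime is registered. The modelmesh-serving namespace is an assumption; use the namespace in which ModelMesh is deployed.

kubectl get servingruntimes -n modelmesh-serving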
Step 2: Deploy an LLM service
To deploy a model by using the newly created runtime, you must create an InferenceService resource to serve the model. This resource is the main interface used by KServe and ModelMesh to manage models. It represents the logical endpoint of the model for serving inferences.
Create an InferenceService resource to serve the model by using content similar to the following:
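The following YAML file is a minimal sketch. The namespace and the storage settings are assumptions; adjust the storage section so that it points to the location of your prompt tuning configuration in the storage that ModelMesh is configured to use.

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: peft-demo
  namespace: modelmesh-serving
  annotations:
    serving.kserve.io/deploymentMode: ModelMesh
spec:
  predictor:
    model:
      modelFormat:
        name: peft-model
      runtime: peft-model-server
      storage:
        # Assumed storage settings: key refers to an entry in the storage-config
        # secret, and path points to the model artifacts in that storage.
        key: localMinIO
        path: peft/prompt-tuning-config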
In the YAML file, the InferenceService resource is named peft-demo and its model format is declared as peft-model, which is the same format as the example custom runtime created in the previous step. An optional field runtime is also passed, explicitly instructing ModelMesh to use the peft-model-server runtime to deploy this model.
Run the following command to deploy the InferenceService resource:
kubectl apply -f ${Name of the YAML file}.yaml
Step 3: Perform an inference
Run the curl command to send an inference request to the LLM service deployed in the previous step.
MODEL_NAME="peft-demo"
ASM_GW_IP="IP address of the ingress gateway"
curl -X POST -k http://${ASM_GW_IP}:8008/v2/models/${MODEL_NAME}/infer -d @./input.json
The input.json file in the curl command contains the request data:
{
  "inputs": [
    {
      "name": "content",
      "shape": [1],
      "datatype": "BYTES",
      "contents": {"bytes_contents": ["RXZlcnkgZGF5IGlzIGEgbmV3IGJpbm5pbmcsIGZpbGxlZCB3aXRoIG9wdGlvbnBpZW5pbmcgYW5kIGhvcGU="]}
    }
  ]
}
The value of bytes_contents is the Base64-encoded content of the string "Every day is a new beginning, filled with opportunities and hope".
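You can generate such a value from a raw string with the base64 utility, as shown in the following sketch. The exact flags vary by platform; for example, GNU base64 wraps long output unless you pass -w 0.

echo -n "Your request text" | base64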
The following code block shows the JSON response:
{
  "modelName": "peft-demo__isvc-5c5315c302",
  "outputs": [
    {
      "name": "output-0",
      "datatype": "BYTES",
      "shape": ["1", "1"],
      "parameters": {
        "content_type": {
          "stringParam": "str"
        }
      },
      "contents": {
        "bytesContents": [
          "VHdlZXQgdGV4dCA6IEV2ZXJ5IGRheSBpcyBhIG5ldyBiaW5uaW5nLCBmaWxsZWQgd2l0aCBvcHRpb25waWVuaW5nIGFuZCBob3BlIExhYmVsIDogbm8gY29tcGxhaW50"
        ]
      }
    }
  ]
}
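To check the result locally, you can decode the bytesContents value with the base64 utility. The --decode flag is used here; the exact flag name can vary by platform.

echo "VHdlZXQgdGV4dCA6IEV2ZXJ5IGRheSBpcyBhIG5ldyBiaW5uaW5nLCBmaWxsZWQgd2l0aCBvcHRpb25waWVuaW5nIGFuZCBob3BlIExhYmVsIDogbm8gY29tcGxhaW50" | base64 --decode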
The following code block shows the Base64-decoded content of bytesContents. It indicates that the inference request was performed on the LLM service as expected.
Tweet text : Every day is a new binning, filled with optionpiening and hope Label : no complaint