This topic describes how to update data by calling an API operation.
1. Create a table
You can create a table and import full data from an Object Storage Service (OSS), MaxCompute, or Data Lake Formation (DLF) data source into the table.

For more information about how to create a table, see the table creation topic for your data source.
After the table is configured, wait until the status of the table becomes In Use. Then, you can update data in the table by calling an API operation.

2. Configure the public access whitelist
If you access an OpenSearch Vector Search Edition instance from the virtual private cloud (VPC) in which the instance resides by using the same vSwitch, skip this step.
For more information about how to access an OpenSearch Vector Search Edition instance from an on-premises environment or the Internet, see Configure the public access whitelist.
3. Push data to the table
The following sample code provides an example of how to use the SDK for Python to push data to a table.
Add dependencies:
pip install alibabacloud-ha3engine-vector
Demo code for pushing data:
# -*- coding: utf-8 -*-
from alibabacloud_ha3engine_vector import models, client
from Tea.exceptions import TeaException, RetryError

config = models.Config(
    endpoint="<API endpoint>",        # The API endpoint. You can view it in the API Endpoint section of the Instance Details page. You must remove the http:// prefix.
    instance_id="<Instance ID>",      # The instance ID. You can view it in the upper-left corner of the Instance Details page. Example: ha-cn-i7*****605.
    protocol="http",
    access_user_name="<Username>",    # The username. You can view it in the API Endpoint section of the Instance Details page.
    access_pass_word="<Password>"     # The password. You can view it in the API Endpoint section of the Instance Details page.
)

# Initialize the engine client.
ha3EngineClient = client.Client(config)

def push():
    # The name of the table to which the document is pushed. Format: <Instance ID>_<Table name>.
    tableName = "<instance_id>_<table_name>"
    try:
        # Add a document.
        # If the document already exists, the existing document is deleted before the specified document is added.
        # =====================================================
        # Update the content of the document.
        add2DocumentFields = {
            "id": 1,                      # The primary key field. The value is of the INT type.
            "cate_id": "123",             # A single-value field. The value is of the STRING type.
            "vector": "a\x1Db\x1Dc\x1Dd"  # A multi-value field. The value is of the STRING type, and the values are separated by \x1D.
        }
        # Add the document content to an add2Document structure.
        add2Document = {
            "fields": add2DocumentFields,
            "cmd": "add"                  # The add command indicates that the document is added.
        }
        optionsHeaders = {}
        # The outer structure that is used to push document data. You can add one or more document operations to it.
        documentArrayList = []
        documentArrayList.append(add2Document)
        pushDocumentsRequest = models.PushDocumentsRequest(optionsHeaders, documentArrayList)
        # The primary key field of the documents to be pushed.
        pkField = "id"
        # Use the default runtime parameters for the request.
        response = ha3EngineClient.push_documents(tableName, pkField, pushDocumentsRequest)
        print(response.body)
    except TeaException as e:
        print(f"send request with TeaException : {e}")
    except RetryError as e:
        print(f"send request with Connection Exception : {e}")

if __name__ == "__main__":
    push()

Note:
If you use SDK for Python, you must remove the http:// prefix when you specify the endpoint parameter.
The value of the tableName parameter must be in the following format: <Instance ID>_<Table name>. Example: ha-cn-uqm3e6y1k04_kevintest.

If {"status": "OK", "code": 200} is returned in the response, the request is successful.
4. Verify data
You can verify the data by performing a query test based on the primary key or a vector.
For more information about the query syntax, see Primary key-based query and Vector-based query.
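For a quick check from the same Python script, the following minimal sketch runs a vector query with the engine client that is initialized in step 3. The QueryRequest field names (table_name, vector, top_k, include_vector) and the query method are assumptions based on the SDK models; verify them against the query syntax topics and your SDK version.
# Minimal vector query sketch. The field names below are assumptions based on the
# SDK models; verify them against the SDK reference for your version.
queryRequest = models.QueryRequest(
    table_name="<table_name>",    # The table name. For queries, the instance ID prefix is usually not required.
    vector=[0.1, 0.2, 0.3, 0.4],  # The query vector. Its dimension must match the vector field of the table.
    top_k=10,                     # The number of nearest neighbors to return.
    include_vector=True           # Specifies whether to return vector values in the results.
)
response = ha3EngineClient.query(queryRequest)
print(response.body)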
SDK references
For more information about SDKs for other programming languages, see Query data.