We recommend using ECS for the backend and front end. If the user prefers an open-source LLM (Large Language Model), it can be hosted on GPU-accelerated ECS or on Platform for AI (Artificial Intelligence). This tutorial covers the entire infrastructure and the retriever, but not the LLM itself; we assume the user already has an LLM and its API key. The data will be stored in AnalyticDB for PostgreSQL (ADBPG). The high-level architecture is shown below.
1. Log in to the Alibaba Cloud Console and create an instance of AnalyticDB for PostgreSQL.
2. We chose the following configuration for testing purposes:
3. Create an account to connect to the database.
4. We need to enable the "Vector Engine Optimization" to use the vector database.
5. Configure an IP address whitelist. We recommend using ECS, where the backend and UI will be installed, and adding its address to the whitelist.
6. Prepare ECS
1. We assume the user has already logged in to the Alibaba Cloud Console (see section 1.1).
2. Create an ECS instance. We recommend using ecs.g7.2xlarge with the following parameters for testing purposes:
3. Connect to the ECS instance through SSH (a minimal example is shown below).
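A minimal connection sketch, assuming a Linux image with key-based root login (the key path and IP address are placeholders to replace with your own values):
# Connect to the ECS instance over SSH
ssh -i ~/.ssh/id_rsa root@<ecs-public-ip>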
Install virtualenv with the pip install virtualenv command, then create a virtual environment with the virtualenv env_name command. We recommend replacing env_name with a name of your own choosing. Activate the environment with the source env_name/bin/activate command, and deactivate it later with the deactivate command.
This app uses Python 3.10 and Poetry for dependency management.
Install Python 3.10 onto the machine if it is not already installed. Depending on the system, Python can be downloaded from the official Python website or installed with a package manager such as Brew or apt. Activate the virtual environment prepared in section 2.2.2.
1. Clone the repository from GitHub: git clone https://github.com/openai/chatgpt-retrieval-plugin.git
2. Navigate to the cloned repository directory: cd /path/to/chatgpt-retrieval-plugin
3. Install poetry: pip install poetry
4. Create a new virtual environment that uses Python 3.10:
poetry env use python3.10
poetry shell
5. Install app dependencies using poetry: poetry install
Note: If you add dependencies to the pyproject.toml file, make sure to run poetry lock and poetry install.
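As a quick illustrative sketch of that workflow (the dependency itself would be added by editing pyproject.toml, which is not shown here):
# After editing pyproject.toml, refresh the lock file and install the new dependencies
poetry lock
poetry install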
The API requires the following environment variables to work:
| Name | Required | Description |
| --- | --- | --- |
| DATASTORE | Yes | Specifies the vector database provider used to store and query embeddings. |
| BEARER_TOKEN | Yes | A secret token required to authenticate API requests. It can be generated using any tool or method the user prefers (such as jwt.io). |
| LLM_API_KEY | Yes | The API key of the LLM that will be used. |
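The bearer token can be any secret string; one illustrative way (not the only option) to generate a random token from the command line is:
# Generate a random 32-byte hex string to use as the bearer token
openssl rand -hex 32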
You need to set the requisite environment variables with the export command to run the API locally:
export DATASTORE=<datastore>
export BEARER_TOKEN=<bearer_token>
export LLM_API_KEY=<llm_api_key>
export PG_HOST=<dbhost>
export PG_PORT=5432
export PG_DATABASE=<db>
export PG_USER=<dbuser>
export PG_PASSWORD=<dbuser-password>
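For AnalyticDB for PostgreSQL, the datastore value and host typically look like the sketch below. These are illustrative placeholders: analyticdb is the provider name the retrieval plugin uses for ADBPG, and the host is your own instance's endpoint.
# Illustrative values for an ADBPG-backed deployment (replace placeholders with your own)
export DATASTORE=analyticdb
export PG_HOST=gp-xxxxxxxx.gpdb.rds.aliyuncs.com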
The variables above can be stored in the global environment via the following instructions: run nano ~/.bashrc to open the bashrc file in the nano text editor, and add a line of the form export VARIABLE_NAME=value for each variable. Replace VARIABLE_NAME with the variable's name and value with the value you want to assign to it. Then run source ~/.bashrc in the terminal to reload the bash file.
Start the API with: poetry run start
Append docs to the URL shown in the terminal and open it in a browser to access the API documentation and try out the endpoints (i.e., http://0.0.0.0:8000/docs). Make sure to enter the correct bearer token and test the API endpoints. An example request from the command line is sketched below.
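A quick sketch of calling an endpoint from the command line, assuming the plugin's /query endpoint and its default request format (replace the token and query text with your own):
# Query the retrieval API with a bearer token
curl -X POST http://0.0.0.0:8000/query \
  -H "Authorization: Bearer <bearer_token>" \
  -H "Content-Type: application/json" \
  -d '{"queries": [{"query": "What is AnalyticDB for PostgreSQL?", "top_k": 3}]}'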
Note: If new dependencies are added to the pyproject.toml file, run poetry lock and poetry install to update the lock file and install the new dependencies.
The scripts folder contains scripts built for upserting or processing text documents from various data sources, including JSON files, JSONL files, and zip files. These scripts use the plugin's upsert utility functions, which convert the documents into plain text and divide them into chunks before uploading them to the vector database along with their metadata. Each script folder has a README file outlining how to use the script and the required parameters. It is also possible to use the services.pii_detection
module to screen the documents for personally identifiable information (PII) and exclude any documents that contain it to avoid unintentionally uploading sensitive or private data to the vector database.
Furthermore, the services.extract_metadata module can be used to extract metadata from the document text and enrich the document metadata. It is worth noting that if the user uses incoming webhooks to synchronize data continuously, a backfill should be run after setting them up to ensure no data is missed.
The following scripts are available: process_json, process_jsonl, and process_zip.
All three types of scripts accept custom metadata as a JSON string and provide flags to screen for PII and extract metadata; an example invocation is sketched below.
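A minimal sketch of such an invocation, assuming the process_json script and flag names from the plugin repository and a hypothetical input file (adjust the path and metadata to your data):
# Upsert documents from a JSON file with custom metadata, PII screening, and metadata extraction
python scripts/process_json/process_json.py \
  --filepath scripts/process_json/example_docs.json \
  --custom_metadata '{"source": "alibaba-cloud-docs"}' \
  --screen_for_pii True \
  --extract_metadata True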
# Upload prepared zipped *.md file to the /llm-retrieval-plugin/scripts/process_zip
source env_name/bin/activate
poetry shell
python scripts/process_zip/process_zip.py --filepath scripts/process_zip/<upload_file_name.zip>
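For reference, a minimal way the zip archive could be prepared from Markdown files beforehand (the archive name is only an example):
# Zip the Markdown documents before uploading
zip upload_file_name.zip *.md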
Use the following command to connect to the PostgreSQL database:
psql -h host -p <server port: default 5432> -U username -d database_name
Replace host with the host server address, username with the PostgreSQL username, and database_name with the name of the database the user wants to connect to. Run SELECT COUNT(*) FROM document_chunks to check how many document chunks were uploaded. To show the contents of a specific column, a SELECT statement followed by the column's name can be used, for example, SELECT content FROM document_chunks. To delete all uploaded data, use the TRUNCATE TABLE document_chunks; command. This command will delete all the rows in the table, so use it with caution.
We offer a simple WebUI built with Flask. This WebUI is only for reference and should not be used in a production environment.
Follow these steps to run the ready-made Flask website application with Python (a minimal sketch is shown below):
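A rough sketch, assuming the WebUI code lives in a hypothetical webui directory with an app.py entry point (adjust the directory, file name, and port to the actual application):
# Activate the environment and start the Flask development server
source env_name/bin/activate
cd webui
python app.py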
Open http://localhost:5000/ (or the URL specified by the Flask application) in a browser to see the website in action. It should open a page similar to the one shown below.
Congratulations! You are now successfully running an LLM with ADBPG and your data on Alibaba Cloud.
ChatGPT by OpenAI, one of the most popular commercial LLMs, has officially added AnalyticDB to the list of vector databases supported by its retrieval plugin.
We believe that ADBPG in the Generative AI era has the potential to revolutionize the way businesses and organizations analyze and use data. If you're interested in learning more about our software solution and how it can benefit your organization, please don't hesitate to contact us. We're always happy to answer your questions and provide a demo of our software.