Limits on the system
Item | Value |
---|---|
Number of applications per user | Unlimited |
Total number of documents per user | Theoretically unlimited. The total number of documents per user depends on the document capacity of your applications.
Total number of PVs per user | Theoretically unlimited. The total number of page views (PVs) per user depends on the peak computing resources that your applications consume.
Supported encoding format for Chinese characters | UTF-8 |
Limits on applications
Item | Maximum value |
---|---|
Length of an application name | 30 characters |
Length of a field name | 30 characters |
Length of a sort expression | 30 characters |
Number of secondary tables | 10 |
Number of fields in an application | 256 |
Length of a source table name | 16 characters |
Length of an index field name | 64 characters |
Levels of external tables that can be joined to a source table | 2 |
Number of fields of the INT type | 256 (Up to four fields of the INT type are supported for numerical analysis.) |
Number of fields of the TIMESTAMP type | 4
Number of fields of the GEO_POINT type | 2
Number of fields of the LITERAL type (not supported by composite indexes) | 256 |
Number of fields of the TEXT type | 32 |
Number of composite indexes | 4 |
Number of fields in a composite index | 8 |
Number of indexes on a single field of the TEXT type | 32 |
Number of indexes on a single field of the LITERAL type | 256 |
Limits on fields
Item | Value |
---|---|
INT64 | -2^63 to 2^63-1
FLOAT | ±3.40282e+38
DOUBLE | ±1.79769e+308
LITERAL | Up to 65,535 bytes |
TEXT | Up to 65,536 words |
SHORT_TEXT | Up to 100 bytes. If a field exceeds this length, it is truncated.
LITERAL_ARRAY | Up to 65,535 bytes, including the built-in 2-byte delimiter between elements. If a field exceeds this length, it is truncated at the last complete element that fits within 65,535 bytes. A field with a large number of elements degrades query performance, so we recommend no more than 100 elements per field.
INT_ARRAY, FLOAT_ARRAY, DOUBLE_ARRAY | If these types of fields are set as attribute fields, up to 65,535 elements are allowed per field. We recommend no more than 100 elements per field.
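These limits are enforced when documents are pushed, so it is common to validate field values on the client side first. The following is a minimal Python sketch under that assumption; the schema mapping, field names, and the check_document helper are illustrative, not part of any OpenSearch SDK.

```python
# Client-side sanity checks for the field limits listed above.
# The schema mapping and helper name are illustrative assumptions.

LIMITS_BYTES = {
    "LITERAL": 65_535,   # maximum bytes per LITERAL field
    "SHORT_TEXT": 100,   # maximum bytes per SHORT_TEXT field (longer values are truncated)
}
RECOMMENDED_ARRAY_ELEMENTS = 100  # recommended cap for *_ARRAY fields


def check_document(doc: dict, schema: dict) -> list[str]:
    """Return warnings for fields that exceed the documented limits."""
    warnings = []
    for field, value in doc.items():
        field_type = schema.get(field)
        if field_type in LIMITS_BYTES:
            size = len(str(value).encode("utf-8"))
            if size > LIMITS_BYTES[field_type]:
                warnings.append(f"{field}: {size} bytes exceeds {LIMITS_BYTES[field_type]}")
        elif field_type and field_type.endswith("_ARRAY"):
            if len(value) > RECOMMENDED_ARRAY_ELEMENTS:
                warnings.append(
                    f"{field}: {len(value)} elements; "
                    f"no more than {RECOMMENDED_ARRAY_ELEMENTS} is recommended"
                )
    return warnings


# Example with a hypothetical schema.
schema = {"title": "SHORT_TEXT", "tags": "LITERAL_ARRAY"}
doc = {"title": "a" * 200, "tags": [f"tag{i}" for i in range(150)]}
print(check_document(doc, schema))
```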
Limits on sort expressions
Item | Maximum value |
---|---|
Number of rough sort expressions | 30 |
Number of fine sort expressions | 30 |
Number of allowed characteristic functions per rough sort expression | 4 |
Limits on search result summaries
Item | Description | Value range |
---|---|---|
Segment length | The length of each segment in a search result summary. | 1 to 300 bytes
Number of segments | The number of segments in a search result summary. | 1 to 5
Limits on pushing data to a standard application
Item | Maximum value |
---|---|
Total number of documents to be pushed per API request | 1,000. For better performance, we recommend that you push 100 documents at a time. You can also package multiple documents into a single push request.
Total number of pushes per second per API request | 500 |
Total capacity per API request | 2 MB before encoding |
Total capacity per second per API request | 2 MB before encoding |
Rate of synchronizing incremental data from ApsaraDB RDS for MySQL | 2 MB/s before encoding |
Size of each document | 1 MB |
Latency of incremental data synchronization | After documents are pushed to OpenSearch, 99% of them become searchable within 1 second, and 99.9% within 1 minute.
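Because each push request is capped at 1,000 documents and 2 MB, with 100 documents per request recommended, large document sets are usually split into batches on the client side. A minimal sketch under those assumptions; push_in_batches is a hypothetical helper, and the push callable stands in for your actual OpenSearch push call.

```python
import json

MAX_DOCS_PER_REQUEST = 100               # recommended batch size (hard cap: 1,000)
MAX_BYTES_PER_REQUEST = 2 * 1024 * 1024  # 2 MB per request before encoding


def push_in_batches(documents, push):
    """Split documents into batches within the per-request limits and
    hand each batch to `push`, a caller-supplied function that performs
    the actual push request."""
    batch, batch_bytes = [], 0
    for doc in documents:
        doc_bytes = len(json.dumps(doc, ensure_ascii=False).encode("utf-8"))
        if batch and (len(batch) >= MAX_DOCS_PER_REQUEST
                      or batch_bytes + doc_bytes > MAX_BYTES_PER_REQUEST):
            push(batch)
            batch, batch_bytes = [], 0
        batch.append(doc)
        batch_bytes += doc_bytes
    if batch:
        push(batch)


# Example usage: print batch sizes instead of performing real pushes.
docs = [{"id": i, "title": f"doc {i}"} for i in range(250)]
push_in_batches(docs, lambda b: print(f"pushing {len(b)} documents"))
```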
Limits on pushing data to an advanced application
Item | Maximum value |
---|---|
Total number of documents to be pushed per API request | 1,000. For better performance, we recommend that you push 100 documents at a time. You can also package multiple documents into a single push request.
Total number of pushes per second per API request | 500 |
Total capacity per API request | 2 MB before encoding |
Total capacity per second per API request | 2 MB before encoding |
Rate of synchronizing incremental data from ApsaraDB RDS for MySQL | 2 MB/s before encoding |
Number of updates on primary and secondary tables that are triggered by database synchronization and API operation calls | 1,500 |
Number of updates on the primary table that are triggered by updates on secondary tables | 1,500 |
Size of each document | 1 MB |
Latency of incremental data synchronization | After documents are pushed to the primary table in OpenSearch, 90% of them become searchable within 10 seconds, and 99% within 10 minutes. For more information about the data synchronization of secondary tables, see Synchronization latency caused by joined tables.
Non-printable reserved characters that cannot be contained in data
Encoding | Display pattern in Emacs or Vi |
---|---|
"\x1E\n" | ^^ |
"\x1F\n" | ^_ |
"\x1D" | ^] |
"\x1C" | ^\ |
"\x1D" | ^] |
"\x03" | ^C |
Limits on searches
Item | Value |
---|---|
Maximum length of a clause (except the FILTER clause) | 1 KB before encoding
Maximum length of the FILTER clause | 4 KB before encoding
Maximum number of returned results for each request | 500 |
Maximum number of returned results | 5,000 |
Number of documents that are sorted based on rough sort | 1 million |
Number of documents that are sorted based on fine sort | 200 |
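These caps bound pagination: a single request returns at most 500 results, and paging cannot go beyond 5,000 results in total. A minimal client-side check under that reading; the offset and size parameter names are illustrative, not actual API parameters.

```python
MAX_HITS_PER_REQUEST = 500  # maximum results returned by a single request
MAX_TOTAL_RESULTS = 5_000   # maximum results reachable by paging


def validate_page(offset: int, size: int) -> None:
    """Raise if a page request would exceed the documented search limits."""
    if size > MAX_HITS_PER_REQUEST:
        raise ValueError(f"page size {size} exceeds {MAX_HITS_PER_REQUEST} per request")
    if offset + size > MAX_TOTAL_RESULTS:
        raise ValueError(f"offset {offset} + size {size} exceeds {MAX_TOTAL_RESULTS} total")


validate_page(4_800, 200)    # OK: reaches exactly the 5,000-result cap
# validate_page(4_900, 200)  # would raise: 5,100 > 5,000
```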
Limits on reindexing
Item | Maximum value |
---|---|
Rate of reading data from data sources | 20 MB/s |
Incremental synchronization rate of real-time data | 10 MB/s |
Rate of synchronizing batch data to OpenSearch | 20 MB/s |
Note: For more information, see Real-time data synchronization in OpenSearch.
If an index rebuild takes an unusually long time or the data synchronization latency remains high, submit a ticket to request technical support.