Database management
Instance management

| Feature | Description | Reference |
| --- | --- | --- |
| Create and release instances | You can create and release Lindorm instances in the Lindorm console. | - |
| Manage instance storage | Storage capacity management is part of database management and covers the planning, allocation, configuration, monitoring, and scaling of database storage capacity. | Manage instance storage |
| Configure deletion protection | You can enable deletion protection to prevent instances that run important services from being accidentally released. An instance with deletion protection enabled cannot be released. To release such an instance, first disable deletion protection, and then release the instance. | Configure deletion protection |
| Configure a maintenance window | Alibaba Cloud performs maintenance on Lindorm instances on an irregular basis to ensure their stability. A maintenance window is the time period during which Alibaba Cloud is allowed to perform these maintenance operations. | Specify the maintenance window |
| Modify the configurations of an instance | You can modify the configurations of a Lindorm instance in the Lindorm console. For example, you can deactivate an engine that is no longer in use or change the node specifications of the instance. | Modify the configurations of an instance |
| Upgrade the minor engine version of an instance | After a new minor version of an engine is released, you can upgrade the engine of your Lindorm instance to the latest minor version in the Lindorm console with a few clicks. A minor version upgrade optimizes existing features and introduces new ones. | Upgrade the minor engine version of a Lindorm instance |
| Manage tags | If you have a large number of Lindorm instances, you can use tags to classify and filter them. In the Lindorm console, you can create, attach, detach, and delete tags, and filter instances by tag. | Manage tags |
Monitoring and alerting

| Feature | Description | Reference |
| --- | --- | --- |
| Create alert rules | You can create alert rules for the key metrics of Lindorm instances. If the value of a metric falls below or exceeds the specified threshold, the system automatically sends an alert notification to the contacts in the alert group so that you can handle the issue at the earliest opportunity. | Create an alert rule |
| View monitoring information | You can view the monitoring information about each engine of a Lindorm instance in the Lindorm console. This way, you can check the usage of common resources in your instance, such as CPU and memory. | View monitoring information |
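The alert-rule behavior described above (comparing a metric value against an upper or lower threshold) can be sketched as follows. The `AlertRule` structure and `check` helper are hypothetical illustrations, not part of any Lindorm API:

```python
# Illustrative sketch of threshold-based alerting as described above.
# The AlertRule structure and check() helper are hypothetical; they are
# not part of any Lindorm API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlertRule:
    metric: str
    lower: Optional[float] = None   # alert if value < lower
    upper: Optional[float] = None   # alert if value > upper

def check(rule: AlertRule, value: float) -> Optional[str]:
    """Return an alert message if the value breaches a threshold, else None."""
    if rule.lower is not None and value < rule.lower:
        return f"{rule.metric}={value} is below {rule.lower}"
    if rule.upper is not None and value > rule.upper:
        return f"{rule.metric}={value} is above {rule.upper}"
    return None

cpu_rule = AlertRule(metric="cpu_usage", upper=80.0)
print(check(cpu_rule, 92.5))  # breaches the upper threshold
print(check(cpu_rule, 35.0))  # within bounds -> None
```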
Multimodal capabilities
Multimodal engine integration

| Feature | Description | Reference |
| --- | --- | --- |
| Access LindormSearch from LindormTable | Lindorm provides search indexes by integrating LindormTable with LindormSearch. Search indexes are used in complex multi-dimensional query scenarios, such as word segmentation, fuzzy queries, aggregate analysis, and sorting and paging. | Access LindormSearch from LindormTable |
| Access columnar indexes from LindormTable | In addition to high-performance queries based on the primary key and data search based on search indexes, Lindorm supports columnar indexes, which allow you to analyze and compute large amounts of data in wide tables more efficiently. | - |
| Access wide table data from LDPS | You can use Lindorm SQL in LDPS to access data in and write data to wide tables. | Access data in LindormTable |
Wide table engine (LindormTable)

| Feature | Description | Reference |
| --- | --- | --- |
| Dynamic columns | LindormTable supports dynamic columns, which allow you to write and query columns that are not predefined in the table schema. This simplifies the design of the table schema. | Dynamic columns |
| Secondary indexes | LindormTable supports the secondary index feature in the Tabular model. In query scenarios where primary key columns are not specified in the match conditions, native secondary indexes help reduce the complexity of application development, ensure data consistency, and improve write efficiency. | Secondary indexes |
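The idea behind a secondary index, as described above, can be sketched in memory: map a non-primary-key column value to the primary keys of matching rows, so a query on that column avoids a full table scan. This is an illustration of the concept only, not Lindorm's implementation:

```python
# In-memory sketch of the secondary-index idea: map a non-primary-key
# column value to the primary keys of matching rows. Illustrative only;
# LindormTable maintains native secondary indexes server-side.
from collections import defaultdict

table = {}                      # primary key -> row
city_index = defaultdict(set)   # secondary index on the "city" column

def put(pk, row):
    old = table.get(pk)
    if old is not None:         # keep the index consistent on updates
        city_index[old["city"]].discard(pk)
    table[pk] = row
    city_index[row["city"]].add(pk)

def query_by_city(city):
    """Look up primary keys in the index, then fetch the rows."""
    return [table[pk] for pk in sorted(city_index[city])]

put("u1", {"name": "alice", "city": "hangzhou"})
put("u2", {"name": "bob", "city": "beijing"})
put("u3", {"name": "carol", "city": "hangzhou"})
print(query_by_city("hangzhou"))  # two matching rows, found without a full scan
```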
Time series engine (LindormTSDB)

| Feature | Description | Reference |
| --- | --- | --- |
| Downsampling queries | A downsampling query is an aggregate query that is performed based on a specified time interval. Downsampling queries are used to decrease the sample rate of time series data. | Downsampling queries |
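The downsampling idea described above amounts to bucketing raw data points into fixed time intervals and aggregating each bucket. A minimal local sketch (LindormTSDB performs this server-side through its query language):

```python
# Sketch of downsampling: bucket raw (timestamp, value) points into fixed
# time intervals and aggregate each bucket. Illustrative only; LindormTSDB
# performs this server-side.
from collections import defaultdict

def downsample(points, interval, agg=lambda vals: sum(vals) / len(vals)):
    """points: iterable of (timestamp, value); interval: bucket width in seconds."""
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts - ts % interval].append(value)
    return {start: agg(vals) for start, vals in sorted(buckets.items())}

raw = [(0, 10.0), (15, 20.0), (30, 30.0), (60, 40.0), (75, 50.0)]
print(downsample(raw, 60))  # {0: 20.0, 60: 45.0} -> one averaged point per minute
```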
Search engine (LindormSearch)

| Feature | Description | Reference |
| --- | --- | --- |
| Single-value data query | You can use the primary key or a unique index to quickly query or update a specific data record. | Single-value data query |
| Multi-dimensional searches | Lindorm automatically builds indexes for tags that are involved in complex multi-dimensional queries. You can query data from multiple dimensions based on tags. | - |
| Custom dictionaries | LindormSearch supports custom dictionaries. You can update the dictionary files and stopword lists based on your requirements to improve search efficiency and user experience. | Custom dictionaries |
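A multi-dimensional tag search, as described above, can be illustrated with an inverted index: each tag maps to the set of record ids that carry it, and a multi-tag query intersects those sets. This is a conceptual sketch, not LindormSearch internals:

```python
# Sketch of multi-dimensional tag search: an inverted index maps each
# (key, value) tag to the ids that carry it, and a multi-tag query
# intersects the posting sets. Illustrative only.
from collections import defaultdict

tag_index = defaultdict(set)  # (tag_key, tag_value) -> record ids

def index_record(record_id, tags):
    for key, value in tags.items():
        tag_index[(key, value)].add(record_id)

def search(**tags):
    """Return ids that match every given tag (set intersection)."""
    postings = [tag_index[(k, v)] for k, v in tags.items()]
    return set.intersection(*postings) if postings else set()

index_record("s1", {"region": "cn-hangzhou", "device": "sensor"})
index_record("s2", {"region": "cn-beijing", "device": "sensor"})
index_record("s3", {"region": "cn-hangzhou", "device": "gateway"})
print(search(region="cn-hangzhou", device="sensor"))  # {'s1'}
```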
Compute engine (LDPS)

| Feature | Description | Reference |
| --- | --- | --- |
| Develop jobs | LDPS allows you to develop jobs by using various methods. You can specify custom configuration items, resource specifications, and resource sizes for each JDBC, Java, or Python job that you submit. | Job development |
| Access data | LDPS allows you to access, read, and write data in various data sources, including columnar databases, Hive, wide tables, and Kafka. | Data access |
| Manage jobs | LDPS allows you to schedule computing jobs by using the Lindorm console, DMS, or DataWorks. This helps you efficiently complete distributed computing jobs in scenarios such as data production, interactive analytics, machine learning, and graph computing. | Job management |
Lindorm Ganos

| Feature | Description | Reference |
| --- | --- | --- |
| Spatio-temporal indexes | LindormTable provides spatio-temporal indexes that are built on the primary key indexes and secondary indexes of Lindorm to accelerate spatio-temporal queries. These indexes allow you to query and analyze spatio-temporal data efficiently. | Spatio-temporal indexes |
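A common building block behind spatio-temporal indexing is encoding multi-dimensional coordinates into a single sortable key, for example with a Z-order (Morton) curve, so that points that are close in space tend to stay close in key order and a range scan can cover a spatial region. A minimal sketch of that idea, not the actual encoding used by Lindorm Ganos:

```python
# Minimal Z-order (Morton) encoding sketch: interleave the bits of two
# grid coordinates into one sortable key. Spatio-temporal indexes commonly
# build on this kind of linearization; this is an illustration only, not
# the encoding used by Lindorm Ganos.

def interleave(x: int, y: int, bits: int = 16) -> int:
    """Build a Z-order key by alternating the bits of x and y."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # x occupies even bit positions
        key |= ((y >> i) & 1) << (2 * i + 1)  # y occupies odd bit positions
    return key

# Nearby grid cells produce nearby keys, so one key-range scan can cover
# a small spatial region.
print(interleave(0, 0), interleave(1, 0), interleave(0, 1), interleave(1, 1))
# -> 0 1 2 3
```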
Security and compliance
Multitenancy and security

| Feature | Description | Reference |
| --- | --- | --- |
| Authentication and ACL | Lindorm supports ACLs and authentication based on usernames and passwords. | Authentication and ACL |
| Configure whitelists | You can configure whitelists to control access to Lindorm. By default, a Lindorm instance cannot be accessed by any device after it is created, to ensure security and stability. Therefore, you must configure a whitelist for a Lindorm instance before you can access it from an external device. | Configure a whitelist |
| Add security groups | A security group is a virtual firewall that is used to manage the inbound and outbound traffic of specific Elastic Compute Service (ECS) instances. After a security group is added to the whitelist of a Lindorm instance, the ECS instances in the security group can access the Lindorm instance. | Add a security group |
| Transparent Data Encryption (TDE) | Lindorm supports the TDE feature. After you enable this feature for an instance, all data in the instance and all operational logs of the instance are encrypted to ensure data security and privacy during transmission and storage. | Enable the TDE feature |
| Audit logs | You can use the audit log feature to accurately analyze all operations on data in a specified time period and filter the logs based on specified fields. | Audit logs |
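The whitelist check described above amounts to testing whether a client address falls within one of the allowed CIDR blocks. A local sketch using Python's standard `ipaddress` module (Lindorm enforces this server-side; the addresses below are placeholders):

```python
# Sketch of a CIDR whitelist check: a client is allowed only if its
# address falls inside one of the whitelisted networks. Illustrative
# only; Lindorm enforces whitelists server-side, and these addresses
# are placeholders.
import ipaddress

whitelist = [ipaddress.ip_network("10.0.0.0/24"),
             ipaddress.ip_network("192.168.1.10/32")]

def is_allowed(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in whitelist)

print(is_allowed("10.0.0.42"))    # True: inside 10.0.0.0/24
print(is_allowed("203.0.113.5"))  # False: not whitelisted
```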
Ecosystem
Data import and export

| Feature | Description | Reference |
| --- | --- | --- |
| Migrate and synchronize data from HBase clusters to Lindorm | You can migrate and synchronize data from HBase clusters to LindormTable in real time without service interruption. | Migrate and synchronize data from HBase clusters to Lindorm |
| Import incremental data from Log Service | You can import incremental data from Log Service to a Lindorm wide table in the Lindorm console. | Import incremental data from Log Service |
| Migrate full data from TSDB to LindormTSDB | You can migrate full data from TSDB to LindormTSDB. | Migrate full data from TSDB to LindormTSDB |
| Import data from ApsaraDB for MongoDB | You can use DataWorks to migrate offline data from ApsaraDB for MongoDB to LindormTable. | Import data from ApsaraDB for MongoDB |
| Import data in batches | You can use the bulkload feature to quickly import data in a stable manner. | Import data in batches |
| Import data from Prometheus to LindormTSDB | You can use DataX to migrate data from Prometheus Service (Prometheus) to LindormTSDB. | Use DataX to import data from Prometheus |
| Migrate data from a self-managed HDFS cluster to LindormDFS | You can use the Apache Hadoop distributed copy (DistCp) tool to migrate full or incremental data from a self-managed Hadoop cluster to LindormDFS. | Migrate data from a self-managed HDFS cluster to LindormDFS |
| Migrate data from an OSS bucket to LindormDFS | You can migrate data from an Object Storage Service (OSS) bucket to LindormDFS. | Migrate data from an OSS bucket to LindormDFS |
| Archive data in a TP database to Lindorm | You can use DMS to archive data in a TP database to Lindorm. | - |
Integration with open source ecosystem

| Feature | Description | Reference |
| --- | --- | --- |
| Compatibility with Apache HDFS | LindormDFS is 100% compatible with HDFS. You can use HDFS shells or HDFS FUSE to access LindormDFS. | - |
| Compatibility with Apache HBase and Cassandra | LindormTable is compatible with the standard interfaces of Apache HBase and Cassandra. You can use various methods to develop applications based on Apache HBase APIs and Cassandra SQL. | Application development guide |
| Compatibility with Apache Flink and Kafka | The Lindorm streaming engine is compatible with Apache Flink and Kafka, so Lindorm provides database and streaming services in an integrated manner. Compared with a traditional streaming solution that combines Apache Kafka, Apache Flink, and a database service, Lindorm provides integrated storage, computing, and query capabilities, which simplifies O&M and reduces development costs. | Use Kafka to write data to the Lindorm streaming engine |
Ease of use

| Feature | Description | Reference |
| --- | --- | --- |
| Unified SQL syntax for all engines | You can use SQL to perform operations in all Lindorm engines. | Lindorm SQL syntax |
Storage
Data storage

| Feature | Description | Reference |
| --- | --- | --- |
| Data reading and writing | Lindorm supports multiple data models, such as key-value, document, and time series models, and various query languages. You can use SQL or open source APIs to query and manage data, and perform read and write operations based on your requirements. | Read and write data |
| Data compression | In addition to the Snappy algorithm supported by Apache HBase, Lindorm supports a variety of compression algorithms, such as dictionary compression and ZSTD. You can select a compression algorithm based on your business requirements. | Data compression |
| Tiered storage | Lindorm supports tiered storage of hot and cold data to reduce storage costs and improve storage efficiency. You can select the storage media based on the access frequency of your data. | Tiered storage |
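Dictionary compression, mentioned above, seeds the compressor with bytes that recur across many small records, so even short payloads compress well. The following sketch uses the preset-dictionary (`zdict`) support in Python's standard `zlib` module purely as a local stand-in to illustrate the idea; it is unrelated to Lindorm's actual server-side compression implementation:

```python
# Illustration of dictionary compression using zlib's preset-dictionary
# support as a local stand-in. Lindorm's own algorithms (ZSTD, dictionary
# compression) run server-side; this only shows why a shared dictionary
# helps with short, similar records.
import zlib

# Bytes that recur across records are collected into a shared dictionary.
shared_dict = b'{"device_id": "", "region": "cn-hangzhou", "metric": "temperature"}'

record = b'{"device_id": "d-42", "region": "cn-hangzhou", "metric": "temperature"}'

def compress(data: bytes, zdict: bytes = b"") -> bytes:
    c = zlib.compressobj(zdict=zdict)
    return c.compress(data) + c.flush()

def decompress(data: bytes, zdict: bytes = b"") -> bytes:
    d = zlib.decompressobj(zdict=zdict)
    return d.decompress(data) + d.flush()

plain = compress(record)
with_dict = compress(record, shared_dict)
assert decompress(with_dict, shared_dict) == record  # lossless round trip
print(len(record), len(plain), len(with_dict))  # dictionary output is smallest
```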