Serverless Application Engine (SAE) is an application-oriented serverless PaaS platform. You do not need to manage or maintain the underlying infrastructure such as IaaS. You can use pay-as-you-go SAE resources to meet your business requirements. SAE provides an easy-to-use method to manage microservices and allows you to migrate your PHP applications to the cloud. This topic describes how to get started with SAE and provides multiple best practices for using SAE.
Background information
The first time you use SAE, we recommend that you watch the tutorial video to get familiar with SAE. For more information, see What is Serverless App Engine?
Features
Procedure
The following figure shows how to use SAE.
The first time you deploy an SAE application, make sure that a virtual private cloud (VPC), vSwitch, and namespace are created. The namespace is used to isolate different runtime environments such as the test environment, staging environment, and production environment.
For more information, see Preparations.
Deploy an application in the SAE console.
For more information, see Application hosting overview. You can deploy applications in the SAE console. You can also use Jenkins, IDE plug-ins, Maven plug-ins, Terraform, or OpenAPI Explorer to deploy applications to SAE.
Note: The first time you deploy an application to SAE, you must create an application in the SAE console.
Use one of the following methods to access an SAE application:
Method 1: Bind a Server Load Balancer (SLB) instance to an application. You must specify a port for each application. For more information, see Bind an SLB instance to an application.
Method 2: Configure gateway routing. You can access multiple applications from the same port. For more information, see Configure gateway routing for an application by using a CLB instance.
Method 3: Associate elastic IP addresses (EIPs) with an application. Each instance must be associated with an EIP. For more information, see Enable Internet access for SAE instances based on EIPs.
Configure more advanced features for SAE applications.
SAE provides the following advanced features and capabilities: enterprise-level permission control, elasticity and cost-effectiveness, optimized Java microservices, high availability, and storage.
Deployment
SAE allows you to deploy applications by using code packages and images. The code packages include WAR packages, JAR packages, and ZIP packages. When you create an SAE application, you can select the manual configuration method or automatic configuration method to specify a VPC, a vSwitch, and a security group for the application. You must also specify an instance type. After the application is created, you can change the instance type. The following section provides an example of how to configure settings to deploy an SAE application.
Startup command and parameters
Image-based deployment
You can specify parameters for a startup command in a Dockerfile. You can also override the startup command in the SAE console. The following figure shows the Startup Command Settings section.
Code package-based deployment
In this example, a JAR package is used to deploy an application. You can specify parameters for a startup command in the SAE console. The following figure shows the Startup Command Settings section.
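For reference, the following sketch (not from the SAE documentation) illustrates how the parameters that you specify in the startup command reach a JAR application: -D options become JVM system properties, and trailing values become main() arguments. The property name and class name are hypothetical.

```java
// Hypothetical startup command: java -Dapp.env=prod -jar demo.jar --port=8080
import java.util.Arrays;

public class StartupArgsDemo {
    public static void main(String[] args) {
        // -D options are read as JVM system properties.
        System.out.println("app.env = " + System.getProperty("app.env", "default"));
        // Trailing values are passed as program arguments.
        System.out.println("Program arguments: " + Arrays.toString(args));
    }
}
```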
Database whitelist
Unlike the Elastic Compute Service (ECS) mode, in which an application runs on a fixed server, SAE deploys applications in containers, so the IP address of an instance may change. If you want an SAE application to access a database, you must add the CIDR block of the vSwitch to the whitelist of the database. For more information, see Enable access from applications to Alibaba Cloud database instances.
CI/CD
SAE allows you to deploy applications in the console and by calling API operations. SAE also allows you to use continuous integration and continuous delivery (CI/CD) tools, such as Apsara DevOps and Jenkins, to automatically deploy applications after you submit code. For more information, see Application hosting overview.
Upgrade and rollback
SAE allows you to configure the following types of upgrade policies and rollback policies: single-batch release, phased release, canary release, and rollback. For more information, see Upgrade and roll back applications.
Other settings
Networks
After you deploy an application to SAE, you may have various network access requirements. For more information, see Concepts and capabilities related to SAE networks.
Alibaba Cloud network infrastructure
Virtual private cloud (VPC): A VPC is a private network on Alibaba Cloud. VPCs are logically isolated from each other.
Note: By default, access from VPCs to the Internet is denied.
vSwitch: A vSwitch is a basic network component that connects different cloud resources in a VPC. A vSwitch roughly corresponds to a physical data center. When you create a cloud resource in a VPC, you must specify the vSwitch to which the cloud resource is connected.
Elastic IP address (EIP): You can associate an EIP with only one resource, such as an Elastic Compute Service (ECS) instance or an SAE instance. Then, the associated resource can access and be accessed by other services over the Internet.
NAT gateway: The source network address translation (SNAT) feature of a NAT gateway allows all resources in a VPC to access the Internet. An Internet NAT gateway is suitable for all resources in a VPC, whereas an EIP is suitable for only one resource in a VPC.
Scenarios and methods of SAE network access
After you deploy an application on SAE, you may have various network access requirements. The following figure shows the network access scenarios.
Mutual access between SAE applications over an internal network (non-microservices scenarios)
In serverless mode, new internal IP addresses are generated each time you deploy an application. As a result, you cannot access the application by using the IP addresses of its instances. You can use one of the following methods to enable access:
SAE Service (CLB): You can configure SAE service access based on internal-facing Classic Load Balancer (CLB) instances. For more information, see Configure application access based on CLB instances.
SAE ServiceName: You can configure SAE application access based on Kubernetes Service names. Each SAE application has a domain name that can be accessed in the SAE environment. For more information, see Configure application access based on Kubernetes Service names.
SAE Ingress (ALB/CLB): You can configure gateway routing for application access based on internal-facing Application Load Balancer (ALB) and internal-facing CLB instances. Then, you can access different SAE applications by using different domain names and paths. For more information, see Configure gateway routing for application access based on ALB and CLB instances.
Access to SAE applications from the Internet (inbound traffic)
You can use one of the following methods to enable access:
SAE Service (CLB): You can configure SAE service access based on Internet-facing CLB instances. For more information, see Configure application access based on CLB instances.
SAE Ingress (ALB/CLB): You can configure gateway routing for application access based on Internet-facing ALB and Internet-facing CLB instances. Then, you can access different SAE applications by using different domain names and paths. For more information, see Configure gateway routing for application access based on ALB and CLB instances.
SAE EIP: If you associate each instance of an SAE application with an EIP, the instances can access and be accessed by other services over the Internet. For more information, see Enable Internet access for SAE instances based on EIPs.
Access to the Internet from SAE applications (outbound traffic)
You can use one of the following methods to enable access:
NAT gateway: If you configure a NAT gateway for the VPC or vSwitch that is associated with an SAE application, the SAE applications that use the same VPC or vSwitch can access the Internet. For more information, see Configure a NAT gateway for an SAE application to enable Internet access.
SAE EIP: If you associate each instance of an SAE application with an EIP, the instances can access and be accessed by other services over the Internet. For more information, see Enable Internet access for SAE instances based on EIPs.
Access to ECS instances, ApsaraDB RDS databases, and ApsaraDB for Redis databases from SAE applications in a VPC
SAE integrates with Alibaba Cloud VPCs. An SAE application can access resources in the VPC where the application resides without additional configuration. For example, an SAE application in a VPC can access ECS instances, ApsaraDB RDS databases, and ApsaraDB for Redis databases that reside in the same VPC.
You must check whether the related security groups and service whitelists are configured.
Access to registries from microservices applications and mutual access between instances
For more information, see Concepts and capabilities related to SAE microservices.
Comparison items in SAE networks
Differences between ServiceName and gateway routing in SAE
The gateway routing feature of SAE is implemented based on ALB and CLB instances, which belong to the Server Load Balancer (SLB) family. This feature allows you to access different applications by using domain names and paths, as shown in the following figure. The ServiceName method does not provide this capability. We recommend that you use the gateway routing feature if it meets your business requirements. If you require access over Layer 4 TCP or you cannot access an application by using a domain name, use the ServiceName method.
Differences between CLB-based application access and Kubernetes Service name-based application access
Kubernetes Services are classified into the following types: CLB-based Services and ClusterIP-based Services. SAE provides an accessible domain name for each application instead of specific ClusterIPs. The following table describes the differences between the Service types.
| Item | CLB | Domain (ClusterIP) |
| --- | --- | --- |
| Fee | You are charged for the CLB instances that you use. | Free of charge. |
| O&M | CLB is an independent Alibaba Cloud service that provides features such as monitoring, alerting, and log collection to Simple Log Service. CLB provides fine-grained troubleshooting capabilities. | This type of Service does not provide independent monitoring, alerting, or access log collection capabilities. You need to configure alerts and logs for the application. |
Differences between ALB-based gateway routing and CLB-based gateway routing
ALB is a load balancing service that runs at the application layer and supports protocols such as HTTP, HTTPS, and QUIC. We recommend that you use an ALB instance in gateway routing scenarios. For more information, see What is SLB?
Differences between NAT-based Internet access and EIP-based Internet access
The following figure shows how to enable EIP-based Internet access. Each instance is associated with an EIP. If the EIP quota of the current Alibaba Cloud account is insufficient, EIPs fail to be created. In this case, the instances that are not associated with EIPs cannot access or be accessed by other services over the Internet.
The following table shows the differences between NAT-based Internet access and EIP-based Internet access.
| Item | NAT | EIP |
| --- | --- | --- |
| Effective scope | A NAT gateway takes effect on a VPC or a vSwitch. An Internet NAT gateway allows all instances in the VPC or vSwitch to access the Internet even if no public IP addresses are associated with the instances. Only one NAT gateway is required for a VPC or vSwitch, after which all instances in the VPC can access the Internet. | An EIP takes effect on a single instance. If you have 10 instances, you must configure 10 EIPs. After you associate an EIP with an instance, the instance can access and be accessed by other services over the Internet. |
| Fixed public IP address | Yes. | No. SAE releases original instances and disassociates their EIPs only after the new instances are associated with EIPs, so you must prepare additional EIPs for the new instances. The application uses a pool of EIPs rather than a fixed public IP address. |
| Scenario | NAT-based Internet access is suitable for scenarios in which auto scaling policies are configured for applications and new instances must be able to access the Internet by default through a fixed public IP address. This method meets the business requirements of about 95% of SAE users. | EIP-based Internet access is suitable for scenarios in which changing public IP addresses are acceptable, instances need to be directly connected (for example, online conferencing), and the lifecycle of each instance needs to be managed in a fine-grained manner. |
| Fee | For more information, see Billing of Internet NAT gateways. | For more information, see EIP billing. If the number of instances is less than or equal to 20, the EIP-based method is more cost-efficient. |
Optimized microservices
SAE is a service that combines the serverless architecture and microservices framework. SAE provides multiple microservices-based capabilities. For more information, see Concepts and capabilities related to SAE microservices.
Registry
Usage notes
SAE provides a serverless Nacos registry that allows you to quickly deploy microservices applications to SAE. The built-in Nacos registry is suitable for microservices applications that use the Nacos 1.x or 2.x client. For more information about how to deploy a microservices application to SAE, see Use the SAE built-in Nacos registry. When you use the SAE registry, take note of the following items:
If you select SAE Built-in Nacos, SAE automatically modifies the registry address of the program that you want to deploy by injecting related environment variables and using a Java agent to modify bytecode. You can deploy a program to SAE without the need to modify the program.
The SAE built-in Nacos registry is not suitable for programs that use non-Nacos registries. The related logic is controlled by your program.
This registry is suitable for quick operations or small-scale production environments. If you have more than 30 microservices application instances, we recommend that you use a self-managed Nacos registry or a Microservices Engine (MSE) Nacos registry.
Application configurations
For information about how to configure service registry and discovery, see Use the SAE built-in Nacos registry.
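As a reference, the following minimal Java sketch shows a Spring Cloud provider that can be deployed with the SAE built-in Nacos registry. It assumes that the Spring Cloud Alibaba Nacos Discovery starter is on the classpath, and the class name is hypothetical. Note that no registry address appears in the code: when you select SAE Built-in Nacos, SAE injects the address through environment variables and a Java agent.

```java
// Minimal sketch of a provider application; the registry address is supplied by SAE, not by the code.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient
public class ProviderApplication {
    public static void main(String[] args) {
        SpringApplication.run(ProviderApplication.class, args);
    }
}
```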
Configuration center
Usage notes
SAE provides a serverless Nacos configuration center. The configuration center is suitable for microservices applications that use the Nacos 1.x or 2.x client. For information about how to use the registry provided by SAE, see Use the SAE built-in Nacos registry. When you use the SAE configuration center, take note of the following items:
If you select SAE Built-in Nacos, SAE automatically modifies the configuration center address of the program that you want to deploy by injecting related environment variables and using a Java agent to modify bytecode. You can deploy a program to SAE without the need to modify the program.
The SAE built-in Nacos configuration center is not suitable for programs that use non-Nacos configuration centers. The related logic is managed by your program.
The configuration center console is provided by Alibaba Cloud Application Configuration Management (ACM). ACM is no longer available for use. However, you can still use the distributed configuration management feature of SAE. We recommend that you use MSE Nacos 2.0 to manage configurations. For more information, see Nacos edition features.
Application configurations
For information about how to configure service registry and discovery, see Use the SAE built-in Nacos registry. For information about how to manage configurations in the SAE console, see Configuration management overview.
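As a reference, the following minimal Java sketch shows how an application typically consumes a value from a Nacos-compatible configuration center. It assumes that the Spring Cloud Alibaba Nacos Config starter is on the classpath; the property key demo.greeting is hypothetical. @RefreshScope lets the bean pick up configuration changes that are pushed from the configuration center.

```java
// Minimal sketch of a bean that reads a dynamically refreshed configuration value.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope
public class ConfigDemoController {
    @Value("${demo.greeting:Hello}")   // hypothetical key with a default value
    private String greeting;

    @GetMapping("/greeting")
    public String greeting() {
        return greeting;
    }
}
```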
Microservices development
IDE-based automatic deployment
You may need to upload a package each time you deploy an application in the SAE console. You can perform integrated development environment (IDE)-based deployment to improve development efficiency.
For more information, see Use Alibaba Cloud Toolkit to automatically deploy a microservices application to SAE.
Microservices governance
Services list
For applications that use the built-in Nacos registry, SAE provides a basic service query feature. If you use a self-managed Nacos registry or an MSE Nacos registry, log on to the corresponding service console to query services. In this case, you do not need to query the information in the SAE console.
For more information, see Query services.
Graceful shutdown
When you stop a provider instance of a microservices application and remove it from the registry, consumer applications may not be notified of the shutdown immediately because the consumers cache the registry data. The consumers become aware of the change only after their caches are refreshed, so calls may fail during this window. To resolve this issue, SAE integrates the graceful shutdown feature of MSE.
The graceful shutdown feature provided by MSE has advantages over the graceful shutdown solutions that are provided by open source Spring Cloud and Dubbo. However, the feature is not supported for some applications. For more information, see Configure graceful shutdown.
For information about how to configure graceful shutdown in SAE, see Configure graceful shutdown of microservices.
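The following Java sketch is not SAE's or MSE's implementation; it only illustrates the application-side idea behind graceful shutdown: stop accepting new work when the instance receives a termination signal and wait briefly so that in-flight requests can finish. The 10-second drain window is a hypothetical value.

```java
// Minimal sketch of an application-level drain step during shutdown.
import java.util.concurrent.TimeUnit;

public class GracefulShutdownDemo {
    private static volatile boolean accepting = true;

    public static void main(String[] args) throws InterruptedException {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            accepting = false;                       // stop taking new requests
            System.out.println("Termination signal received, draining in-flight requests...");
            try {
                TimeUnit.SECONDS.sleep(10);          // hypothetical drain window
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("Drain complete, exiting.");
        }));

        // Simulated request loop; a real service would run its server here.
        while (accepting) {
            TimeUnit.SECONDS.sleep(1);
        }
    }
}
```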
Graceful start
After a provider is registered with a registry, its services can immediately be called by consumers that use the registry, even though the provider may still need to perform initialization tasks, such as initializing a database connection pool. We recommend that you enable the graceful start feature for microservices applications that receive large amounts of traffic.
For more information, see Configure graceful start of microservices.
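The following Java sketch illustrates one possible warm-up step that a graceful start relies on: pre-establishing database connections before the instance starts serving traffic. It assumes that a JDBC driver is on the classpath; the connection URL and credentials are hypothetical placeholders.

```java
// Minimal sketch: warm up a few database connections before serving traffic.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class WarmUpDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://rm-example.mysql.rds.aliyuncs.com:3306/demo"; // hypothetical
        List<Connection> pool = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            pool.add(DriverManager.getConnection(url, "user", "password"));      // hypothetical credentials
        }
        System.out.println("Warm-up finished: " + pool.size() + " connections ready");
        // ... register with the registry and start serving traffic only after warm-up
    }
}
```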
Application monitoring
In a microservices architecture, issues may go unidentified if no monitoring system is in place. SAE integrates with Application Real-Time Monitoring Service (ARMS) to provide capabilities such as application dashboards, Java Virtual Machine (JVM) monitoring, slow call monitoring, trace analysis, and alerting. This lowers the barrier for enterprises to adopt microservices architectures.
For more information, see Application monitoring.
Multiple programming languages
Supported PHP runtime environments
SAE supports the following deployment methods:
Image: This method is suitable for PHP applications of all architectures.
PHP ZIP package: This method is suitable for all online applications that combine PHP-FPM and NGINX.
By default, SAE provides a PHP runtime environment. For more information, see PHP runtime environment.
Static file hosting
SAE allows you to use File Storage NAS and Object Storage Service (OSS) to host static files. You can persistently store the code, templates, and uploaded files during application runtime. You can also share files among instances.
Remote debugging
SAE provides multiple debugging capabilities.
PHP remote debugging
The built-in Xdebug plug-in of SAE allows you to perform remote debugging.
File download
SAE allows you to use the webshell feature to log on to instances. You can use SAE or OSS to download files. For more information, see Use the webshell feature to upload and download files.
File upload
SAE allows you to use NAS and OSS to write and debug code in a convenient manner.
Logs
SAE integrates with Simple Log Service and ApsaraMQ for Kafka to support log collection. SAE allows you to view up to 500 lines of logs. If you require more information, we recommend that you use the file log collection feature. You can specify stdout or the log path in a container as the log source. Then, SAE collects standard output (stdout) logs or business file logs from the container and sends the logs to Simple Log Service or ApsaraMQ for Kafka.
File logs
SAE integrates with Simple Log Service to support log collection. You can enable the feature in the SAE console. In ECS mode, you need to maintain the servers from which logs are collected. SAE provides the automatic collection feature for file logs. After you specify directories or files in SAE, SAE is connected to Simple Log Service each time applications are deployed or instances are scaled out. You can query logs by keyword in the Simple Log Service console. For more information, see Configure log collection to Simple Log Service.
You can specify wildcards when you specify a log source. For example, if you specify /tmp/log/*.log, all log files whose names end with .log in the /tmp/log directory and its subdirectories are collected.
If you cannot use Simple Log Service to collect logs or you cannot view logs in the Simple Log Service console as a RAM user, you can import logs to ApsaraMQ for Kafka. Then, you can deliver data from ApsaraMQ for Kafka to other persistent databases, such as Elasticsearch databases, based on your business requirements. This way, you can manage and analyze logs in a centralized manner. For more information, see Collect logs to ApsaraMQ for Kafka and Import logs from SAE to ApsaraMQ for Kafka.
You can use environment variables to configure Logtail startup parameters. For more information, see Configure environment variables to improve the collection performance of Logtail.
Real-time logs
SAE automatically collects stdout logs and retains the latest 500 logs. You can view the logs in the SAE console. For more information, see Log management.
If you want to collect stdout logs to Simple Log Service, you can export the logs as files, and then configure file collection.
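The following Java sketch illustrates the two log sources described above: writing to stdout, which is collected as real-time logs, and writing to a file under /tmp/log/ that matches a wildcard such as /tmp/log/*.log, which is collected as file logs. The file name and message are illustrative.

```java
// Minimal sketch: emit the same log line to stdout and to a file log.
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.time.Instant;

public class LogDemo {
    public static void main(String[] args) throws IOException {
        new File("/tmp/log").mkdirs();               // make sure the log directory exists
        try (PrintWriter fileLog = new PrintWriter(new FileWriter("/tmp/log/app.log", true))) {
            String line = Instant.now() + " order-service started";
            System.out.println(line);                // stdout: shown as real-time logs in the SAE console
            fileLog.println(line);                   // file log: collected to Simple Log Service or Kafka
        }
    }
}
```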
Storage
SAE provides a 20 GB system disk for each instance. If you need to read data from and write data to external storage, we recommend that you use NAS or OSS. To check the health status of SAE applications, you can perform routine checks and upload logs. You can upload logs to OSS, or use the built-in one-click upload and download feature of SAE.
We recommend that you use Simple Log Service instead of NAS or OSS in log collection and storage scenarios. For more information, see Configure log collection to Simple Log Service.
NAS
SAE allows you to use NAS to store data. This way, you can persistently store instance data and distribute data between instances. You can access a NAS file system only if the file system is attached to an ECS instance or an SAE application instance. For more information, see Configure NAS storage.
OSS
OSS provides the console and easy-to-use tools to allow you to manage buckets in a visualized manner. OSS is suitable for scenarios in which you need to perform more read operations than write operations, such as mounting configuration files or frontend static files. If you configure OSS storage settings when you deploy an application in the SAE console, you can access data in the OSS console. For more information, see Configure OSS storage.
You cannot use the ossfs tool in log writing scenarios. For more information, see ossfs overview.
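Because ossfs cannot be used to write logs, an application that must write files to OSS can upload them directly through the OSS SDK instead. The following Java sketch assumes the aliyun-sdk-oss dependency; the endpoint, bucket name, object key, and environment variable names are hypothetical placeholders.

```java
// Minimal sketch: upload a local file to OSS through the SDK instead of writing through an ossfs mount.
import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import java.io.File;

public class OssUploadDemo {
    public static void main(String[] args) {
        OSS ossClient = new OSSClientBuilder().build(
                "https://oss-cn-hangzhou.aliyuncs.com",    // hypothetical endpoint
                System.getenv("OSS_ACCESS_KEY_ID"),        // hypothetical credential variables
                System.getenv("OSS_ACCESS_KEY_SECRET"));
        try {
            ossClient.putObject("example-bucket", "logs/app.log", new File("/tmp/log/app.log"));
        } finally {
            ossClient.shutdown();
        }
    }
}
```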
File upload and download
If you need to download files from SAE to your local computer, you can use the webshell feature. For more information, see Use the webshell feature to upload and download files.
In addition to NAS and OSS storage configurations, you can use the ossutil tool to upload and download data. For information about how to use Alibaba Cloud OSS to upload and download logs, see Perform routine checks on applications.
Monitoring and alerting
SAE provides the built-in basic monitoring feature and Application Real-Time Monitoring Service (ARMS) business monitoring feature for Java and PHP applications. The alert management module of ARMS provides features such as alert management, alert notification, and automatic escalation. You can use the module to identify issues and resolve alerts in an efficient manner.
Basic monitoring
The basic monitoring feature can be used to monitor metrics such as CPU, load, memory, disk, network, and TCP connections. For more information, see Basic monitoring. The basic monitoring feature is provided by Alibaba Cloud CloudMonitor. You can log on to the CloudMonitor console to configure custom dashboards.
Business monitoring
By default, the ARMS business monitoring feature of Basic Edition is provided. You can view the dashboards and data about JVM monitoring, queries per second (QPS), response time (RT), thread pools, and trace analysis of applications. For more information, see Application details.
High availability
After you deploy an application to SAE, you can use the health check feature to detect issues during the graceful start and shutdown of the application. For example, you can check whether application instances and your service are running as expected, which helps you efficiently identify runtime exceptions. SAE allows you to deploy applications across multiple vSwitches to tolerate data center-level faults. You can use Application High Availability Service (AHAS) to perform throttling and degradation for Java applications to ensure their availability.
Multi-vSwitch deployment
To prevent data center-level faults, we recommend that you specify multiple vSwitches for SAE applications in your production environment. You can specify multiple vSwitches when you create an application or add a vSwitch after the application is created. When you create a vSwitch, we recommend that you reserve more than 100 available IP addresses. If IP addresses are insufficient, instances may fail to be created or auto scaling may fail. For more information, see Change a vSwitch.
Select multiple vSwitches when you create an application.
Add a vSwitch after an application is created.
Note: When you add a vSwitch for an application, make sure that you add the vSwitch to the database whitelist. For more information, see Enable access from applications to Alibaba Cloud database instances.
Graceful start and shutdown
In most cases, when you deploy an application on SAE, new instances are added and old instances are then removed. You need to perform the following operations to ensure the graceful start and shutdown of an application:
Check whether traffic can be forwarded to an instance that is newly added.
Perform graceful destruction on old instances.
SAE provides the following health check methods based on Kubernetes: application instance liveness checks (liveness configuration) and application business readiness checks (readiness configuration). To resolve the preceding issues, you can create a readiness configuration in SAE. After an instance is added by a scale-out operation, the readiness probe periodically checks whether the instance is ready. Traffic is forwarded to the new instance only after it becomes ready; if a check fails, no traffic is routed to the instance. Before SAE destroys an old instance, SAE removes the instance from the traffic source. You can also configure a shutdown script and specify a waiting period before SAE destroys the instance. For more information, see Configure health checks.
If you create a liveness configuration, the liveness probe periodically checks the liveness of containers. If a check fails, SAE automatically restarts the container. This allows you to use the liveness check feature for automatic O&M when exceptions occur. However, because the data in a container is lost after the container is restarted, you may not be able to identify the failure cause. Use the liveness check feature based on your business scenario.
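As a reference, the following Java sketch shows a minimal HTTP health endpoint that a readiness or liveness probe could call; it uses only the JDK's built-in HTTP server. The /healthz path and port 8080 are hypothetical; configure the probe in the SAE console to match whatever your application actually exposes.

```java
// Minimal sketch of a health endpoint for readiness or liveness probes.
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class HealthEndpointDemo {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/healthz", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);   // HTTP 200 means the check passes
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```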
In microservices scenarios, registry caches require that you also configure graceful shutdown for microservices when you create a readiness configuration or a liveness configuration. For more information, see Configure graceful shutdown of microservices. In the production environment, services may become unavailable for a short period of time and a large number of service monitoring errors may occur when the auto scaling feature or the upgrade and rollback feature is used. In this case, you must also configure graceful start for microservices in SAE. For more information, see Configure graceful start of microservices.
Elasticity and cost-effectiveness
SAE allows you to manually scale in and scale out applications, configure auto scaling policies, and configure scheduled application start and stop rules. SAE supports the following types of auto scaling policies: scheduled auto scaling policy, metric-based auto scaling policy, and hybrid auto scaling policy. Elasticity is a common characteristic of cloud native architectures and applications. You can configure elasticity-related settings to reduce machine costs and improve O&M efficiency.
Manual scaling
Manual scaling is suitable for manual O&M scenarios. Compared with ECS, manual scaling in SAE is based on container images. This way, you can quickly perform application scale-in and scale-out. For more information, see Manually scale instances.
Scheduled scaling
Scheduled scaling is suitable for scenarios in which traffic can be predicted. For example, in the catering and education industries, peak hours exist every morning and evening. In this case, you can specify the number of instances to run in different periods of time to make sure that the server resources match the actual business traffic. For more information, see Configure an auto scaling policy.
Metric-based scaling
Metric-based scaling is suitable for scenarios in which traffic cannot be predicted. Metrics such as CPU, memory, TCP connections, QPS, and RT are supported. For more information, see Configure an auto scaling policy.
Hybrid scaling
Hybrid scaling is suitable for scenarios in which burst traffic and periodic traffic occur at the same time. Hybrid auto scaling policies are commonly used in industries such as Internet, education, and catering. You can specify the number of instances to run for specific periods of time in a fine-grained manner.
For example, if the maximum number of instances is set to max and the minimum number of instances is set to min on weekdays, and the number of instances does not need to reach min on weekends, you can specify a different value that is smaller than min for the number of instances on weekends. For more information, see Configure an auto scaling policy.
Scheduled start and stop
You can use the scheduled start and stop feature to start and stop applications at specific points in time by namespace. For example, you can start and stop all applications in the development environment or test environment at a specified point in time. In this example, an application that can be deployed in the development environment or test environment is used from 08:00 to 20:00 each day, and is idle for the remaining time periods in the day. In this case, you can configure a scheduled start and stop rule in SAE to reduce costs. For more information, see Manage a scheduled start and stop rule.
Best practices
SAE provides best practices to help you meet various business requirements. The preceding sections describe the settings that can be configured, such as elasticity, network, storage, and access control for Alibaba Cloud databases. SAE also allows you to configure images, application acceleration, and JVM parameters. For more information, see SAE best practices.