This topic provides answers to some frequently asked questions about applications in Container Service for Kubernetes (ACK) clusters.
What do I do if image pulling is time-consuming or even fails?
How do I troubleshoot precheck failures before upgrading the CCM?
How do I create containers from private images in an ACK cluster?
Troubleshoot failures to bind a source code repository in Container Registry
Troubleshoot repository creation failures in Container Registry
How do I perform rolling updates for applications without service interruptions?
What do I do if image pulling is time-consuming or even fails?
To troubleshoot this issue, perform the following steps:
If you pull images from a repository of Container Registry, check whether the username and password that you use are valid. We recommend that you use the aliyun-acr-credential-helper component to pull images from Container Registry without a password. For more information, see Use the aliyun-acr-credential-helper component to pull images without using a secret.
Check whether the client can access the Internet. If the client cannot access the Internet, you must configure Internet access for the client.
How do I troubleshoot application issues in ACK?
In most cases, application issues in ACK arise from pods, controllers (Deployments, StatefulSets, or DaemonSets), and Services. Check whether the following types of issues exist:
Pod issues
For more information about how to troubleshoot pod issues, see Pod troubleshooting.
Controller issues
Pod issues may arise when you create controllers such as Deployments, DaemonSets, StatefulSets, and Jobs. For more information, see Pod troubleshooting.
You can check the events and logs of controllers, such as Deployments, to troubleshoot pod issues in controllers.
The following steps show how to check the events and logs of a Deployment. You can perform similar steps to check the events and logs of DaemonSets, StatefulSets, or Jobs.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane of the cluster details page, choose Workloads > Deployments.
On the Deployments page, click the name of the Deployment that you want to view. Click the Events and Logs tabs to view the events and logs of the Deployment.
If issues arise when you create a StatefulSet, see Forced Rollback.
Service issues
A Service provides load balancing across a set of pods. The following steps show how to identify the issues in a Service:
Check the endpoints of the Service.
Log on to a master node of the cluster to which the Service belongs. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the endpoints of the Service. Replace [$Service_Name] with the actual Service name:
kubectl get endpoints [$Service_Name]
Check whether the number of endpoints is the same as the number of backend pods. For example, if a Service is used to expose an application deployed by a Deployment that provisions three pods, the Service has three endpoints.
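The relationship between a Service and its endpoints can be illustrated with the following minimal sketch, in which the names, labels, and image are hypothetical. The Service selects pods by the app: nginx-demo label, so each ready pod that is created by the Deployment is added to the endpoints of the Service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo # Hypothetical name used for illustration.
spec:
  replicas: 3 # The Service is expected to have three endpoints.
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo # Pods carry this label.
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-svc
spec:
  selector:
    app: nginx-demo # Must match the pod labels, not the Deployment name.
  ports:
  - port: 80
    targetPort: 80
```

Only pods that both match the selector and pass their readiness checks appear in the output of kubectl get endpoints.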
Missing Service endpoints
If the endpoints of a Service are missing, query the selector of the Service and then use the selector to check whether backend pods are associated with the Service. Perform the following steps:
Run the kubectl get service [$Service_Name] -o yaml command to view the configuration of the Service, including its selector, cluster IP address, and port.
Run the following command to query pods that match the selector:
kubectl get pods --selector=app=[$App] -n [$Namespace]
Note: Replace [$App] with the value of the app label in the selector of the Service. Replace [$Namespace] with the namespace of the Service. If the Service belongs to the default namespace, you can leave this parameter empty.
If your application pod is returned, the Service may use a wrong port. If the Service listens on a port that is not exposed for the backend pod, the pod is not added to the endpoints of the Service. Run the following command to check whether the pod can be accessed by using the Service port:
curl [$IP]:[$Port]
Note: Replace [$IP] with the cluster IP address specified in the YAML file in Step 1. Replace [$Port] with the port specified in the YAML file in Step 1. The test method varies based on the actual environment.
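A common cause of this issue is a mismatch between the port that the Service forwards to and the port that the container actually listens on. The following minimal sketch shows the relationship between the two ports (the names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc # Hypothetical name.
spec:
  selector:
    app: web
  ports:
  - port: 80         # The port that clients use to access the Service.
    targetPort: 8080 # Must be the port on which the container listens.
```

If targetPort does not match the listening port of the container, requests that are sent to the Service cannot reach the application even when the pod itself is healthy.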
Traffic forwarding errors
If your client can access a Service and the IP addresses in the endpoints of the Service are valid, but the connection is closed immediately after the client connects to the Service, your requests may have failed to be forwarded to the backend pods. Perform the following steps to troubleshoot this issue:
Check whether the backend pods run as expected.
Identify pod issues. For more information, see Pod troubleshooting.
Check whether the IP addresses of the backend pods are accessible.
Run the following command to query the IP addresses of the backend pods:
kubectl get pods -o wide
Log on to a node and run the ping command to check whether the IP addresses of the pods are accessible.
Check whether the Service listens on the port exposed for the backend application.
For example, if port 80 is exposed for your application, you must specify port 80 as the listening port of the Service. Log on to a node and run the curl [$IP]:[$Port] command to check whether the pod can be accessed by using the Service port.
How do I manually update Helm?
Log on to your Kubernetes cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to install Tiller:
helm init --tiller-image registry.cn-hangzhou.aliyuncs.com/acs/tiller:v2.11.0 --upgrade
The address of the Tiller image can include the endpoint of the virtual private cloud (VPC) where your cluster is deployed. For example, if your cluster is deployed in the China (Hangzhou) region, you can specify the following image address: registry-vpc.cn-hangzhou.aliyuncs.com/acs/tiller:v2.11.0.
After the Tiller health check succeeds, you can run the helm version command to view the update results.
Note: The preceding command updates only Tiller, the server-side component of Helm. To update the client-side component, download the required client binary.
Download the latest client version supported by Alibaba Cloud, which is Helm client 2.11.0.
After the server-side component and client-side component of Helm are updated, run the following command to check their versions:
helm version
Expected output:
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b****", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b****", GitTreeState:"clean"}
How do I pull private images?
Run the following command to create a Secret:
kubectl create secret docker-registry regsecret --docker-server=registry-internal.cn-hangzhou.aliyuncs.com --docker-username=abc****@aliyun.com --docker-password=**** --docker-email=abc****@aliyun.com
regsecret: the name of the Secret. You can enter a custom name.
--docker-server: the address of the Docker registry.
--docker-username: the username of the Docker registry.
--docker-password: the password of the Docker registry.
--docker-email: (optional) the email address.
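The preceding command stores the credentials as a Secret of the kubernetes.io/dockerconfigjson type. Its shape is roughly as follows; the base64-encoded payload, which is generated from the credentials that you specify, is elided here:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regsecret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded credentials> # Generated by kubectl from the username and password.
```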
You can pull the private image by using one of the following methods:
Manually pull the private image
Add the Secret configurations to a YAML file.
containers:
- name: foo
  image: registry-internal.cn-hangzhou.aliyuncs.com/abc/test:1.0
imagePullSecrets:
- name: regsecret
Note: The imagePullSecrets parameter specifies the Secret that is used to pull the image. You must specify the name of the Secret that you created. In this example, the name of the Secret is regsecret. The Docker registry specified by the image parameter must be the same as the one that is specified by --docker-server.
For more information, see Use a private registry.
Automatically pull the private image without a Secret
NoteTo eliminate the need to use the Secret each time you pull the private image, you can add the Secret to the default service account of the namespace that you use. For more information, see Add ImagePullSecrets to a service account.
Run the following command to view the Secret that is used to pull the private image:
kubectl get secret regsecret
Expected output:
NAME        TYPE                             DATA   AGE
regsecret   kubernetes.io/dockerconfigjson   1      13m
In this example, the image pulling Secret is added to the default service account of the namespace that you use.
Create a file named sa.yaml and add the configurations of the default service account to the file.
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Run the following command to query the configurations of the sa.yaml file:
cat sa.yaml
Expected output:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-08-07T22:02:39Z
  name: default
  namespace: default
  resourceVersion: "243024"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default # Take note of the selfLink.
  uid: 052fb0f4-3d50-11e5-b066-42010af0****
secrets:
- name: default-token-uudge
Run the vim sa.yaml command to open the sa.yaml file. Then, delete the resourceVersion parameter and add the imagePullSecrets parameter to specify the image pulling Secret. Modify the file based on the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-08-07T22:02:39Z
  name: default
  namespace: default
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 052fb0f4-3d50-11e5-b066-42010af0****
secrets:
- name: default-token-uudge
imagePullSecrets: # Add this parameter.
- name: regsecret
Run the following command to replace the configurations of the default service account with the configurations of the sa.yaml file:
kubectl replace serviceaccount default -f ./sa.yaml
Expected output:
serviceaccount "default" replaced
Create a Tomcat application.
Create a file named tomcat.yaml and copy the following content to the file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: registry-internal.cn-hangzhou.aliyuncs.com/abc/test:1.0 # Replace the value with the address of your private image.
        ports:
        - containerPort: 8080
Run the following command to create a Tomcat application:
kubectl create -f tomcat.yaml
After the pod is started, run the following command to query the pod configurations:
kubectl get pod tomcat-**** -o yaml
Expected output:
spec:
  imagePullSecrets:
  - name: regsecret
How do I pull images from a Container Registry Enterprise Edition instance that is deployed in a region inside the Chinese mainland to an ACK cluster that is deployed in a region outside the Chinese mainland?
You must first purchase a Container Registry Enterprise Edition instance of Standard or Advanced Edition in a region in the Chinese mainland. Then, purchase a Container Registry Enterprise Edition instance of Basic Edition in a region outside the Chinese mainland.
After you complete the purchase, you can synchronize images from the Container Registry instance in the Chinese mainland to the Container Registry instance outside the Chinese mainland. For more information, see Replicate images within same account. Then, obtain the address of the image that you want to pull from the Container Registry instance outside the Chinese mainland. This way, you can pull the image to your ACK cluster and use the image to deploy an application.
If you use Container Registry Personal Edition, the image synchronization process is time-consuming. If you use a self-managed image repository, you must purchase a Global Accelerator (GA) instance.
We recommend that you use Container Registry Enterprise Edition because the total cost of a self-managed repository and a GA instance is higher than that of Container Registry Enterprise Edition.
For more information about the billing of Container Registry Enterprise Edition, see Billing rules.
How do I perform rolling updates for applications without service interruptions?
When you deploy a new application version, 5XX errors may persist for a short period of time after the old application version is deleted. These errors occur because it takes several seconds to synchronize pod updates to Classic Load Balancer (CLB) instances during a rolling update. To resolve this issue, you can configure connection draining. This way, you can perform rolling updates for applications without service interruptions.
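Connection draining is configured on the CLB listener. On the application side, you can additionally keep capacity available during the update and give the load balancer time to deregister a pod before it terminates. The following Deployment is a minimal sketch of this approach; the name, image, probe path, and durations are illustrative assumptions, not values from this topic:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web # Hypothetical name.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0 # Keep the full replica count during the update.
      maxSurge: 1       # Start one extra pod before removing an old one.
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
        readinessProbe: # New pods receive traffic only after they pass this check.
          httpGet:
            path: /
            port: 80
        lifecycle:
          preStop: # Delay shutdown so in-flight requests can finish draining.
            exec:
              command: ["sleep", "15"]
      terminationGracePeriodSeconds: 30
```

The preStop sleep must be shorter than terminationGracePeriodSeconds, and its length should roughly match the time that the load balancer needs to deregister a backend.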
How do I obtain images?
You can use Container Registry to build and pull images. For more information, see Manage images.
How do I restart a container?
You cannot restart individual containers directly. To restart the containers in a pod, perform the following steps.
Run the following command to query the status of pods in your cluster. You can select a pod from the output.
kubectl get pods
Run the following command to delete the pod that you select:
kubectl delete pod <pod-name>
After you delete the pod, the corresponding controller, such as a Deployment or DaemonSet, automatically creates a new pod. This way, the containers in the pod are restarted.
Note: If you want to manage or update containers in pods in the production environment, we recommend that you use controllers, such as ReplicaSets and Deployments. To ensure a consistent and normal cluster state, we recommend that you do not perform manual operations on pods.
Run the following command to check whether the new pod is in the Running state:
kubectl get pods
How do I change the namespace of a Deployment?
When you migrate a Deployment from one namespace to another namespace, you need to change the namespace to which the Deployment belongs. In this case, you also need to manually change the namespace of the persistent volumes (PVs), ConfigMaps, Secrets, and other dependencies used by the Deployment.
Run the following kubectl get command to query the YAML template of the Deployment:
kubectl get deploy <deployment-name> -n <old-namespace> -o yaml > deployment.yaml
Modify the deployment.yaml file by changing the value of the namespace parameter based on your requirements. Save the change and exit.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  generation: 1
  labels:
    app: nginx
  name: nginx-deployment
  namespace: new-namespace # Specify the new namespace.
  ... ...
Run the following kubectl apply command to deploy the Deployment in the new namespace:
kubectl apply -f deployment.yaml
Run the following kubectl get command to query the Deployment in the new namespace:
kubectl get deploy -n new-namespace
How do I expose pod information to running containers?
ACK is fully compatible with open source Kubernetes and complies with open source specifications. You can expose pod information to running containers in one of the following ways:
Environment variables: Expose pod information to containers by setting environment variables.
Volume files: Mount pod information to containers in the form of files.
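Both ways use the Kubernetes Downward API. The following pod is a minimal sketch that exposes the pod name as an environment variable and the pod labels as a mounted file; the pod name, labels, and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo # Hypothetical name.
  labels:
    app: demo
spec:
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "echo $POD_NAME; cat /etc/podinfo/labels; sleep 3600"]
    env:
    - name: POD_NAME # Exposed as an environment variable.
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI: # Exposed as volume files.
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```

Unlike environment variables, files mounted through a downwardAPI volume are updated when the exposed metadata, such as labels, changes while the pod is running.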