By Alwyn Botha, Alibaba Cloud Community Blog author.
In this tutorial, we're going to use Jobs as hooks because their start and finish times show exactly where in the chart life cycle each hook gets executed.
This article is part of a three-part tutorial series. You can check out the first two articles of this tutorial series here: Helm Chart and Template Basics - Part 1 and Helm Charts and Template Basics - Part 2.
But, before we get into things, it's probably important to first explain what exactly a hook is. Well, the Helm documentation does a pretty good job at this:
Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release's life cycle.
For example, you can use hooks to:

- load a ConfigMap or Secret during install, before any other charts are loaded,
- execute a Job to back up a database before installing a new chart, and then execute a second Job after the upgrade to restore the data,
- run a Job before deleting a release, to take a service out of rotation gracefully.
In the remainder of this tutorial, we will first look at pre-install and post-install hooks for both Pods and Jobs, then move on to hook weights and hooks for custom resource definitions, and finally cover chart tests. All of this will help us understand the chart life cycle.
There are nine places where you may place a hook in a release's life cycle: pre-install, post-install, pre-delete, post-delete, pre-upgrade, post-upgrade, pre-rollback, post-rollback, and crd-install. To keep things simple, we will focus on just two of them: pre-install and post-install.
The first thing we need is a chart. This time we will use nginx. As the first step, you'll want to create a chart directory structure and content by running the following command.
helm create nginx-helm
Now, we'll need two templates for the hooks. Hook templates live in the templates directory in just the same way as all other templates. Hooks look like ordinary templates, but they carry a helm.sh/hook annotation that declares them as hooks.
Hooks are declared as an annotation in the metadata section of a manifest.
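In its minimal form, the declaration is just this fragment (the same pre-install annotation we use in the full manifests below):

```yaml
# Any manifest becomes a hook once it carries this annotation
# in its metadata section.
metadata:
  annotations:
    "helm.sh/hook": "pre-install"
```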
Now, let's see how the pre-install and post-install hooks are declared below. To start, create these two YAML files in the .\nginx-helm\templates
directory. You can do so with the commands below:
nano my-pre-install-HookPod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pre-install-hook-pod
  annotations:
    "helm.sh/hook": "pre-install"
spec:
  containers:
  - name: hook1-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The pre-install hook Pod is running && sleep 10']
  restartPolicy: Never
  terminationGracePeriodSeconds: 0
nano my-post-install-HookPod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: post-install-hook-pod
  annotations:
    "helm.sh/hook": "post-install"
spec:
  containers:
  - name: hook1-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo post-install hook Pod is running && sleep 10']
  restartPolicy: Never
  terminationGracePeriodSeconds: 0
Now we're ready to do our first helm install that uses hooks. You can use the command below to do that.
helm install .\nginx-helm\ --name mynginx1
Next, we want to investigate the start and end times of those hooks. Note that the status
command below shows our nginx Pod, as well as its service and deployment information, but it doesn't show the hook Pods.
helm status mynginx1
LAST DEPLOYED: Tue Feb 19 08:45:49 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME                 TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)  AGE
mynginx1-nginx-helm  ClusterIP  10.97.193.185  <none>       80/TCP   8s

==> v1/Deployment
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mynginx1-nginx-helm  1        1        1           0          8s

==> v1/Pod(related)
NAME                                  READY  STATUS   RESTARTS  AGE
mynginx1-nginx-helm-5f54866fc4-lnvx7  0/1    Running  0         8s
If we run the get pods
command below, we'll see the two hook Pods that the status
command did not show. Once a hook has done its work, it is no longer tracked as part of its parent release; this output is proof of that. When we delete a release, we have to delete the resources its hooks created separately.
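As an aside, if you prefer not to clean hook resources up by hand, Helm supports a helm.sh/hook-delete-policy annotation. The fragment below is a hedged sketch based on the Helm documentation; valid policy values include hook-succeeded, hook-failed, and before-hook-creation:

```yaml
metadata:
  annotations:
    "helm.sh/hook": "pre-install"
    # Ask Helm to delete this hook resource once it has run successfully:
    "helm.sh/hook-delete-policy": "hook-succeeded"
```

We will come back to this annotation later when we look at chart tests.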
kubectl get pods
NAME                                  READY  STATUS     RESTARTS  AGE
mynginx1-nginx-helm-5f54866fc4-lnvx7  1/1    Running    0         115s
post-install-hook-pod                 0/1    Completed  0         115s
pre-install-hook-pod                  0/1    Completed  0         115s
Next, the Started:
and Finished:
times are given below:
PS C:\k8> kubectl describe pod/pre-install-hook-pod | grep -E 'Anno|Started:|Finished:'
Annotations: helm.sh/hook: pre-install
Started: Tue, 19 Feb 2019 08:45:50 +0200
Finished: Tue, 19 Feb 2019 08:46:00 +0200
PS C:\k8> kubectl describe pod/post-install-hook-pod | grep -E 'Anno|Started:|Finished:'
Annotations: helm.sh/hook: post-install
Started: Tue, 19 Feb 2019 08:45:50 +0200
Finished: Tue, 19 Feb 2019 08:46:00 +0200
PS C:\k8> kubectl describe pod/mynginx1-nginx-helm-5f54866fc4-lnvx7 | grep -E 'Anno|Started:|Finished:'
Annotations: <none>
Started: Tue, 19 Feb 2019 08:45:50 +0200
Again, we did this because we wanted to confirm we understand the sequence involved with pre-install and post-install. In reality, the sequence should have been:

1. the pre-install hook Pod runs to completion,
2. the chart's resources (the nginx Deployment) are created,
3. the post-install hook Pod runs.
However, from what we can see, they all seem to start at the exact same time, so it's hard to tell the exact order. This is, in part, because Kubernetes doesn't show millisecond resolution start and end times. If this wasn't the case, we would have been able to see the correct sequence.
It is a feature of Pod hooks that they do not block subsequent templates; only Job hooks block. So what happened here was the following: Helm created the pre-install hook Pod, immediately went on to load the chart's other templates, and then created the post-install hook Pod without waiting for anything to finish. That is why all three Pods started within the same second.
So, as you can see, we need to use Jobs to test and prove that the sequence we're assuming here is actually correct. An important lesson here is that Pod hooks do not block, so the software deployed in your release cannot rely on their output being ready. If you need that guarantee, use Jobs, which block the rest of the install until they are finished.
In other words, we saw this non-blocking situation in action here. In the next leg of this tutorial, we will demonstrate Job blocking. To finish off this release, run the following delete
command:
helm delete mynginx1
release "mynginx1" deleted
Now you can delete its hook Pods independently with the following delete
commands:
kubectl delete pod/pre-install-hook-pod
pod "pre-install-hook-pod" deleted
kubectl delete pod/post-install-hook-pod
pod "post-install-hook-pod" deleted
In this section, we will investigate the blocking nature of Kubernetes Jobs used as hooks. In plain terms, that means we will replace the Pods above with Kubernetes Jobs.
To do this, you'll need to delete my-pre-install-HookPod.yaml
and my-post-install-HookPod.yaml
from the .\nginx-helm\templates
directory. Then, create these two YAML files in the .\nginx-helm\templates
directory:
Note that the only work these two Jobs do is sleep: the pre-install Job sleeps for 5 seconds and the post-install Job sleeps for 10. Normally, you would put your actual pre-install and post-install tasks in these Jobs. Here, the sleeps give us predictable timings: if the Jobs block as expected, the install should take around 15 seconds in total. This is just another way to check that the right hook got executed at the right place in the release deployment.
nano my-pre-install-job-hook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job
  annotations:
    "helm.sh/hook": "pre-install"
spec:
  template:
    spec:
      containers:
      - name: pre-install
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo pre-install Job Pod is Running ; sleep 5']
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 3
  completions: 1
  parallelism: 1
nano my-post-install-job-hook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: post-install-job
  annotations:
    "helm.sh/hook": "post-install"
spec:
  template:
    spec:
      containers:
      - name: post-install
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo post-install Pod is Running ; sleep 10']
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 3
  completions: 1
  parallelism: 1
Now run the installation. If everything works correctly, the command will take around 15 seconds. That is a good sign, because it means the two hook Jobs run sequentially and block. Below is the install
command that you'll need to run for this.
helm install .\nginx-helm\ --name mynginx2
Now, let's investigate what happened.
kubectl describe pod/pre-install-job-lb4jz | grep -E 'Anno|Started:|Finished:'
Annotations: <none>
Started: Mon, 18 Feb 2019 11:27:50 +0200
Finished: Mon, 18 Feb 2019 11:27:55 +0200
kubectl describe pod/nginx13-nginx-helm-75b5fb8c8c-hhbnq | grep -E 'Anno|Started:|Finished:'
Annotations: <none>
Started: Mon, 18 Feb 2019 11:27:56 +0200
kubectl describe pod/post-install-job-27srg | grep -E 'Anno|Started:|Finished:'
Annotations: <none>
Started: Mon, 18 Feb 2019 11:27:56 +0200
Finished: Mon, 18 Feb 2019 11:28:06 +0200
From the above output, we know the following things:

- the pre-install Job Pod started at 11:27:50 and finished at 11:27:55,
- the nginx Pod only started at 11:27:56, after the pre-install Job had finished,
- the post-install Job Pod started at 11:27:56 and finished at 11:28:06.

So, from the above investigation, we've proved that the sequence described at the beginning of this blog is indeed correct. More specifically, we discovered that the pre-install Job blocks the rest of the install until it completes, and the post-install Job only runs once the chart's resources have been created.
Admittedly, nginx (and Apache) start in milliseconds, so you cannot really see that nginx blocks the post-install hook, but it does. If you deploy a more complex or slower Pod, you will see that its creation blocks the post-install hook.
Now that the demo is done, let's delete the release:
helm delete mynginx2
release "mynginx2" deleted
List the jobs with the get
command:
kubectl get jobs
NAME              COMPLETIONS  DURATION  AGE
post-install-job  1/1          11s       109s
pre-install-job   1/1          6s        115s
And now let's delete the Jobs independently:
kubectl delete job/pre-install-job
job.batch "pre-install-job" deleted
kubectl delete job/post-install-job
job.batch "post-install-job" deleted
Also, you'll want to delete my-pre-install-job-hook.yaml
and my-post-install-job-hook.yaml
from the .\nginx-helm\templates
directory.
We can use hook weights to specify the sequence in which hooks run. Negative weights have higher priority, just like with the Linux nice command. Below we have three Job hooks, with weights of -2, 3, and 5. This will demonstrate three things:

- hooks with negative weights run before hooks with positive weights,
- hooks run in ascending weight order: -2, then 3, then 5,
- each hook must complete before the next one starts.

Overall, these hooks will block the nginx Pod from starting until all three have completed successfully.
nano my-pre-install-job-hook-Job-2.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job-2
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "-2"
spec:
  template:
    spec:
      containers:
      - name: pre-install
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo pre-install Job Pod is Running ; sleep 2']
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 3
  completions: 1
  parallelism: 1
nano my-pre-install-job-hook-Job3.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job3
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "3"
spec:
  template:
    spec:
      containers:
      - name: pre-install
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo pre-install Job Pod is Running ; sleep 3']
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 3
  completions: 1
  parallelism: 1
nano my-pre-install-job-hook-Job5.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job5
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "5"
spec:
  template:
    spec:
      containers:
      - name: pre-install
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo pre-install Job Pod is Running ; sleep 5']
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 3
  completions: 1
  parallelism: 1
Now, you'll want to install this new release:
helm install .\nginx-helm\ --name mynginx3
As before, if this command takes 10 seconds, then everything's fine. The command below does not really show the running times accurately.
kubectl get jobs
NAME               COMPLETIONS  DURATION  AGE
pre-install-job-2  1/1          5s        2m39s
pre-install-job3   1/1          4s        2m34s
pre-install-job5   1/1          7s        2m30s
Below we can see that all three pre-install jobs are completed, and now only nginx is running.
kubectl get pods
NAME                                 READY  STATUS     RESTARTS  AGE
nginx16-nginx-helm-699f69f6dd-shkf5  1/1    Running    0         2m23s
pre-install-job-2-g8jqf              0/1    Completed  0         2m39s
pre-install-job3-tncf2               0/1    Completed  0         2m34s
pre-install-job5-pccz6               0/1    Completed  0         2m30s
So, the question remains: did the hook Jobs do the following:

1. run pre-install-job-2 (weight -2) first,
2. run pre-install-job3 (weight 3) second,
3. run pre-install-job5 (weight 5) third,
4. and only then allow the nginx Pod to start?
Well, let's look at the following code.
kubectl describe pod/pre-install-job-2-g8jqf | grep -E 'Anno|Started:|Finished:'
Annotations: <none>
Started: Mon, 18 Feb 2019 12:57:35 +0200
Finished: Mon, 18 Feb 2019 12:57:37 +0200
kubectl describe pod/pre-install-job3-tncf2 | grep -E 'Anno|Started:|Finished:'
Annotations: <none>
Started: Mon, 18 Feb 2019 12:57:39 +0200
Finished: Mon, 18 Feb 2019 12:57:42 +0200
kubectl describe pod/pre-install-job5-pccz6 | grep -E 'Anno|Started:|Finished:'
Annotations: <none>
Started: Mon, 18 Feb 2019 12:57:43 +0200
Finished: Mon, 18 Feb 2019 12:57:48 +0200
kubectl describe pod/nginx16-nginx-helm-699f69f6dd-shkf5 | grep -E 'Anno|Started:|Finished:'
Annotations: <none>
Started: Mon, 18 Feb 2019 12:57:50 +0200
The answer to the above question is yes, and to be clear, it's yes to all four parts. Work through the start and finish times above and verify each part yourself; in particular, note that each Job only starts after the previous one has finished.
Since you now understand and have experience with hook weight sequences, we can go ahead and delete the demo.
helm delete mynginx3
release "mynginx3" deleted
You'll want to delete the three Jobs, too:
kubectl delete job/pre-install-job-2
kubectl delete job/pre-install-job3
kubectl delete job/pre-install-job5
job.batch "pre-install-job5" deleted
job.batch "pre-install-job3" deleted
job.batch "pre-install-job-2" deleted
Before we get too deep into things, let's go over some basic concepts. The following explanation comes from the Helm documentation:
Custom Resource Definitions (CRDs) are a special kind in Kubernetes. They provide a way to define other kinds. On occasion, a chart needs to both define a kind and then use it. This is done with the crd-install hook. The crd-install hook is executed very early during an installation, before the rest of the manifests are verified. CRDs can be annotated with this hook so that they are installed before any instances of that CRD are referenced. In this way, when verification happens later, the CRDs will be available.
Below are the two templates needed to demonstrate the above.
nano demo-crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: democrds.demogroup.com
  annotations:
    "helm.sh/hook": crd-install
spec:
  group: demogroup.com
  version: v1
  scope: Namespaced
  names:
    plural: democrds
    singular: democrd
    kind: DemoCrd
    shortNames:
    - democrd
nano demo-crd-instance.yaml
apiVersion: demogroup.com/v1
kind: DemoCrd
metadata:
  name: mydemo-cred-test
Add these two templates to your templates directory and install a release to check that you understand the theory above. When you are done, delete what needs to be deleted.
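A quick verification could look like the following. These commands assume the two templates above are in your chart; the release name mycrd1 is just an example:

```shell
helm install .\nginx-helm\ --name mycrd1

# The CRD itself should now exist in the cluster...
kubectl get crd democrds.demogroup.com

# ...and so should the custom resource instance built from it.
kubectl get democrds

# Clean up. A CRD installed via the crd-install hook is not removed
# when the release is deleted, so delete it separately.
helm delete mycrd1
kubectl delete crd democrds.demogroup.com
```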
You can add test templates to your chart. These templates can test any functionality of the chart that its author considers necessary.
The .\nginx-helm\templates\test\
directory contains test-connection.yaml
, shown below. Take special note of the last four lines. It uses BusyBox and Wget to test that nginx can be reached at its service port.
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "nginx-helm.fullname" . }}-test-connection"
  labels:
    app.kubernetes.io/name: {{ include "nginx-helm.name" . }}
    helm.sh/chart: {{ include "nginx-helm.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "nginx-helm.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never
Install a new release so that we can investigate how this test works by using the following command.
helm install .\nginx-helm\ --name mynginx4
Next, to run the test, use the following command:
helm test mynginx4
RUNNING: mynginx4-nginx-helm-test-connection
PASSED: mynginx4-nginx-helm-test-connection
It will take several seconds to run. This is because the template above does not contain imagePullPolicy: IfNotPresent
, so it needs to fetch BusyBox from the Internet.
So, very simply, the test works. However, if you run it again, you will get the following error:
helm test mynginx4
RUNNING: mynginx4-nginx-helm-test-connection
ERROR: pods "mynginx4-nginx-helm-test-connection" already exists
Error: 1 test(s) failed
To fix this, we need to add "helm.sh/hook-delete-policy": hook-succeeded
to the test manifest, as shown below.
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "nginx-helm.fullname" . }}-test-connection"
  labels:
    app.kubernetes.io/name: {{ include "nginx-helm.name" . }}
    helm.sh/chart: {{ include "nginx-helm.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    "helm.sh/hook": test-success
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  containers:
    - name: wget
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ['wget']
      args: ['{{ include "nginx-helm.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never
Make the change above. Note that this does not remove the existing test-connection Pod; delete it with the following command:
kubectl delete pods/mynginx4-nginx-helm-test-connection
pod "mynginx4-nginx-helm-test-connection" deleted
Next, we can delete this release with the following command:
helm delete mynginx4
release "mynginx4" deleted
You'll want to install a new release with changes above:
helm install .\nginx-helm\ --name mynginx5
Now, run the test again:
helm test mynginx5
RUNNING: mynginx5-nginx-helm-test-connection
PASSED: mynginx5-nginx-helm-test-connection
This time the test runs successfully, and quickly, because it uses the BusyBox image already cached on the node. However, if you run kubectl get pods
, you'll see the test-connection Pod still exists. Notably, "helm.sh/hook-delete-policy": hook-succeeded
does not work on these test Pods.
kubectl get pods
NAME                                  READY  STATUS     RESTARTS  AGE
mynginx8-nginx-helm-8569ffd5f7-v8j25  1/1    Running    0         21s
mynginx8-nginx-helm-test-connection   0/1    Completed  0         14s
You have to delete it manually:
kubectl delete pods/mynginx5-nginx-helm-test-connection
pod "mynginx5-nginx-helm-test-connection" deleted
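Alternatively, Helm 2's test command has a --cleanup flag that deletes the test Pods for you once the run is over (check helm test --help on your version to confirm it is available):

```shell
helm test mynginx5 --cleanup
```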
We can also delete this release:
helm delete mynginx5
release "mynginx5" deleted
With that, this tutorial, part three of a three-part series, has shown you how to use the different hooks and how to define as many tests as you need for your Helm charts. This concludes my three-part series on Helm charts and templates. I hope these tutorials have given you a strong, solid understanding of Helm.
You can read more in Helm's official documentation.