By Alwyn Botha, Alibaba Cloud Community Blog author.
App management is a challenging aspect of Kubernetes, and the Helm project greatly simplifies this by providing a uniform software packaging method which supports version control. Helm installs and manages packages (called charts in Helm) for Kubernetes, just as yum and apt do.
In this tutorial we are going to let Helm create a basic chart for us. This tutorial assumes you have at least a beginner's understanding of what Helm is. If you are not familiar with it, I suggest you go through this guide before proceeding with the article: https://www.alibabacloud.com/help/doc-detail/86511.htm
We will then gradually make changes to learn how the values file and the template parts work together.
It is easier to work from such a basic working chart than to start from nothing.
From https://docs.helm.sh/using_helm/:
A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
helm create myhelm1
Creating myhelm1
This creates a complete working chart, with all its required files, in the myhelm1 directory.
myhelm1/
|
|- .helmignore   # Contains patterns for files to ignore when packaging Helm charts.
|
|- Chart.yaml    # Meta information about your chart
|
|- values.yaml   # The default values for your templates
|
|- charts/       # Charts that this chart depends on: dependencies
|
|- templates/    # The template files
You will be exposed to some of these files throughout this tutorial, each one only when we need to learn about it.
The purpose here is to use a chart as soon as possible to create a running instance. Then we investigate what the chart created and how it did it.
First up is the values.yaml file. It contains our default values for the Kubernetes objects we want to create.
Near the top we see it uses nginx - a 55 MB download. I prefer quick actions during tutorials, so we will use busybox instead - a 650 KB download.
Original values.yaml
nano ./myhelm1/values.yaml
# Default values for myhelm1.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent
Change values.yaml at the top to use busybox as shown below. Note that the tag changes as well.
nano ./myhelm1/values.yaml
# Default values for myhelm1.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
  repository: radial/busyboxplus
  tag: base
  pullPolicy: IfNotPresent
Next is the deployment.yaml file.
This is a deployment like any other you are used to in Kubernetes. The major difference is that it gets most of its field values from the values file we just edited.
Edit your deployment.yaml file around line 27 ... add the command. We are using the busybox image; without a command or program to run, our Pods would exit immediately after being created. The command lets each busybox Pod sleep for 60 seconds.
( You can see in the template extract below how values from values.yaml get pulled in. We will get to the syntax later - for now we focus on the big picture. )
nano ./myhelm1/templates/deployment.yaml
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      command: ['sh', '-c', 'sleep 60']
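Before installing, it is worth knowing that helm lint examines a chart for possible problems; it is a quick way to catch YAML typos in values.yaml or in the templates after an edit like this:
helm lint ./myhelm1/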
Now we are ready to let Helm install our edited chart.
Run helm install ./myhelm1/ and investigate the output.
helm install ./myhelm1/
NAME: loopy-otter
LAST DEPLOYED: Thu Feb 14 08:48:42 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
loopy-otter-myhelm1 ClusterIP 10.109.163.87 <none> 80/TCP 0s
==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
loopy-otter-myhelm1 1 0 0 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
loopy-otter-myhelm1-67b67bf4c8-tsdcq 0/1 Pending 0 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=myhelm1,app.kubernetes.io/instance=loopy-otter" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
Helm auto-generates a release name for you: NAME: loopy-otter
Yours will be different. I hate those silly names. We will use our own names later.
We see that a Service, a Deployment and a Pod got created.
Roughly speaking Helm read all the .yaml templates in the templates directory, then interpreted these templates by pulling in values from the values.yaml file.
The notes are for the original nginx application. They are totally wrong for our busybox application.
Those notes are from NOTES.txt, another template file.
A few seconds later we will see our Pod running.
kubectl get pods
NAME READY STATUS RESTARTS AGE
loopy-otter-myhelm1-67b67bf4c8-tsdcq 0/1 Running 0 13s
The big-picture overview demo is done. Use helm delete to delete our first release.
From : https://docs.helm.sh/using_helm/
A Release is an instance of a chart running in a Kubernetes cluster.
helm delete loopy-otter
release "loopy-otter" deleted
Now edit the .helmignore file and add NOTES.txt to the bottom.
.helmignore contains a list of filenames and filename patterns we want Helm to ignore.
nano ./myhelm1/.helmignore
NOTES.txt
If you run the install again you will see those notes are no longer shown. ( Later we will use such notes, but here and now that file is of no use to us. )
helm install .\myhelm1\ --name test1
NAME: test1
LAST DEPLOYED: Thu Feb 14 08:56:10 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test1-myhelm1 ClusterIP 10.96.102.116 <none> 80/TCP 0s
==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
test1-myhelm1 1 0 0 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
test1-myhelm1-6f77bf4459-9nxpz 0/1 ContainerCreating 0 0s
Delete our test1 release.
helm delete test1
release "test1" deleted
We use --dry-run and --debug to investigate how Helm interprets our template and YAML files in our charts.
This way we do not pollute our Kubernetes node with several unneeded objects.
Let's try it.
helm install .\myhelm1\ --name test1 --dry-run --debug
[debug] Created tunnel using local port: '49958'
[debug] SERVER: "127.0.0.1:49958"
[debug] Original chart version: ""
[debug] CHART PATH: C:\k8\myhelm1
Error: a release named test1 already exists.
Run: helm ls --all test1; to check the status of the release
Or run: helm del --purge test1; to delete it
As you can see, a release name may exist only once - even a deleted release keeps its name until it is purged.
Check the status of the release ( as the message suggests ):
helm ls --all test1
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
test1 1 Thu Feb 14 08:56:10 2019 DELETED myhelm1-0.1.0 1.0 default
We just deleted it; its status is DELETED, but the name is still registered.
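If we really wanted to reuse the name test1, we would first have to purge the deleted release, exactly as the error message suggested:
helm del --purge test1
In this tutorial we simply move on to new names instead.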
To test debug we need another release name: we use test2:
helm install .\myhelm1\ --name test2 --dry-run --debug
[debug] Created tunnel using local port: '49970'
[debug] SERVER: "127.0.0.1:49970"
[debug] Original chart version: ""
[debug] CHART PATH: C:\k8\myhelm1
NAME: test2
REVISION: 1
RELEASED: Thu Feb 14 08:59:22 2019
CHART: myhelm1-0.1.0
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: radial/busyboxplus
  tag: base
ingress:
  annotations: {}
  enabled: false
  hosts:
  - chart-example.local
  paths: []
  tls: []
nameOverride: ""
nodeSelector: {}
replicaCount: 1
resources: {}
service:
  port: 80
  type: ClusterIP
tolerations: []
HOOKS:
---
# test2-myhelm1-test-connection
apiVersion: v1
kind: Pod
metadata:
  name: "test2-myhelm1-test-connection"
  labels:
    app.kubernetes.io/name: myhelm1
    helm.sh/chart: myhelm1-0.1.0
    app.kubernetes.io/instance: test2
    app.kubernetes.io/managed-by: Tiller
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['test2-myhelm1:80']
  restartPolicy: Never
MANIFEST:
---
# Source: myhelm1/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: test2-myhelm1
  labels:
    app.kubernetes.io/name: myhelm1
    helm.sh/chart: myhelm1-0.1.0
    app.kubernetes.io/instance: test2
    app.kubernetes.io/managed-by: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: myhelm1
    app.kubernetes.io/instance: test2
---
# Source: myhelm1/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test2-myhelm1
  labels:
    app.kubernetes.io/name: myhelm1
    helm.sh/chart: myhelm1-0.1.0
    app.kubernetes.io/instance: test2
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: myhelm1
      app.kubernetes.io/instance: test2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: myhelm1
        app.kubernetes.io/instance: test2
    spec:
      containers:
        - name: myhelm1
          image: "radial/busyboxplus:base"
          imagePullPolicy: IfNotPresent
          command: ['sh', '-c', 'sleep 60']
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
Very helpful, but too much information if we want to repeatedly edit and install our chart.
I will not attempt to digest it all right now; let us reduce the output first.
Under HOOKS there is a test connection. That was useful to test the original nginx. We do not need it.
Around 20 lines later we find # Source: myhelm1/templates/service.yaml ... a kind: Service. We do not need that either - we only want a running Pod.
Easy to fix: just edit .helmignore and add these two file names to the bottom.
nano ./myhelm1/.helmignore
test-connection.yaml
service.yaml
Our busybox Pod does not need ports or livenessProbes.
Delete lines 29 to 42 from deployment.yaml
nano ./myhelm1/templates/deployment.yaml
ports:
  - name: http
    containerPort: 80
    protocol: TCP
livenessProbe:
  httpGet:
    path: /
    port: http
readinessProbe:
  httpGet:
    path: /
    port: http
resources:
  {}
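After that deletion, the containers section of your template should look like this:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      command: ['sh', '-c', 'sleep 60']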
The labels below add no value to this tutorial, so they have been removed from the templates; they will no longer appear in the output of the helm install commands below.
labels:
  app.kubernetes.io/name: myhelm1
  helm.sh/chart: myhelm1-0.1.0
  app.kubernetes.io/instance: test4
  app.kubernetes.io/managed-by: Tiller

selector:
  matchLabels:
    app.kubernetes.io/name: myhelm1
    app.kubernetes.io/instance: test4

metadata:
  labels:
    app.kubernetes.io/name: myhelm1
    app.kubernetes.io/instance: test4
Let's redo our dry run.
helm install .\myhelm1\ --name test2 --dry-run --debug
[debug] Created tunnel using local port: '49976'
[debug] SERVER: "127.0.0.1:49976"
[debug] Original chart version: ""
[debug] CHART PATH: C:\k8\myhelm1
NAME: test2
REVISION: 1
RELEASED: Thu Feb 14 09:09:55 2019
CHART: myhelm1-0.1.0
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: radial/busyboxplus
  tag: base
ingress:
  annotations: {}
  enabled: false
  hosts:
  - chart-example.local
  paths: []
  tls: []
nameOverride: ""
nodeSelector: {}
replicaCount: 1
resources: {}
service:
  port: 80
  type: ClusterIP
tolerations: []
HOOKS:
MANIFEST:
---
# Source: myhelm1/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test2-myhelm1
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: myhelm1
          image: "radial/busyboxplus:base"
          imagePullPolicy: IfNotPresent
          command: ['sh', '-c', 'sleep 60']
Considerably shorter and worthy of explanation: you can repeatedly make changes to your values and templates and test them via --dry-run --debug. It only shows what would happen, without doing it. Very useful: debug a helm install BEFORE it is done.
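( As an aside: Helm 2, which this tutorial uses, also has a helm template command that renders the chart templates purely locally, without contacting the cluster at all. It prints the rendered manifests to the screen and is another way to debug templates: )
helm template ./myhelm1/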
We are happy with our debug output, let us run the install.
helm install .\myhelm1\ --name test2
NAME: test2
LAST DEPLOYED: Thu Feb 14 09:12:01 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
test2-myhelm1 1 0 0 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
test2-myhelm1-5bd9bb65c7-6pr4q 0/1 ContainerCreating 0 0s
As expected, a deployment and its Pod. A few seconds later the Pod is running.
kubectl get pods
NAME READY STATUS RESTARTS AGE
test2-myhelm1-5bd9bb65c7-6pr4q 1/1 Running 0 10s
helm delete test2
release "test2" deleted
Values in values.yaml replace their placeholders in the template files.
Template files can also get their values from the user. Users pass values to the software they install via the --set flag on the install command.
This part of the tutorial demonstrates passing an imagePullPolicy on the command line.
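As a quick aside, --set has a small syntax of its own. These examples are illustrative; only some of them are used in this tutorial:
--set replicaCount=3                 # simple top-level value
--set image.tag=base                 # nested value via a dotted path
--set image.tag=base,replicaCount=3  # several values in one flag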
No edit needed, just observe the last line of values file extract below.
The default values file must be named values.yaml.
nano ./myhelm1/values.yaml
# Default values for myhelm1.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
  repository: radial/busyboxplus
  tag: base
  pullPolicy: IfNotPresent
Now observe where in the template that gets used. ( Around line 22 - 25 )
nano ./myhelm1/templates/deployment.yaml
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
.Values.image.pullPolicy gets the value from
image:
  pullPolicy: IfNotPresent
We used pullPolicy: IfNotPresent up till now in this tutorial. ( You may want to page up and see that is the case everywhere. )
Assume for this test run we do NOT want the image pulled from the repository. ( imagePullPolicy: Never )
From Kubernetes docs:
imagePullPolicy: Never: the image is assumed to exist locally. No attempt is made to pull the image.
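For Never to work, the image must already exist on the node. On a single-node development setup you could pre-pull it yourself; this example assumes Docker is the container runtime:
docker pull radial/busyboxplus:base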
See in the dry run command below how we specify the policy via --set.
helm install .\myhelm1\ --set imagePullPolicy=Never --name test3 --dry-run --debug
[debug] Created tunnel using local port: '50101'
[debug] SERVER: "127.0.0.1:50101"
[debug] Original chart version: ""
[debug] CHART PATH: C:\k8\myhelm1
NAME: test3
REVISION: 1
RELEASED: Thu Feb 14 10:10:37 2019
CHART: myhelm1-0.1.0
USER-SUPPLIED VALUES:
imagePullPolicy: Never
COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: radial/busyboxplus
  tag: base
imagePullPolicy: Never
ingress:
  annotations: {}
  enabled: false
  hosts:
  - chart-example.local
  paths: []
  tls: []
nameOverride: ""
nodeSelector: {}
replicaCount: 1
resources: {}
service:
  port: 80
  type: ClusterIP
tolerations: []
HOOKS:
MANIFEST:
---
# Source: myhelm1/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test3-myhelm1
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: myhelm1
          image: "radial/busyboxplus:base"
          imagePullPolicy: IfNotPresent
          command: ['sh', '-c', 'sleep 60']
The USER-SUPPLIED VALUES section seems correct: imagePullPolicy: Never
The COMPUTED VALUES indicate we have a problem:
image:
  pullPolicy: IfNotPresent
  tag: base
imagePullPolicy: Never
Our --set policy does not replace the image pull policy: the two have different names and live at different levels in the YAML.
In the deployment we still see imagePullPolicy: IfNotPresent - the override was not successful.
Let's fix that: see attempt two below:
helm install .\myhelm1\ --set image.PullPolicy=Never --name test3 --dry-run --debug
[debug] Created tunnel using local port: '50107'
[debug] SERVER: "127.0.0.1:50107"
[debug] Original chart version: ""
[debug] CHART PATH: C:\k8\myhelm1
NAME: test3
REVISION: 1
RELEASED: Thu Feb 14 10:14:11 2019
CHART: myhelm1-0.1.0
USER-SUPPLIED VALUES:
image:
  PullPolicy: Never < - - - - - -
COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  PullPolicy: Never < - - - - - -
  pullPolicy: IfNotPresent < - - - - - -
  repository: radial/busyboxplus
Nearly there but still wrong. We now have two policies spelled differently. ( The one with the lowercase first letter, as it appears in the values file, is the correct one. )
Convention states we should name our values starting with a lowercase letter. Our values.yaml is correct; our command-line override is wrong.
Third attempt, see command below.
helm install .\myhelm1\ --set image.pullPolicy=Never --name test3 --dry-run --debug
[debug] Created tunnel using local port: '50113'
[debug] SERVER: "127.0.0.1:50113"
[debug] Original chart version: ""
[debug] CHART PATH: C:\k8\myhelm1
NAME: test3
REVISION: 1
RELEASED: Thu Feb 14 10:15:10 2019
CHART: myhelm1-0.1.0
USER-SUPPLIED VALUES:
image:
  pullPolicy: Never < - - - - - - - - - - - - -
COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  pullPolicy: Never
  repository: radial/busyboxplus
  tag: base
ingress:
  annotations: {}
  enabled: false
  hosts:
  - chart-example.local
  paths: []
  tls: []
nameOverride: ""
nodeSelector: {}
replicaCount: 1
resources: {}
service:
  port: 80
  type: ClusterIP
tolerations: []
HOOKS:
MANIFEST:
---
# Source: myhelm1/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test3-myhelm1
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: myhelm1
          image: "radial/busyboxplus:base"
          imagePullPolicy: Never < - - - - - - - - - - -
          command: ['sh', '-c', 'sleep 60']
The deployment above shows imagePullPolicy: Never ... the override was a success.
The COMPUTED VALUES also show the override was applied correctly:
COMPUTED VALUES:
image:
  pullPolicy: Never
Debug output looks good. We are ready to install this release live.
I want to hide all the other values we do not need. We do this by commenting them out. Edit your values file so that only the first five value lines are left uncommented, as shown below:
nano ./myhelm1/values.yaml
# Default values for myhelm1.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: radial/busyboxplus
  tag: base
  pullPolicy: IfNotPresent

#nameOverride: ""
#fullnameOverride: ""

#service:
#  type: ClusterIP
#  port: 80

#ingress:
#  enabled: false
#  annotations: {}
#    # kubernetes.io/ingress.class: nginx
#    # kubernetes.io/tls-acme: "true"
#  paths: []
#  hosts:
#    - chart-example.local
#  tls: []
#    # - secretName: chart-example-tls
#    #   hosts:
#    #     - chart-example.local

#resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

#nodeSelector: {}

#tolerations: []

#affinity: {}
Let's do another dry run of our chart.
helm install .\myhelm1\ --set image.pullPolicy=Never --name test3 --dry-run --debug
[debug] Created tunnel using local port: '50125'
[debug] SERVER: "127.0.0.1:50125"
[debug] Original chart version: ""
[debug] CHART PATH: C:\k8\myhelm1
Error: render error in "myhelm1/templates/ingress.yaml": template: myhelm1/templates/ingress.yaml:1:14: executing "myhelm1/templates/ingress.yaml" at <.Values.ingress.enab...>: can't evaluate field enabled in type interface {}
.Values.ingress.enabled is used in myhelm1/templates/ingress.yaml.
We do not need ingress - it is part of the nginx chart we started with.
Add ingress.yaml to the bottom of our ignore file.
nano ./myhelm1/.helmignore
ingress.yaml
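( An alternative to ignoring the file: the generated ingress.yaml template is wrapped in an if on .Values.ingress.enabled, so keeping a minimal ingress block uncommented in values.yaml would also have avoided the render error: )
ingress:
  enabled: false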
Second attempt: install the myhelm1 chart with image.pullPolicy=Never; this time we also add --set replicaCount=3.
helm install .\myhelm1\ --set image.pullPolicy=Never --set replicaCount=3 --name test3 --dry-run --debug
[debug] Created tunnel using local port: '50140'
[debug] SERVER: "127.0.0.1:50140"
[debug] Original chart version: ""
[debug] CHART PATH: C:\k8\myhelm1
NAME: test3
REVISION: 1
RELEASED: Thu Feb 14 10:23:43 2019
CHART: myhelm1-0.1.0
USER-SUPPLIED VALUES:
image:
  pullPolicy: Never < * * * = = = = = = = = = = = = =
replicaCount: 3 < - - - - - - - - - - - - - - - -
COMPUTED VALUES:
image:
  pullPolicy: Never < * * * = = = = = = = = = = = = =
  repository: radial/busyboxplus
  tag: base
replicaCount: 3 < - - - - - - - - - - - - - - - -
HOOKS:
MANIFEST:
---
# Source: myhelm1/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test3-myhelm1
spec:
  replicas: 3 < - - - - - - - - - - - - - - - - - -
  template:
    spec:
      containers:
        - name: myhelm1
          image: "radial/busyboxplus:base"
          imagePullPolicy: Never < * * * = = = = = = = = = = = = =
          command: ['sh', '-c', 'sleep 60']
--set replicaCount correctly overrides the replica count used in deployment.yaml.
Let's do a live install.
helm install .\myhelm1\ --set image.pullPolicy=Never --set replicaCount=3 --name test3
NAME: test3
LAST DEPLOYED: Thu Feb 14 10:34:45 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
test3-myhelm1 3 0 0 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
test3-myhelm1-878d8d7c-7xshs 0/1 Pending 0 0s
test3-myhelm1-878d8d7c-fnjqn 0/1 ContainerCreating 0 0s
test3-myhelm1-878d8d7c-gjw4m 0/1 Pending 0 0s
Success. The Deployment's DESIRED count is 3 and we see 3 Pods being created.
Seconds later we have 3 running Pods. Note the use of the helm status command.
helm status test3
LAST DEPLOYED: Thu Feb 14 10:34:45 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
test3-myhelm1 3 3 3 3 20s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
test3-myhelm1-878d8d7c-7xshs 1/1 Running 0 20s
test3-myhelm1-878d8d7c-fnjqn 1/1 Running 0 20s
test3-myhelm1-878d8d7c-gjw4m 1/1 Running 0 20s
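helm status reports on one release at a time; to see every release at a glance you can also run helm ls, which we used earlier with --all:
helm ls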
Demo complete. Delete our release test3.
helm delete test3
release "test3" deleted
So far we have commented out and removed values from values.yaml, and we have passed override values on the command line.
Now we create our own new value: terminationGracePeriodSeconds.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#podspec-v1-core
terminationGracePeriodSeconds
Optional duration in seconds the pod needs to terminate gracefully. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds.
Add terminationGracePeriodSeconds: 30 to your values file so that lines 5 to 12 look like below:
nano ./myhelm1/values.yaml
replicaCount: 1

terminationGracePeriodSeconds: 30

image:
  repository: radial/busyboxplus
  tag: base
  pullPolicy: IfNotPresent
Edit your deployment file so that it uses this new value ( lines 22 to 29 should be as below ):
nano ./myhelm1/templates/deployment.yaml
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
    command: ['sh', '-c', 'sleep 60']
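One caveat before we continue: in the Kubernetes API, terminationGracePeriodSeconds is a field of the Pod spec, not of a container. --dry-run never sends the manifest to the API server, so the misplacement goes unnoticed below; in a live install the field would at best be silently ignored and at worst rejected. The correct placement would be one level up, alongside containers, roughly like this:
spec:
  terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      command: ['sh', '-c', 'sleep 60']
For the dry runs below, which only render templates, the container-level placement is kept as-is.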
Do a dry run.
helm install .\myhelm1\ --name test4 --dry-run --debug
[debug] Created tunnel using local port: '50239'
[debug] SERVER: "127.0.0.1:50239"
[debug] Original chart version: ""
[debug] CHART PATH: C:\k8\myhelm1
NAME: test4
REVISION: 1
RELEASED: Thu Feb 14 10:54:58 2019
CHART: myhelm1-0.1.0
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
image:
  pullPolicy: IfNotPresent
  repository: radial/busyboxplus
  tag: base
replicaCount: 1
terminationGracePeriodSeconds: 30 < - - - - - - -
HOOKS:
MANIFEST:
---
# Source: myhelm1/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test4-myhelm1
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: myhelm1
          image: "radial/busyboxplus:base"
          imagePullPolicy: IfNotPresent
          terminationGracePeriodSeconds: 30 < - - - - - -
          command: ['sh', '-c', 'sleep 60']
Success. COMPUTED VALUES shows it correctly and the deployment at the bottom uses it correctly ( though see the Pod-spec caveat above ).
One more test: let us do a debug run overriding the terminationGracePeriodSeconds value with 10.
helm install .\myhelm1\ --set terminationGracePeriodSeconds=10 --name test4 --dry-run --debug
[debug] Created tunnel using local port: '50245'
[debug] SERVER: "127.0.0.1:50245"
[debug] Original chart version: ""
[debug] CHART PATH: C:\k8\myhelm1
NAME: test4
REVISION: 1
RELEASED: Thu Feb 14 10:56:33 2019
CHART: myhelm1-0.1.0
USER-SUPPLIED VALUES:
terminationGracePeriodSeconds: 10 < - - - - - -
COMPUTED VALUES:
image:
  pullPolicy: IfNotPresent
  repository: radial/busyboxplus
  tag: base
replicaCount: 1
terminationGracePeriodSeconds: 10 < - - - - - -
HOOKS:
MANIFEST:
---
# Source: myhelm1/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test4-myhelm1
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: myhelm1
          image: "radial/busyboxplus:base"
          imagePullPolicy: IfNotPresent
          terminationGracePeriodSeconds: 10 < - - - - - -
          command: ['sh', '-c', 'sleep 60']
Success. COMPUTED VALUES shows 10 correctly and the deployment at the bottom uses 10 correctly.
We did not even look at _helpers.tpl ( template helpers ) or the charts directory ( which deals with dependencies ). Those are topics for another tutorial in this set.
We made several changes to our values file as well as the deployment file, and saw the results via debug and via live install commands.
You are also able to hide irrelevant files from a chart ( via .helmignore ).
At work you will create your own skeleton basic charts to copy from. Obviously those will start off MUCH more closely in line with exactly what you want to do.
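Once you have such a skeleton, helm package can bundle it for sharing. It honors .helmignore ( another reason to keep that file tidy ), and the archive name comes from the name and version in Chart.yaml. The release name test5 below is just an example:
helm package ./myhelm1/
helm install myhelm1-0.1.0.tgz --name test5 --dry-run --debug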
We learned basic Helm concepts on day one by hacking an nginx chart to our requirements. --dry-run --debug is Helm's best feature: dry run and debug before install.