By Alwyn Botha, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud's incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.
This set of tutorials focuses on giving you practical experience on using Docker Compose when working with containers on Alibaba Cloud Elastic Compute Service (ECS).
Part 1 of this series demonstrated several docker-compose configuration options that can be explored in isolation. In Part 2, we will look at several more important docker-compose configurations, including stop_grace_period, sysctls, ulimits, configs, and secrets.
Let's get started.
From https://docs.docker.com/compose/compose-file/#stop_grace_period
By default, Docker waits 10 seconds for the container to exit before sending SIGKILL.
Specify how long to wait when attempting to stop a container if it doesn't handle SIGTERM (or whatever stop signal has been specified with stop_signal), before sending SIGKILL.
Until now I did not have this setting (stop_grace_period) in my docker-compose files. Therefore I needed to specify -t 0 as seen below:
docker-compose up -d -t 0
-t 0 specifies that docker-compose must wait zero seconds before killing the container if it does not die gracefully in that time. The default value for that timeout is 10 seconds, so every time I do a docker-compose up I have to wait 10 seconds before the container is finally killed and then brought back up again.
stop_grace_period in the docker-compose file lets us specify our zero-second timeout once, instead of passing -t 0 on every command.
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
alpine:
image: alpine:3.8
command: sleep 600
stop_grace_period: 0s
Run:
docker-compose up -d
Make a minor change to the sleep timeout value, and rerun
docker-compose up -d
Make another minor change to the sleep timeout value, and rerun
docker-compose up -d
See how it gets recreated almost instantly every time.
Now change stop_grace_period to the default value of 10s.
docker-compose up -d
Make a minor change to the sleep timeout value, and rerun
docker-compose up -d
Make another minor change to the sleep timeout value, and rerun
docker-compose up -d
See how the recreation takes 10 seconds each time.
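If you want to put a number on it, you can time the recreation yourself (an optional check, not part of the original steps):
time docker-compose up -d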
If you look at the output of docker events you will see the details:
stop_grace_period: 0s
2018-11-05T14:46:34.968709389+02:00 container kill .... lots of information ... signal=15
2018-11-05T14:46:34.984262101+02:00 container kill .... lots of information ... signal=9
stop_grace_period: 10s
2018-11-05T14:47:49.486907072+02:00 container kill .... lots of information ... signal=15
2018-11-05T14:47:59.510613956+02:00 container kill .... lots of information ... signal=9
signal = 15
SIGTERM is the termination signal. Kill the process, but allow it to do its cleanup routines.
signal = 9
SIGKILL is the kill signal. It kills the process immediately: the process cannot catch or handle the signal on its own terms, and it cannot clean up.
Based on events output above, stop_grace_period: 0s waits 0.02 seconds from SIGTERM before it goes to SIGKILL.
Based on events output above, stop_grace_period: 10s waits 10+ seconds from SIGTERM before it goes to SIGKILL.
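To watch these signals being sent yourself, you can run docker events in a second terminal while recreating the container, filtered down to kill events (the container name assumes the compose-tuts project used in this tutorial):
docker events --filter event=kill --filter container=compose-tuts_alpine_1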
For the rest of these tutorials all the docker-compose files will contain: stop_grace_period: 0s
Note: you must determine an appropriate stop_grace_period for your production work environment. This will differ from one app to another.
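As a hedged illustration only (the service name, image, and value below are made up), an application that needs time to flush data on shutdown might get a much longer grace period:
services:
  myapp:
    image: myapp:latest        # hypothetical image
    stop_grace_period: 1m30s   # give the app 90 seconds to shut down cleanly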
sysctls is used to set kernel parameters inside the container.
What are kernel parameters?
Linux lets you set resource limits using kernel parameters. ulimit sets resource limits at the user level, while kernel parameters apply to everyone, root included.
You can check out the official reference information at https://www.kernel.org/doc/Documentation/sysctl/kernel.txt
Running man sysctl at the Linux shell explains how to configure kernel parameters at runtime.
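For comparison, this is how you would read and set such a parameter on the host itself; the value is only an example and changing it requires root:
sysctl net.core.somaxconn
sysctl -w net.core.somaxconn=512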
Let's start up our container again, using
docker-compose up -d -t 0
Enter the container using:
docker exec -it compose-tuts_alpine_1 /bin/sh
Enter the following 3 commands at the prompt shown. They show the current actual values for those 3 kernel parameters.
cat /proc/sys/net/core/somaxconn
cat /proc/sys/kernel/msgmax
cat /proc/sys/kernel/shmmax
Expected output :
/ # cat /proc/sys/net/core/somaxconn
128
/ # cat /proc/sys/kernel/msgmax
8192
/ # cat /proc/sys/kernel/shmmax
18446744073692774399
/ # exit
We are now going to modify those 3 values, and investigate the container to see if those values got applied.
Add the following to your docker-compose.yml using
nano docker-compose.yml
Its content:
version: "3.7"
services:
alpine:
image: alpine:3.8
command: sleep 600
sysctls:
net.core.somaxconn: 512
kernel.shmmax: 18102030100020003000
kernel.msgmax: 4000
Run:
docker-compose up -d -t 0
docker exec -it compose-tuts_alpine_1 /bin/sh
Enter the following 3 commands at the prompt shown. They show the current NEW actual values for those 3 kernel parameters.
cat /proc/sys/net/core/somaxconn
cat /proc/sys/kernel/msgmax
cat /proc/sys/kernel/shmmax
Expected output :
/ # cat /proc/sys/net/core/somaxconn
512
/ # cat /proc/sys/kernel/msgmax
4000
/ # cat /proc/sys/kernel/shmmax
18102030100020003000
/ # exit
As you can see, all 3 of those kernel parameters got changed.
You have just seen that Docker allows you to tune kernel parameters at the individual container level.
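For reference, the same per-container tuning is available without Compose via the --sysctl flag of docker run, for example:
docker run --rm --sysctl net.core.somaxconn=512 alpine:3.8 cat /proc/sys/net/core/somaxconn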
From https://docs.docker.com/compose/compose-file/#sysctls
This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
( Continued from previous section, with important heading added )
Important:
Not all sysctls are namespaced. Docker does not support changing sysctls inside of a container that also modify the host system.
CURRENTLY SUPPORTED SYSCTLS:
kernel.msgmax, kernel.msgmnb, kernel.msgmni, kernel.sem, kernel.shmall, kernel.shmmax, kernel.shmmni, kernel.shm_rmid_forced
Sysctls beginning with fs.mqueue.*
Sysctls beginning with net.*
This important text should be part of the https://docs.docker.com/compose/compose-file/#sysctls documentation.
I tried to change fs.file-max and got this error:
ERROR: for compose-tuts_alpine_1 Cannot start service alpine: OCI runtime create failed: sysctl "fs.file-max" is not in a separate kernel namespace: unknown
Now I understand: fs.file-max is not namespaced. Docker does not support changing sysctls inside of a container that also modify the HOST system.
fs.file-max changes ( in the container ) would have changed that setting on the HOST server, which is not allowed.
Namespaces are a fundamental aspect of containers on Linux. See https://en.wikipedia.org/wiki/Linux_namespaces
Namespaces are what allows a container to exist in its isolated bubble environment. Namespaces let containers think they are full Linux distros - running all alone on their servers.
fs.file-max is an example of a Linux feature that is not currently able to be namespace isolated in a container.
So non-namespaced kernel parameters must be tuned on the HOST server - to be appropriate for all the containers that run on it.
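A minimal sketch of tuning such a parameter on the host (the value and the drop-in filename are illustrative; run as root):
sysctl -w fs.file-max=2097152
echo "fs.file-max = 2097152" > /etc/sysctl.d/99-filemax.conf
sysctl --system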
Ulimit provides control over the resources (such as sizes, CPU time, and priorities) available to the shell and to processes started by it.
You can use it to ensure applications with bugs do not overload and crash your server.
In the context of Docker, you can use it to similarly limit the applications running inside containers.
Use man ulimit at the shell prompt to read the official documentation about it; scroll to the bottom to find ulimit.
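Before we tighten anything, you can inspect the limits a running container currently has; a quick check, assuming the alpine container from the previous section is still up:
docker exec compose-tuts_alpine_1 /bin/sh -c "ulimit -a"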
Let's prove that ulimits get enforced inside containers. Let's set the maximum number of processes and the maximum number of open files absurdly low and start up a container.
Add the following to your docker-compose.yml using
nano docker-compose.yml
# add this content
version: "3.7"
services:
alpine:
image: alpine:3.8
command: sleep 60171
stop_grace_period: 0s
ulimits:
nproc: 2
nofile:
soft: 2
hard: 4
Try and start up the container:
docker-compose up -d -t 0
Expected output :
Recreating compose-tuts_alpine_1 ... error
ERROR: for compose-tuts_alpine_1 Cannot start service alpine: OCI runtime create failed: container_linux.go:348: starting container process caused "open /proc/self/fd: too many open files": unknown
As expected: too many open files error. Containers are tiny, but need more than 4 files open to start up.
Change the nproc (number of processes) limit to 1. Change both nofile limits to 40 or more.
Rerun:
docker-compose up -d -t 0
It starts up perfectly (on my CentOS 7 server).
Limiting nproc requires kernel 4.3 or higher; my server kernel version is 3.10.0-327.el7.x86_64, so the nproc limit has no effect there. Get your kernel version by running uname -r.
Let's set a ulimit that works: fsize - maximum filesize (KB)
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
alpine:
image: alpine:3.8
command: sleep 60171
stop_grace_period: 0s
ulimits:
fsize: 10
Run:
docker-compose up -d -t 0
docker exec -it compose-tuts_alpine_1 /bin/sh
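Before exceeding the limit, you can check what the shell inside the container reports for it; shells display this limit in their own block units, so treat the number only as confirmation that a limit is in place:
/ # ulimit -f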
Let's exceed that file size limit of 10KB by creating a 10MB file:
/ # dd if=/dev/zero of=/tmp/output.dat bs=1M count=10
The container itself keeps running, but my shell session crashed.
At some other time I got an error message :
dd if=/dev/zero of=/tmp/output.dat bs=1M count=10
File size limit exceeded (core dumped)
Configs declare configuration files that the applications inside your containers need: configuration files like those normally found inside /etc and /opt.
You should only store non-sensitive information in configs; docker-compose secrets are there to store secret information.
First we need to create 2 small config files so that we can refer to them in our docker-compose.yml.
First we create the first config file:
nano config_data
# config data
Now create the second config file:
nano my_second_config.config
# my_second_config.config contents
Now we need Docker to create this SECOND FILE ONLY as a config named my_second_config:
docker config create my_second_config my_second_config.config
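To verify the config object exists before referencing it, you can list and inspect it (like docker config create itself, these commands require the node to be part of a swarm, which we initialize further below):
docker config ls
docker config inspect my_second_config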
These 2 different configs will demo 2 different ways configs can be used.
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
alpine:
image: alpine:3.8
command: sleep 600
configs:
- my_first_config
- my_second_config
configs:
my_first_config:
file: ./config_data
my_second_config:
external: true
The top-level configs declaration (the bottom 5 lines) defines 2 configs that can be granted to the services in this stack.
The configs list at the service level (under the alpine service) grants the container access to those 2 configs.
You must use both those config declarations.
Note that my_second_config is defined as: external: true. It exists as a config object in Docker.
docker-compose up does not support 'configs' configuration. We must use docker stack deploy to deploy to a swarm.
docker swarm init
docker stack deploy -c docker-compose.yml mystack
docker ps -a
Expected output :
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ab50c7daf979 alpine:3.8 "sleep 600" 14 seconds ago Up 13 seconds mystack_alpine.1.jq3buvzkf2a3hpn7mwb0e43om
We have to run docker ps to get the automatically generated container name.
Now that we have that randomly generated container name we can enter it via exec. The random sequence for your container will be different; use YOUR container name to exec into it.
docker exec -it mystack_alpine.1.jq3buvzkf2a3hpn7mwb0e43om /bin/sh
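If you prefer not to copy the random name by hand, you can look it up with a filter (a convenience sketch; adjust the filter to your own stack and service names):
docker ps --filter name=mystack_alpine --format "{{.Names}}"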
Expected output :
/ # ls
bin lib my_second_config sbin usr
dev media proc srv var
etc mnt root sys
home my_first_config run tmp
/ # cat my_first_config
# config data
/ # cat my_second_config
# my_second_config.config contents
/ # exit
Note that those configs are mounted in the / directory.
Let's mount them in directories where Linux administrators would expect to find config.
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
alpine:
image: alpine:3.8
command: sleep 600
configs:
- source: my_first_config
target: /etc/my_first_config
- source: my_second_config
target: /opt/my_second_config
configs:
my_first_config:
file: ./config_data
my_second_config:
external: true
Run:
docker stack rm mystack
Note the 2 different target paths: /etc and /opt.
docker stack deploy -c docker-compose.yml mystack
Run docker ps -a to see the new container. Expected output :
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01156eb13576 alpine:3.8 "sleep 600" 4 seconds ago Up 2 seconds mystack_alpine.1.vg2m0ge161anuoz31c2mdgf1k
Let's enter our container and investigate if our configs are in the requested directories.
docker exec -it mystack_alpine.1.vg2m0ge161anuoz31c2mdgf1k /bin/sh
Expected output :
/ # ls
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr
/ # cat /etc/my_first_config
# config data
/ # cat /opt/my_second_config
# my_second_config.config contents
/ # exit
The ls confirms our configs are no longer in the / directory.
The 2 cat commands show our configs to be inside the requested directories.
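The long config syntax also accepts ownership and permission settings for the mounted file; a minimal sketch with illustrative values, replacing the service-level configs list in the compose file above:
configs:
  - source: my_first_config
    target: /etc/my_first_config
    uid: "0"
    gid: "0"
    mode: 0440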
Secrets work very similarly to configs as explained above. The major difference is that secrets are encrypted at rest in the swarm and are only mounted into the container in an in-memory tmpfs filesystem.
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
alpine:
image: alpine:3.8
command: sleep 600
secrets:
- my_secret
secrets:
my_secret:
external: true
Close down any existing stacks:
docker-compose down -t 0
docker stack rm mystack
docker container prune -f
Let's create my_secret:
echo a secret password | docker secret create my_secret -
The hyphen at the end means Docker must read the secret from stdin (the echoed text in this case).
If you now run docker secret ls you will see the secret listed.
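Alternatively (instead of the stdin approach above), a secret can be created from a file; the filename here is hypothetical:
docker secret create my_secret ./my_secret.txt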
docker stack deploy -c docker-compose.yml mystack
We have to run docker ps to get the automatically generated container name.
docker ps -a
Now that we have that randomly generated container name we can enter it via exec. The random sequence for your container will be different; use YOUR container name to exec into it.
docker exec -it mystack_alpine.1.xrgtrrfnwn2qet5pevj5n9wne /bin/sh
Run commands as shown:
- df to show that /run/secrets/my_secret exists - in tmpfs, in RAM.
- cat /run/secrets/my_secret to see the secret.
/ # df -h
Filesystem Size Used Available Use% Mounted on
/dev/mapper/docker-253:1-388628-c16342a3e1f1bfcdcebb82872fa626a5f35a2bea4e535aa9a889069b85c63332
10.0G 37.3M 10.0G 0% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 492.6M 0 492.6M 0% /sys/fs/cgroup
/dev/mapper/centos00-root
12.6G 5.5G 7.1G 43% /etc/resolv.conf
/dev/mapper/centos00-root
12.6G 5.5G 7.1G 43% /etc/hostname
/dev/mapper/centos00-root
12.6G 5.5G 7.1G 43% /etc/hosts
shm 64.0M 0 64.0M 0% /dev/shm
tmpfs 492.6M 4.0K 492.6M 0% /run/secrets/my_secret
tmpfs 492.6M 0 492.6M 0% /proc/acpi
tmpfs 64.0M 0 64.0M 0% /proc/kcore
tmpfs 64.0M 0 64.0M 0% /proc/keys
tmpfs 64.0M 0 64.0M 0% /proc/timer_list
tmpfs 64.0M 0 64.0M 0% /proc/timer_stats
tmpfs 64.0M 0 64.0M 0% /proc/sched_debug
tmpfs 492.6M 0 492.6M 0% /proc/scsi
tmpfs 492.6M 0 492.6M 0% /sys/firmware
/ # cat /run/secrets/my_secret
a secret password
/ # exit
If you run the command below you will not see the secret itself:
docker inspect my_secret
[
{
"ID": "vjvqnag6nu0p87xc0o94p315g",
"Version": {
"Index": 386
},
"CreatedAt": "2018-11-06T12:05:40.984748215Z",
"UpdatedAt": "2018-11-06T12:05:40.984748215Z",
"Spec": {
"Name": "my_secret",
"Labels": {}
}
}
]
To see a list of all secrets on your server, run
docker secret ls
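When you are done experimenting, you can remove the demo objects created in this tutorial (only leave the swarm if you do not need it for anything else):
docker stack rm mystack
docker secret rm my_secret
docker config rm my_second_config
docker swarm leave --force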