By Alwyn Botha, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud's incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.
This set of tutorials focuses on giving you practical experience on using Docker Compose when working with containers on Alibaba Cloud Elastic Compute Service (ECS).
Part 3 of this series explored depends_on, volumes, and the important init docker-compose options. In Part 4, we will look at productivity tips and at best practices for placement constraints, replicas, and resource limits when running Docker Compose stacks.
I have several bash aliases defined for frequently used Docker commands. Here are just two:
alias nnc='nano docker-compose.yml'
alias psa='docker ps -a'
Typing psa at the shell is quicker than highlighting the docker ps -a text in a tutorial, copying it, alt-tabbing to the console window, and pasting.
Faster docker-compose.yml edits: with the nnc alias, an edit-and-redeploy cycle takes around 6 steps; without extensions and aliases, the same process would involve around 12.
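If you follow along with this series, it may be worth collecting a few more such shortcuts in your ~/.bashrc (a minimal sketch; the extra alias names are my own suggestions, not a convention):

# ~/.bashrc - shortcuts for the commands this tutorial uses most
alias nnc='nano docker-compose.yml'   # edit the compose file
alias psa='docker ps -a'              # list all containers
alias dsd='docker stack deploy -c docker-compose.yml mystack'   # redeploy the stack
alias dsr='docker stack rm mystack'   # remove the stack
alias dss='docker stack services mystack'   # list services in the stack

Run source ~/.bashrc afterwards so the new aliases take effect in your current shell.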
Placement constraints are used to limit the nodes / servers on which a task can be scheduled / run.
First we need to define some labels for our node / server. Then we can define placement constraints based on those labels.
Syntax to add a label to a node:
docker node update --label-add label-name=label-value hostname-of-node
We need our server hostname for this. Enter hostname at the shell to get your hostname.
Use your hostname below (I used localhost.localdomain, my hostname):
docker node update --label-add tuts-allowed=yes localhost.localdomain
docker node update --label-add has-ssd=yes localhost.localdomain
We can inspect our node to see those labels exist.
head -n 13 shows only the first 13 lines of the long inspect output.
docker node inspect self | head -n 13
Expected output :
[
    {
        "ID": "wpk3r9ypjd8f0p3koh1dikvie",
        "Version": {
            "Index": 443
        },
        "CreatedAt": "2018-11-06T09:29:29.644400514Z",
        "UpdatedAt": "2018-11-07T09:55:18.065758325Z",
        "Spec": {
            "Labels": {
                "has-ssd": "yes",
                "tuts-allowed": "yes"
            },
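If you only want the labels, you can also ask docker node inspect for them directly using a Go template instead of piping through head (an optional shortcut):

docker node inspect self --format '{{ json .Spec.Labels }}'

Expected output along these lines:

{"has-ssd":"yes","tuts-allowed":"yes"}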
Now we add the placement constraints right at the bottom of docker-compose.yml.
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      placement:
        constraints:
          - node.labels.has-ssd == yes
          - node.labels.tuts-allowed == yes
The stack deploy command will only place our stack of services on nodes that match both constraints. It is an AND match: every additional constraint is ANDed on as well. (There is no OR and there are no nested brackets, unlike in nearly all programming languages.)
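Besides custom node labels, constraint expressions can also reference built-in node attributes. The examples below follow the syntax in the Docker documentation linked later in this section; they are shown for reference only and are not used in this tutorial:

constraints:
  - node.role == manager          # only schedule on manager nodes
  - node.hostname != node-2       # avoid a specific host
  - engine.labels.operatingsystem == ubuntu-18.04   # match a Docker Engine label

Both == (must match) and != (must not match) comparisons are supported.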
Since both constraints do match our node, the deploy will be successful.
docker stack deploy -c docker-compose.yml mystack
Let's list running containers:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
42f8a9b43faf alpine:3.8 "sleep 600" 12 seconds ago Up 11 seconds mystack_alpine.1.ezwejfbrmhbk0k5yd9a53ei85
Success. Let's list all the services in mystack.
docker stack services mystack
ID NAME MODE REPLICAS IMAGE PORTS
ji5dnn9klinx mystack_alpine replicated 1/1 alpine:3.8
Success. See REPLICAS column: 1 running service out of 1 requested service.
Let's now change the constraint tests so that the deploy cannot be done.
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      placement:
        constraints:
          - node.labels.has-ssd == yeszzz
          - node.labels.tuts-allowed == yeszzz
Remove previously deployed stack:
docker stack rm mystack
docker stack deploy -c docker-compose.yml mystack
Let's list all the services in mystack.
docker stack services mystack
ID NAME MODE REPLICAS IMAGE PORTS
jdg2bgxx5nfa mystack_alpine replicated 0/1 alpine:3.8
Deploy did not succeed in finding any suitable nodes. See REPLICAS column: 0 running service out of 1 requested service.
Node labels like these can be used, for example, to steer services onto nodes with SSDs or other special hardware, or onto nodes in a particular datacenter.
You can find more information about placement here: https://docs.docker.com/compose/compose-file/#placement
and about constraints here: https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints-constraint
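If you mistyped a label, or want to clean up after finishing this tutorial, labels can be removed again with --label-rm (using my hostname; substitute yours):

docker node update --label-rm tuts-allowed localhost.localdomain
docker node update --label-rm has-ssd localhost.localdomain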
The replicas option specifies the number of containers that should be running at any given time.
Until now we ran with just one replica, the default.
Let's demo running 3 replicas.
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      replicas: 3
Remove the previously running mystack:
docker stack rm mystack
Expected output :
Removing service mystack_alpine
Removing network mystack_default
Deploy our new stack:
docker stack deploy -c docker-compose.yml mystack
Expected output :
Creating network mystack_default
Creating service mystack_alpine
The output does not look promising - no mention of 3 replicas. Let's list the services in mystack:
docker stack services mystack
Expected output :
ID NAME MODE REPLICAS IMAGE PORTS
pv2ebn95au9j mystack_alpine replicated 3/3 alpine:3.8
3 replicas running out of 3 requested. Success.
Let's list running containers:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
80de65e6e0d3 alpine:3.8 "sleep 600" 9 seconds ago Up 6 seconds mystack_alpine.2.51hlsv3s7ky5zr02fprjaxi59
440548cfcc7d alpine:3.8 "sleep 600" 9 seconds ago Up 6 seconds mystack_alpine.1.za68nt6704xobu2cxbz7x7p3l
19564317375f alpine:3.8 "sleep 600" 9 seconds ago Up 7 seconds mystack_alpine.3.1ut387z38e2hrlmahalp7hfsa
As expected: 3 containers running.
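As an aside, you can also change the replica count of a running stack service without editing docker-compose.yml, using docker service scale; the service name comes from the docker stack services output above:

docker service scale mystack_alpine=5

A subsequent docker ps -a would show 5 containers. If you try this, scale back to 3 before continuing.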
Investigate our server workload via top:
top - 09:45:33 up 2:02, 2 users, load average: 0.00, 0.01, 0.05
Tasks: 133 total, 1 running, 132 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.3 us, 0.2 sy, 0.0 ni, 99.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 985.219 total, 472.004 free, 171.848 used, 341.367 buff/cache
MiB Swap: 1499.996 total, 1499.996 free, 0.000 used. 639.379 avail Mem
PID   USER   PR  NI  VIRT    RES     SHR    S  %CPU  %MEM  TIME+    COMMAND
939   root   20  0   950.7m  105.9m  29.2m  S  0.7   10.8  1:19.36  dockerd
946   root   20  0   424.8m  29.1m   12.1m  S  2.9   -     0:14.14  docker-containe
5082  root   20  0   7.2m    2.7m    2.0m   S  0.3   -     -        docker-containe
4950  root   20  0   7.2m    2.6m    2.0m   S  0.3   -     -        docker-containe
5075  root   20  0   7.2m    2.4m    1.9m   S  0.2   -     -        docker-containe
Each of our 3 tiny containers uses around 2.5 MB of resident RAM (the RES column).
You now have 3 full Alpine Linux distros running in isolated environments, at roughly 2.5 MB of RAM each. Impressive.
Compare this to running 3 separate virtual machines: each VM would need around 50 MB of RAM overhead just to exist, plus several hundred MB of disk space each.
Each container also started up in around 300 ms, which is not possible with VMs.
We use the resources: reservations: cpus setting to reserve CPU capacity.
Let's over-provision the CPU to see whether Docker follows our instructions.
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      replicas: 6
      resources:
        reservations:
          cpus: '0.5'
My server has 2 cores, so 2 CPUs are available.
In this configuration I am attempting to reserve 6 * 0.5 = 3 CPUs.
Adjust these settings so that the reservation exceeds what your machine has, whether it is a tiny laptop or your monster employer super-server - the point is to make the deploy fail.
Let's remove existing stacks.
docker stack rm mystack
Let's attempt deployment:
docker stack deploy -c docker-compose.yml mystack
UNEXPECTED output :
Creating service mystack_alpine
failed to create service mystack_alpine: Error response from daemon: network mystack_default not found
This happens occasionally; just rerun the deploy until the error no longer appears.
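The docker stack rm in the previous step removes the stack's network asynchronously, so one simple way to avoid this race (optional) is to confirm the old network is gone before redeploying:

docker network ls | grep mystack    # no output means the network is fully removed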
Investigate the result of the deploy:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a8861c79e3e8 alpine:3.8 "sleep 600" 40 seconds ago Up 37 seconds mystack_alpine.6.pgtvnbpxpy4ony26pmp8ekdv1
3d2a0b8e52e9 alpine:3.8 "sleep 600" 40 seconds ago Up 37 seconds mystack_alpine.5.j52dwbe7qqx0nn5nhanp742n4
5c2674b7fa36 alpine:3.8 "sleep 600" 40 seconds ago Up 37 seconds mystack_alpine.1.bb1ocs3zkz730rp9bpf6s9jux
f984a8d52393 alpine:3.8 "sleep 600" 40 seconds ago Up 38 seconds mystack_alpine.4.mr5ktkei9pn1dzhkggq2e48o9
4 containers listed. This makes sense: 4 * 0.5 = 2 CPUs used.
List all services in mystack:
docker stack services mystack
ID NAME MODE REPLICAS IMAGE PORTS
7030g8ila28h mystack_alpine replicated 4/6 alpine:3.8
Only 4 of the 6 requested containers were provisioned: Docker ran out of CPUs.
Important: this was a reservation. A reservation can only claim capacity that actually exists.
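To see why the remaining 2 tasks never started, inspect the task list; unschedulable tasks appear with no node assigned (the exact wording of the state and error columns may vary by Docker version):

docker service ps mystack_alpine

Tasks 5 and 6 should be listed as Pending, waiting for enough CPU capacity to free up.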
Let's over-provision RAM. (We will use this functionality correctly later in this tutorial.)
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      replicas: 6
      resources:
        reservations:
          memory: 2000M
My server has 1 GB of RAM available.
In this configuration I am attempting to reserve 6 * 2000M = 12 GB.
As before, adjust these settings so that the reservation exceeds the RAM your machine actually has.
Let's remove existing stacks.
docker stack rm mystack
Let's attempt deployment:
docker stack deploy -c docker-compose.yml mystack
To list running stack services run:
docker stack services mystack
Expected output :
ID NAME MODE REPLICAS IMAGE PORTS
l7yr2m5k6edf mystack_alpine replicated 0/6 alpine:3.8
Zero services deployed. Docker does not even deploy one container at 50% of the specified reservation. It assumes, correctly, that if you specify a RAM reservation, your container needs at least that MINIMUM to run successfully. Therefore, if the reservation cannot be satisfied, the container does not start.
We have seen that resource reservations are obeyed.
Let's now define reasonable limits to see how limits work.
The Alpine service below is constrained to use no more than 20M of memory and 0.50 (50%) of available processing time (CPU).
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '0.5'
          memory: 20M
Note we only need one replica from here onwards.
docker stack rm mystack
Deploy our stack:
docker stack deploy -c docker-compose.yml mystack
docker ps -a
Expected output :
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
11b8e8c2838b alpine:3.8 "sleep 600" 3 seconds ago Up 1 second mystack_alpine.1.qsamffjd1vg0137off9xinyzg
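Before entering the container, you can optionally confirm the limits took effect with docker stats (column layout may differ slightly between Docker versions):

docker stats --no-stream

The MEM USAGE / LIMIT column should show a limit of 20MiB for our container.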
Our container is running. Let's enter it and benchmark cpu speed.
docker exec -it mystack_alpine.1.qsamffjd1vg0137off9xinyzg /bin/sh
Enter commands shown at the container / # prompt:
/ # time dd if=/dev/urandom bs=1M count=2 | md5sum
2+0 records in
2+0 records out
real 0m 0.57s
user 0m 0.00s
sys 0m 0.28s
064be3476682daf856bb32fa00d29e2e -
/ # exit
Benchmark explanation: we now have timings for a CPU limit of 0.5, but on their own they mean nothing; we need something to compare them against.
So let us change the CPU limit to 0.25 in docker-compose.yml:
cpus: '0.25'
Then run at shell:
docker stack rm mystack
docker stack deploy -c docker-compose.yml mystack
docker ps -a # to get our container name
docker exec -it mystack_alpine.1.cbaakbi027ue0c1rtj0z463qz /bin/sh
Rerun our benchmark:
/ # time dd if=/dev/urandom bs=1M count=2 | md5sum
2+0 records in
2+0 records out
6d9b25e860ebef038daa165ae491c965  -
real 0m 1.27s
user 0m 0.00s
sys 0m 0.30s
/ # time dd if=/dev/urandom bs=1M count=2 | md5sum
2+0 records in
2+0 records out
ed29ebf0ef70923f9b980c65495767eb  -
real 0m 1.33s
user 0m 0.00s
sys 0m 0.33s
/ # exit
The results make sense: roughly double the runtime (1.27s versus 0.57s) with half the CPU power available.
Quick final test: 100% cpu power
So let us change cpu limit to 1.00 in docker-compose.yml
cpus: '1.00'
then run at shell:
docker stack rm mystack
docker stack deploy -c docker-compose.yml mystack
docker ps -a
docker exec -it your-container-name /bin/sh
Enter commands shown:
/ # time dd if=/dev/urandom bs=1M count=2 | md5sum
2+0 records in
2+0 records out
real 0m 0.25s
user 0m 0.00s
sys 0m 0.24s
facbf070f7328db3321ddffca3c4239e -
/ # time dd if=/dev/urandom bs=1M count=2 | md5sum
2+0 records in
2+0 records out
616ba74d54b8a176f559f41b224bc3a3  -
real 0m 0.29s
user 0m 0.00s
sys 0m 0.28s
/ # exit
Very fast runtimes: the 100% CPU limit runs about 4 times faster than 25% CPU power (0.25s - 0.29s versus 1.27s - 1.33s).
You have now seen that limiting CPU power per container works as expected.
If you have only one production server, you can use this knowledge to run CPU-hungry batch processes on the same server as other work: just limit the batch process's CPU severely, as the sketch below shows.
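A minimal sketch of that idea; the service names, images, and numbers below are illustrative only:

version: "3.7"
services:
  webapp:
    image: my-webapp:latest        # hypothetical image: your normal workload
    deploy:
      resources:
        limits:
          cpus: '1.50'             # the interactive service gets most of the CPU
  nightly-batch:
    image: my-batch-job:latest     # hypothetical image: the CPU-hungry batch job
    deploy:
      resources:
        limits:
          cpus: '0.25'             # severely limited so it cannot starve webapp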
The memory limit option caps the maximum RAM usage of your container.
Add the following to your docker-compose.yml using
nano docker-compose.yml
version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '1.00'
          memory: 4M
Run:
docker stack rm mystack
docker stack deploy -c docker-compose.yml mystack
docker ps -a
Expected output :
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e0105ce94fd alpine:3.8 "sleep 600" 3 seconds ago Up 1 second mystack_alpine.1.ykn9fdmeaudp4ezar7ev19111
We now have a running container with a memory limit of 4MB.
Let's be a typical inquisitive Docker administrator and see what happens if we write 8 MB into /dev/shm, which lives in RAM.
docker exec -it mystack_alpine.1.ykn9fdmeaudp4ezar7ev19111 /bin/sh
Enter commands as shown:
/ # df -h
Filesystem Size Used Available Use% Mounted on
/dev/mapper/docker-253:1-388628-c88293aae6b79e197118527c00d64fee14aec2acfb49e5f1ec95bc6af6bd874b
10.0G 37.3M 10.0G 0% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 492.6M 0 492.6M 0% /sys/fs/cgroup
/dev/mapper/centos00-root
12.6G 5.5G 7.2G 43% /etc/resolv.conf
/dev/mapper/centos00-root
12.6G 5.5G 7.2G 43% /etc/hostname
/dev/mapper/centos00-root
12.6G 5.5G 7.2G 43% /etc/hosts
shm 64.0M 0 64.0M 0% /dev/shm
tmpfs 492.6M 0 492.6M 0% /proc/acpi
tmpfs 64.0M 0 64.0M 0% /proc/kcore
tmpfs 64.0M 0 64.0M 0% /proc/keys
tmpfs 64.0M 0 64.0M 0% /proc/timer_list
tmpfs 64.0M 0 64.0M 0% /proc/timer_stats
tmpfs 64.0M 0 64.0M 0% /proc/sched_debug
tmpfs 492.6M 0 492.6M 0% /proc/scsi
tmpfs 492.6M 0 492.6M 0% /sys/firmware
/ # dd if=/dev/zero of=/dev/shm/fill bs=1M count=4
4+0 records in
4+0 records out
/ # df -h
Filesystem Size Used Available Use% Mounted on
/dev/mapper/docker-253:1-388628-c88293aae6b79e197118527c00d64fee14aec2acfb49e5f1ec95bc6af6bd874b
10.0G 37.3M 10.0G 0% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 492.6M 0 492.6M 0% /sys/fs/cgroup
/dev/mapper/centos00-root
12.6G 5.5G 7.2G 43% /etc/resolv.conf
/dev/mapper/centos00-root
12.6G 5.5G 7.2G 43% /etc/hostname
/dev/mapper/centos00-root
12.6G 5.5G 7.2G 43% /etc/hosts
shm 64.0M 4.0M 60.0M 6% /dev/shm
tmpfs 492.6M 0 492.6M 0% /proc/acpi
tmpfs 64.0M 0 64.0M 0% /proc/kcore
tmpfs 64.0M 0 64.0M 0% /proc/keys
tmpfs 64.0M 0 64.0M 0% /proc/timer_list
tmpfs 64.0M 0 64.0M 0% /proc/timer_stats
tmpfs 64.0M 0 64.0M 0% /proc/sched_debug
tmpfs 492.6M 0 492.6M 0% /proc/scsi
tmpfs 492.6M 0 492.6M 0% /sys/firmware
/ # dd if=/dev/zero of=/dev/shm/fill bs=1M count=8
Killed
/ # df -h
Filesystem Size Used Available Use% Mounted on
/dev/mapper/docker-253:1-388628-c88293aae6b79e197118527c00d64fee14aec2acfb49e5f1ec95bc6af6bd874b
10.0G 37.3M 10.0G 0% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 492.6M 0 492.6M 0% /sys/fs/cgroup
/dev/mapper/centos00-root
12.6G 5.5G 7.2G 43% /etc/resolv.conf
/dev/mapper/centos00-root
12.6G 5.5G 7.2G 43% /etc/hostname
/dev/mapper/centos00-root
12.6G 5.5G 7.2G 43% /etc/hosts
shm 64.0M 5.4M 58.6M 9% /dev/shm
tmpfs 492.6M 0 492.6M 0% /proc/acpi
tmpfs 64.0M 0 64.0M 0% /proc/kcore
tmpfs 64.0M 0 64.0M 0% /proc/keys
tmpfs 64.0M 0 64.0M 0% /proc/timer_list
tmpfs 64.0M 0 64.0M 0% /proc/timer_stats
tmpfs 64.0M 0 64.0M 0% /proc/sched_debug
tmpfs 492.6M 0 492.6M 0% /proc/scsi
tmpfs 492.6M 0 492.6M 0% /sys/firmware
/ # exit
Explanation of what happened above:
First we run df -h to determine /dev/shm size and usage:
shm 64.0M 0 64.0M 0% /dev/shm
Then we add 4MB to /dev/shm:
dd if=/dev/zero of=/dev/shm/fill bs=1M count=4
Recheck its usage - 4M is now used:
shm 64.0M 4.0M 60.0M 6% /dev/shm
Then we try to write 8MB to /dev/shm, overwriting the previous contents:
dd if=/dev/zero of=/dev/shm/fill bs=1M count=8
Killed
This time the command gets killed: the write pushes the container past its 4MB memory limit, since tmpfs pages in /dev/shm count against the container's memory.
Check /dev/shm usage again:
shm 64.0M 5.4M 58.6M 9% /dev/shm
It managed to write slightly over 5 MB before the container ran out of RAM.
Conclusion: docker-compose memory limits are enforced.
By default, containers have UNLIMITED RAM available to them. Use this resource limit to prevent your RAM from being totally consumed by runaway containers.
Even if you do not know precisely what a good limit is, set one anyway: 20, 50, or 100 MB are all better than letting 240 GB be consumed.
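As a closing sketch, reservations and limits can be combined in one service definition; the numbers below are illustrative, not recommendations:

version: "3.7"
services:
  alpine:
    image: alpine:3.8
    command: sleep 600
    deploy:
      replicas: 1
      resources:
        reservations:    # guaranteed minimum; scheduling fails if unavailable
          cpus: '0.25'
          memory: 20M
        limits:          # hard ceiling; the container cannot exceed these
          cpus: '0.50'
          memory: 50M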