By Alexandru Andrei, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud's incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.
Containers encapsulate applications and/or operating systems, running them side by side on the same host. There are multiple benefits to this type of architecture. Let's take a real-world example. For many years, the web-hosting business was plagued by problems caused by the lack of isolation between customer services. The sites (files and databases) of tens and sometimes hundreds of clients were stored on the same operating system, and a single application, such as Apache, had access to all of those objects. This meant that if one customer had a vulnerable site and an attacker managed to exploit Apache through that security hole, the attacker could subsequently access the files of every other customer on that server.
Another huge problem was the allocation of resources. If a customer had a web application that was buggy or poorly optimized, it could slow down the whole server, resulting in poor performance for the other users. In some cases it could even crash the whole machine, bringing all the websites offline. Most of the industry has moved away from these kinds of problems by using some form of containerization. When a customer's data and applications are isolated in their own container, they cannot consume more resources than allocated, and if their applications crash, the other people using that server (host) are unaffected.
Furthermore, if an attacker manages to break in by exploiting a service running in a customer's container, their access will be limited to the resources within that environment; the other containers remain inaccessible. It is not impossible to break the defenses of a container, but it is much harder to do so, especially with containers designed with security in mind.
We can then draw the conclusion that the most useful feature of containers is the isolation they provide, both in terms of logically separating resources from each other and in terms of security. Another advantage they offer is portability, allowing users to easily move or copy objects from server to server, even if they run different distributions of Linux. This helps speed up development, deployment and distribution of software, since developers do not have to program their applications to support multiple operating systems, such as Ubuntu, Debian, Red Hat, etc.
A popular application used to contain and make services portable is Docker. It is designed to isolate single applications and excels at it. Although there are workarounds that make it possible to squeeze more programs into a Docker box, there's no reason to bend a tool designed for one purpose to do something else. That's where LXD comes in: when instead of packing a single application, we need to contain an entire (Linux-based) operating system. LXD uses and manages LXC containers and is similar to a virtual machine hypervisor, like QEMU, Xen or VirtualBox, but much more lightweight and also slightly faster, since it doesn't actually virtualize hardware; it just contains/isolates a group of processes from the host system.
In this tutorial we will install and configure LXD on an Alibaba Cloud Elastic Compute Service (ECS) instance and learn how to use the command line to create and manage containers.
After you log in to the ECS Console, create a new instance and choose Ubuntu 16.04 as your Linux distribution. An instance with 1GB of RAM will suffice if you just want to test and learn, but if you want to use this in production, you will need 2GB or more.
After you configure and launch your instance, connect to it with an SSH client and log in as root.
Ubuntu 16.04 includes LXD version 2.0 while Ubuntu 18.04 includes the newer 3.0 version which has many useful additions such as clustering support, MAAS integration, physical machine to container migration, a few extra command line options, recursive directory transfers to/from containers, and a more straightforward configuration process.
Until Ubuntu 18.04 matures and is included in Alibaba Cloud's library of operating system images, there are two ways we can install the newer LXD: install it from the xenial-backports repository, or upgrade the distribution to Ubuntu 18.04 with the do-release-upgrade utility. In this tutorial we will choose the first method, since it's less confusing for beginners. If, however, you do want to upgrade your Ubuntu distribution, you should follow the instructions included at the end of this tutorial before continuing. When you come back, remember to skip anything related to the backports repository, since you won't need it anymore. That means skipping the next step in the following section and typing apt install lxd instead of apt -t=xenial-backports install lxd.
Configure the package manager to include the backports repository:
echo 'deb http://mirrors.cloud.aliyuncs.com/ubuntu/ xenial-backports main restricted universe multiverse' > /etc/apt/sources.list.d/xenial-backports.list
Update the package manager information and upgrade all packages on the Linux instance:
apt update && apt -y upgrade
When important system packages, such as the Linux kernel or core libraries, have been upgraded, a reboot is required to reload them and apply bug fixes or security patches. To restart the operating system:
systemctl reboot
Wait 30-60 seconds until the system has time to reboot and then log back in as root.
ZFS is a file system and volume manager that packs very useful features such as mirroring data, snapshotting, cloning, and even self-healing in certain setups. It makes more efficient use of storage devices and can reduce the disk space required by containers. It also speeds up some operations. For example, if we snapshot or clone a 1GB file in ZFS and then modify just a few bytes, only the differences are stored, so the two copies will require just 1GB of storage space, plus the few bytes we modified in the clone. Since LXD containers often share the same base operating system images, on top of which some programs and files are added, this will save us a lot of disk space. The ability to take snapshots of containers also allows us to easily roll back changes when something goes wrong.
Let's install the ZFS utilities:
apt -y install zfsutils-linux
It's useful to mention a common misconception that arises when ZFS is used. Memory claimed by ZFS to cache accessed files isn't reported as evictable (memory that can be freed at any time when programs need it). Under memory pressure, however, ZFS will release its caches so that our applications can get what they require. This means that in a utility like htop we may sometimes notice that 50%+ of system memory is used up for no apparent reason (the memory used by running applications doesn't add up). We can see how much memory ZFS is using, in bytes, with the following command:
cat /proc/spl/kstat/zfs/arcstats | grep '^size'
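The output will look something like this (the numbers are purely illustrative; the last column is the cache size in bytes, here roughly 1GB):
size                            4    1073741824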
Install LXD from backports:
apt -t=xenial-backports install lxd
Enter the following command to begin the process of configuring LXD:
lxd init
A series of questions appears and we can press enter at each prompt to select the defaults, or type "yes" or "no" to customize settings. Let's go through each option so we can make informed choices.
Would you like to use LXD clustering? (yes/no) [default=no]:
LXD 3.0 brought native support for clusters. Usually a feature used by enterprises and other large deployments, this allows us to distribute containers across multiple host systems (servers). The main advantages are automatic load-balancing of container distribution and the ability to control and inspect hundreds of hosts and containers from one central point (no need to log in to each machine). In a way, this lets us create our own cloud. Clustering is also suitable for building so-called high-availability setups: constructs of two or more instances where, if one fails, another automatically takes over, so the services offered are not disrupted. It's common in such constructs to also distribute the workload evenly across nodes. If we ever launch a service that needs to scale, we would choose this, as it lets us easily grow the infrastructure by simply adding more nodes to the cluster when the need arises. In this tutorial we will choose the default answer, which is "no".
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Press ENTER to choose the default answer. This will be used to store container data.
Name of the new storage pool [default=default]:
We can choose whatever name we like here; we'll call it "lxdpool" in this tutorial.
Name of the storage backend to use (dir, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Since ZFS was designed with reliability as its main focus, it is usually configured to group (pool) multiple storage devices and store data redundantly. That's why they are called pools. Press ENTER to create a new ZFS pool.
Would you like to use an existing block device? (yes/no) [default=no]:
ZFS pools can be backed by whole disks, partitions or files. For small-scale production systems, simplicity and/or testing purposes, it's OK to press ENTER here and choose the default answer, "no", which will configure ZFS to store its pool data in a file. On larger-scale production systems, though, it can be more reliable to take a different approach. When you create your Alibaba Cloud ECS instance, you should follow the instructions to add an optional data disk. Don't forget to also attach it to the running ECS instance. After you SSH into your instance, you can take a look at the attached storage by entering this command: lsblk. "vda" is your system disk, on which the operating system is stored; "vdb" is the first optional disk you have attached, "vdc" would be the second, and so on. "vda1" is the first partition on the first disk. Answering "yes" to the LXD initialization question in this step will present you with an additional prompt, "Path to the existing block device:", where you can type the path to your data disk: /dev/vdb.
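For reference, the output of lsblk will look something like this (names and sizes are illustrative, assuming a 40GB system disk and one attached 20GB data disk):
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  40G  0 disk
└─vda1 253:1    0  40G  0 part /
vdb    253:16   0  20G  0 disk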
Size in GB of the new loop device (1GB minimum) [default=15GB]:
This is the size of the file to be created to back our ZFS pool. For most purposes, 15GB will suffice. A typical container initially requires anywhere from a few megabytes (Alpine Linux images are very small) to around 100MB (e.g. Ubuntu/Debian images) or more (Oracle and Plamo images). Since we're using ZFS, we can clone a 100MB container several times and require almost no additional space beyond the initial 100MB. To estimate how much storage we need, we should think about how many different base images (operating system images) we will use and how much non-identical software and data we will add on top of them. If, for example, we add 1GB of data to a 100MB container and need to run 10 more instances containing the same data, we can create an image out of our 1100MB container and clone it 10 times, and all of those containers together will need no more than approximately 1100MB. Additional space is required for metadata (data about data), but that is in the range of a few megabytes, so it is negligible.
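Once the pool has been created, we can check at any time how much space the containers actually consume with the standard ZFS tools (assuming the pool name "lxdpool" we chose above):
zpool list lxdpool
zfs list -r lxdpool
The first command shows the total size and free space of the pool; the second shows a per-container breakdown of the datasets LXD creates inside it.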
Would you like to connect to a MAAS server? (yes/no) [default=no]:
This can help with complex, cloud-like infrastructures. For example if we are using LXD clustering and also have a MAAS server available, we can make it keep track of our containers, and automatically assign them IP addresses and DNS names like container1.example.com. We will press ENTER here and choose "no" since we have no pre-configured MAAS server available.
Would you like to create a new network bridge? (yes/no) [default=yes]:
We will choose the default answer since network bridges are very useful, allowing containers to access the Internet, and each other, over the internal network. When we need to isolate network access we can choose "no" here.
What should the new bridge be called? [default=lxdbr0]:
Normally, we would only need to change the default answer here if we already have an active bridge with the same name.
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
We'll choose the default answer. This gives us the option to configure the internal IP addresses that will be assigned to the containers. For example, entering 192.0.2.1/24 assigns the address 192.0.2.1 to the bridge itself and hands out container addresses from the rest of the 192.0.2.0/24 subnet (192.0.2.2 through 192.0.2.254).
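If we ever need to change this after initialization, the bridge can be reconfigured with the lxc network commands; a minimal sketch, assuming the default bridge name "lxdbr0":
lxc network set lxdbr0 ipv4.address 192.0.2.1/24
lxc network show lxdbr0
The second command prints the bridge's current configuration, so we can verify the change.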
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
Same as above. If you don't intend to use IPv6, enter "none".
Would you like LXD to be available over the network? (yes/no) [default=no]:
We'll choose the default answer. Choosing "yes" would make LXD management available directly from authenticated remote locations, eliminating the need to open SSH connections to the server.
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
When base images for containers are updated on the image servers, this will automatically pull and cache the new version to our local server, so we can launch new containers faster in the future. It caches only images that we have used in the past, so for example if we used an Ubuntu 18.04 image last month, that image will always be kept up to date automatically.
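If we'd rather control how often these refreshes happen, LXD exposes a server-level setting for the update interval, expressed in hours (setting it to 0 disables automatic updates). For example, to check for new image versions every 6 hours:
lxc config set images.auto_update_interval 6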
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Useful if we need to use the current configuration as a template for other servers on which we'll install LXD (the lxd init --preseed command can be used and the text pasted there).
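For example, assuming we saved the printed YAML to a file named lxd-preseed.yaml (a name chosen here for illustration), a new server could then be initialized non-interactively like this:
cat lxd-preseed.yaml | lxd init --preseed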
We interact with the LXD hypervisor and containers through the lxc command. We can get a summary of what actions are possible by entering the following command:
lxc --help
To get more detailed information about a command, we can enter the name of that command, followed by --help. Example:
lxc storage --help
It is also possible to get help on sub-commands, e.g.:
lxc storage volume --help
LXC works with base images that we can customize. It gets these from remote servers and, out of the box, comes pre-configured with three locations: ubuntu:, ubuntu-daily: and images:. The first two provide various versions of minimal Ubuntu operating systems, while images: contains all of the other popular Linux distributions, such as Debian, CentOS, Fedora, openSUSE or Alpine. To list all of the available images in a repository:
lxc image list images:
Since there are many results displayed, we should filter by operating system:
lxc image list images: os=Debian
There are still a lot of redundant results, so let's filter further by only displaying images built for our processor architecture:
lxc image list images: os=Debian arch=amd64
After we find the image we require, we can download and launch it by using its alias name:
lxc launch images:debian/9
If an alias name is not available, we can also launch an instance by using the image fingerprint:
lxc launch images:e698b9e613bf
To keep track of our containers, it's useful to also choose a name for them, instead of letting the system pick a random name:
lxc launch images:debian/9 mythirdcontainer
Let's see all the containers from our system:
lxc list
Now let's begin customizing one container:
lxc shell mythirdcontainer
This opens the default shell of that container and gives us the ability to interact with the operating system encapsulated within. The command prompt should now display root@mythirdcontainer:~#.
Let's configure our container as a simple web server:
apt -y install nginx
Now let's exit our container by closing the bash session:
exit
Let's see if we can connect to the container running the web server. First we need a web browser that can run in a terminal session:
apt -y install lynx
First, find out the internal IP address of the container with:
lxc list
Now we can visit the main page hosted on the container. Don't forget to replace the IP address given as an example here with the actual IP address of your container.
lynx 10.120.21.199
Quit the lynx browser by pressing "q" then "y".
Once we configure a container, we can use it as a template and clone it (we'll name it "clone"):
lxc copy mythirdcontainer clone
It won't automatically launch, so we need to do that manually:
lxc start clone
Now, if we enter lxc list to get the IP address of this cloned container, we will see that we can access it with lynx as well, since it already has an nginx server configured on it.
In a setup where multiple servers running LXD are connected, we can even copy containers between different hosts with lxc copy mythirdcontainer name_of_secondary_lxd_server:name_for_cloned_container.
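For completeness, making one LXD host able to push containers to another involves exposing the API on the target and registering it as a remote on the source; a minimal sketch, using the hypothetical address 203.0.113.10 and a placeholder password:
# on the target server: listen on the network and set a trust password
lxc config set core.https_address "[::]:8443"
lxc config set core.trust_password some-secret
# on the source server: register the target as a remote
lxc remote add name_of_secondary_lxd_server 203.0.113.10
The last command will ask us to confirm the target's certificate fingerprint and enter the trust password.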
But there is also another way, one that doesn't require configuring the LXD instances to authenticate with each other.
We can create an image out of our previously customized container, but we need to stop it first. Although we can instruct LXD to stop it by issuing lxc stop mythirdcontainer, that can sometimes take a long time to complete. A cleaner way to do it is:
lxc exec mythirdcontainer -- systemctl poweroff
Now we can create the image (we'll call it "myimage"):
lxc publish mythirdcontainer --alias myimage
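We can confirm that the image was created, and see its fingerprint and size, by listing the locally stored images:
lxc image list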
Next, export the image as a file (we'll name it "exportedimage.tar.gz"):
lxc image export myimage exportedimage
Afterwards, we can move this file to a different server, import it into LXD with lxc image import exportedimage.tar.gz --alias importedimage, and launch a container from it with lxc launch importedimage.
When we run out of disk space or simply don't need a container anymore, we can delete it with lxc delete name_of_container. A snapshot (described in the next section) can be deleted with lxc delete name_of_container/name_of_snapshot, so in our case it would be lxc delete mythirdcontainer/snapshot1. Images that have been copied locally can be viewed with lxc image list and deleted with lxc image delete alias_or_fingerprint_of_image. It's useful to know that even the first few characters of an image fingerprint suffice, so for a fingerprint of "02f59f96f808" we can type lxc image delete 02.
Another useful feature of LXD containers is that we can take snapshots and revert to them when the need arises. We will first have to start our container once again:
lxc start mythirdcontainer
Now we can take a snapshot with:
lxc snapshot mythirdcontainer snapshot1
Let's simulate a disastrous action on the container. First get a shell:
lxc shell mythirdcontainer
Make sure you are within the container's shell, to avoid destroying your host system. The command prompt should look like this: root@mythirdcontainer:~#. Issue the following command, which will start deleting everything inside the container:
rm -rf --no-preserve-root /
Now exit the bash session within the container:
exit
If we now try to re-enter the container with lxc shell mythirdcontainer, we will see that it fails silently, since there is no bash executable left to host our shell session. Using lynx to visit the web page hosted on the container will also fail with a 404 error (the nginx web server process is still running, but it no longer has any files to serve).
Restoring to our previous healthy state is very easy, and can be done live (without stopping and starting the container):
lxc restore mythirdcontainer snapshot1
Every deleted file and directory has been restored, almost instantly. Now we can enter a shell session or visit the web page hosted in the container.
When dealing with multiple containers, it's almost certain we'll forget what we named our snapshots, so we need a command that lists the available snapshots:
lxc info mythirdcontainer
This also shows us other useful information such as disk, CPU, memory usage and some other important metrics.
We will often need to transfer files between our host and our containers. We can do so with the lxc file commands. For example, to pull the /etc directory from our container to our host, we would run:
lxc file pull mythirdcontainer/etc . --recursive
Note: this can take a long time to finish.
When we pull files instead of directories, the --recursive flag is not required. The . in our command means "current directory".
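For instance, pulling the configuration file of the nginx server we installed earlier requires no extra flags:
lxc file pull mythirdcontainer/etc/nginx/nginx.conf .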
The command to copy files from host to container is lxc file push, with the arguments reversed, so to push /etc from our host to the container we would run:
lxc file push /etc/ mythirdcontainer/newdirectory --recursive --create-dirs
--create-dirs is required when the destination directory doesn't exist.
The commands lxc file pull --help and lxc file push --help will show more details about the syntax of these commands.
Sometimes we need to limit the amount of resources containers can use. For example, let's say we are a VPN provider and each client gets their own container. We can limit the network bandwidth so that each client gets an equal and fair share, and we can raise that limit for customers that pay premium fees. Another reason to set limits is to ensure that a problematic container doesn't use up all of the available system resources, leaving none for the others.
To restrict the amount of memory a container can use:
lxc config set mythirdcontainer limits.memory 100MB
Running lxc exec mythirdcontainer -- free -h will confirm that the limit has been set.
To limit the maximum CPU time allocated:
lxc config set mythirdcontainer limits.cpu.allowance 10ms/100ms
10 milliseconds of CPU time per 100 milliseconds of real time means the container will be able to use at most the equivalent of 10 percent of one CPU.
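CPU time and memory are not the only limits available. For example, we could also pin the container to a single CPU core and cap how many processes it may spawn (limits.cpu and limits.processes are standard LXD configuration keys; the values here are just examples):
lxc config set mythirdcontainer limits.cpu 1
lxc config set mythirdcontainer limits.processes 200
All limits currently applied to a container can be reviewed with lxc config show mythirdcontainer.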
You can read more about LXD resource control at https://blog.ubuntu.com/2016/03/30/lxd-2-0-resource-control-412 and look at a comprehensive table of LXD container properties that can be set at https://github.com/lxc/lxd/blob/master/doc/containers.md.
This is the optional section of the tutorial, for those who want to upgrade to Ubuntu 18.04 before installing LXD.
do-release-upgrade
If you get a message saying "No new release found", it means Ubuntu 18.04.1 hasn't been released yet. By default the upgrader waits for it, as 18.04.1 is considered to be the first stable point release, but we can still upgrade to 18.04 by passing the -d (development release) flag:
do-release-upgrade -d
Continue running under SSH?
This session appears to be running under ssh. It is not recommended
to perform a upgrade over ssh currently because in case of failure it
is harder to recover.
If you continue, an additional ssh daemon will be started at port
'1022'.
Do you want to continue?
Continue [yN]
Answer "y" and press ENTER.
To make recovery in case of failure easier, an additional sshd will
be started on port '1022'. If anything goes wrong with the running
ssh you can still connect to the additional one.
If you run a firewall, you may need to temporarily open this port. As
this is potentially dangerous it's not done automatically. You can
open the port with e.g.:
'iptables -I INPUT -p tcp --dport 1022 -j ACCEPT'
To continue please press [ENTER]
No valid mirror found
While scanning your repository information no mirror entry for the
upgrade was found. This can happen if you run an internal mirror or
if the mirror information is out of date.
Do you want to rewrite your 'sources.list' file anyway? If you choose
'Yes' here it will update all 'xenial' to 'bionic' entries.
If you select 'No' the upgrade will cancel.
Continue [yN]
This is a false alarm because Alibaba Cloud ECS instances are configured to use internal network mirrors, to speed up downloads and help users save on external network traffic costs. Answer with "y".
Configuration file '/etc/sysctl.conf'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** sysctl.conf (Y/I/N/O/D/Z) [default=N] ?
Answer "n" here.
Configuration file '/etc/ntp.conf'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** ntp.conf (Y/I/N/O/D/Z) [default=N] ?
Here, too, answer "n". In both cases you should choose to keep the local version currently installed.
Remove obsolete packages?
60 packages are going to be removed.
Continue [yN] Details [d]
It is safe to answer "y".
System upgrade is complete.
Restart required
To finish the upgrade, a restart is required.
If you select 'y' the system will be restarted.
Continue [yN]
Answer "y" and press ENTER; after the reboot you will be running Ubuntu 18.04 and can return to the main part of the tutorial.
To learn more about LXD you can visit https://lxd.readthedocs.io/en/latest/.