LXD
Introduction
LXD is a container "hypervisor" designed to provide an easy set of tools to manage Linux containers. Its development is currently led by employees at Canonical. You can learn more about the project in general at https://linuxcontainers.org/lxd/.
LXD currently provides the container infrastructure for Funtoo Containers and is also very well-supported under Funtoo Linux. For this reason, it's recommended that you check out LXD and see what it can do for you.
Basic Setup on Funtoo
The following steps will show you how to set up a basic LXD environment under Funtoo Linux. This environment will essentially use the default LXD setup: a bridge named lxdbr0 will be created, which will use NAT to provide Internet access to your containers. In addition, a default storage pool will be created that simply uses your existing filesystem's storage, creating a directory at /var/lib/lxd/storage-pools/default to store any containers you create. More sophisticated configurations are possible, using dedicated network bridges connected to physical interfaces without NAT, as well as dedicated storage pools backed by ZFS or btrfs -- however, these configurations are generally overkill for a developer workstation and should only be attempted by advanced users, so we won't cover them here.
Requirements
This section will guide you through setting up the basic requirements for creating an LXD environment.
The first step is to emerge LXD and its dependencies. Perform the following:
root # emerge -a lxd
Once LXD is done emerging, we will want to enable it to start by default:
root # rc-update add lxd default
In addition, we will want to set up the following files. /etc/security/limits.conf should be modified to contain the following lines:
/etc/security/limits.conf
* soft nofile 1048576
* hard nofile 1048576
root soft nofile 1048576
root hard nofile 1048576
* soft memlock unlimited
* hard memlock unlimited
# End of file
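These limits only apply to new login sessions. After logging back in, you can sanity-check that they took effect with the shell's built-in ulimit command (a quick sketch; the values should match what was set above, assuming no other PAM configuration overrides them):
root # ulimit -Hn
1048576
root # ulimit -Sn
1048576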
In addition, we will want to map a set of user ids and group ids to the root user so they are available for its use. Do this by creating the /etc/subuid and /etc/subgid files with the following identical contents:
/etc/subuid
root:100000:1000000000
/etc/subgid
root:100000:1000000000
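These entries grant root a range of 1,000,000,000 user and group ids beginning at 100000, which LXD uses to map container ids to unprivileged host ids. Once you have a container running (later in this guide), you can verify the mapping from inside it; as a rough sketch, the kernel's uid map should show container uid 0 mapped to host uid 100000:
testcontainer # cat /proc/self/uid_map
         0     100000 1000000000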
At this point we are ready to initialize and start LXD.
Initialization
To configure LXD, we first need to start it. This can be done as follows:
root # /etc/init.d/lxd start
At this point, we can run lxd init to run a configuration wizard to set up LXD:
root # lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: ↵
Do you want to configure a new storage pool? (yes/no) [default=yes]: ↵
Name of the new storage pool [default=default]: ↵
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: dir ↵
Would you like to connect to a MAAS server? (yes/no) [default=no]: ↵
Would you like to create a new local network bridge? (yes/no) [default=yes]: ↵
What should the new bridge be called? [default=lxdbr0]: ↵
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: ↵
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none ↵
Would you like LXD to be available over the network? (yes/no) [default=no]: ↵
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] ↵
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: ↵
root #
As you can see, we chose all the defaults except for:
- storage pool: We opted for directory-based container storage rather than btrfs volumes. Directory-based storage may or may not be the default option during LXD configuration -- it depends on whether you have btrfs-tools installed.
- IPv6 address: It is recommended you turn this off unless you specifically want to play with IPv6 in your containers. If you leave it enabled, it may cause dhcpcd in your container to retrieve only an IPv6 address. This is great if you have IPv6 working -- otherwise, you'll get a dud IPv6 address and no IPv4 address, and thus no network.
As explained above, turn off IPv6 NAT in LXD unless you specifically intend to use it! It can confuse dhcpcd.
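If you'd like to double-check what lxd init created, you can ask LXD to display the new bridge and storage pool definitions (a quick sanity check; the exact keys shown vary by LXD version):
root # lxc network show lxdbr0
root # lxc storage show default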
Now, we should be able to run lxc image list and get a response from the LXD daemon:
root # lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
root #
If you are able to do this, you have successfully set up the core parts of LXD! Note that we used the command lxc and not lxd as we did for lxd init -- from this point forward, you will use the lxc command. Don't let this confuse you -- the lxc command is the primary command-line tool for working with LXD containers.
Above, you can see that no images are installed. Images are installable snapshots of containers that we can use to create new containers ourselves. So, as a first step, let's go ahead and grab an image we can use. You will want to browse https://build.funtoo.org for an LXD image that will work on your computer hardware. For example, I was able to download the following file using wget:
root # wget https://build.funtoo.org/1.3-release-std/x86-64bit/intel64-skylake/lxd-intel64-skylake-1.3-release-std-2019-06-11.tar.xz
Once downloaded, this image can be installed using the following command:
root # lxc image import lxd-intel64-skylake-1.3-release-std-2019-06-11.tar.xz --alias funtoo
Image imported with fingerprint: fe4d27fb31bfaf3bd4f470e0ea43d26a6c05991de2a504b9e0a3b1a266dddc69
Now you will see the image available in our image list:
root # lxc image list
+--------+--------------+--------+--------------------------------------------+--------+----------+------------------------------+
| ALIAS  | FINGERPRINT  | PUBLIC | DESCRIPTION                                | ARCH   | SIZE     | UPLOAD DATE                  |
+--------+--------------+--------+--------------------------------------------+--------+----------+------------------------------+
| funtoo | fe4d27fb31bf | no     | 1.3 Release Skylake 64bit [std] 2019-06-14 | x86_64 | 279.35MB | Jun 15, 2019 at 3:09am (UTC) |
+--------+--------------+--------+--------------------------------------------+--------+----------+------------------------------+
root #
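The funtoo alias can now be used anywhere an image name is expected. If you ever want to remove an image, you can delete it by alias or fingerprint (a sketch; this does not affect containers already created from it):
root # lxc image delete funtoo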
First Container
It is now time to launch our first container. This can be done as follows:
root # lxc launch funtoo testcontainer
Creating testcontainer
Starting testcontainer
We can now see the container running via lxc list:
root # lxc list
+---------------+---------+------+-----------------------------------------------+------------+-----------+
| NAME          | STATE   | IPV4 | IPV6                                          | TYPE       | SNAPSHOTS |
+---------------+---------+------+-----------------------------------------------+------------+-----------+
| testcontainer | RUNNING |      | fd42:8063:81cb:988c:216:3eff:fe2a:f901 (eth0) | PERSISTENT |           |
+---------------+---------+------+-----------------------------------------------+------------+-----------+
root #
By default, our new container testcontainer will use the default profile, which connects an eth0 interface in the container to NAT and uses our directory-based LXD storage pool; you can inspect this profile as shown below.
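If you're curious what the default profile actually contains, you can print it. The output below is a sketch of what a fresh lxd init typically produces; exact contents may differ on your system:
root # lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
We can now enter the container as follows: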
root # lxc exec testcontainer -- su --login
testcontainer #
As you might have noticed, we do not yet have any IPv4 networking configured. While LXD has set up a bridge and NAT for us, along with a DHCP server to query, we actually need to use dhcpcd to query for an IP address, so let's get that set up:
testcontainer # echo "template=dhcpcd" > /etc/conf.d/netif.eth0
testcontainer # cd /etc/init.d
testcontainer # ln -s netif.tmpl netif.eth0
testcontainer # rc-update add netif.eth0 default
 * service netif.eth0 added to runlevel default
testcontainer # rc
 * rc is deprecated, please use openrc instead.
 * Caching service dependencies ... [ ok ]
 * Starting DHCP Client Daemon ... [ ok ]
 * Network dhcpcd eth0 up ... [ ok ]
testcontainer #
You can now see that eth0 has a valid IPv4 address:
testcontainer # ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.212.194.17  netmask 255.255.255.0  broadcast 10.212.194.255
        inet6 fd42:8063:81cb:988c:25ea:b5bd:603d:8b0d  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::216:3eff:fe2a:f901  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:2a:f9:01  txqueuelen 1000  (Ethernet)
        RX packets 45  bytes 5385 (5.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 2232 (2.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
What happened is that LXD set up a DHCP server (dnsmasq) for us, running on our private container network, which automatically offers IP addresses to our containers. It also configured iptables to NAT the connection, so that outbound Internet access should magically work.
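If you want to peek under the hood, both pieces are visible from the host: dnsmasq runs as an ordinary process, and the masquerade rule appears in the iptables nat table (a sketch; rule details and comments vary by LXD version):
root # ps ax | grep dnsmasq
root # iptables -t nat -S | grep lxdbr0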
You should also be able to see this IPv4 address listed in the container list when you type lxc list on your host system.
Network Troubleshooting
Note that if you are having issues with your container getting an IPv4 address via DHCP, make sure that you turn IPv6 off in LXD. Do this by running:
root # lxc network edit lxdbr0
Then, change ipv6.nat to "false" and restart LXD and the container:
root # /etc/init.d/lxd restart
root # lxc restart testcontainer
This should resolve the issue.
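Alternatively, the same change can be made without an interactive editor by setting the key directly; a sketch using lxc network set (disabling the bridge's IPv6 address entirely works the same way):
root # lxc network set lxdbr0 ipv6.nat false
root # lxc network set lxdbr0 ipv6.address none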
Time to have some fun!
testcontainer # ego sync
PART X - LXD in LXD
PART Y - Docker in LXD
PART Z - LXD FAQ
List of tested and working images
These are images from the https://images.linuxcontainers.org repository, available by default in LXD. You can list all available images by typing the following command (beware: the list is very long):
root # lxc image list images:
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| ALIAS                           | FINGERPRINT  | PUBLIC | DESCRIPTION                              | ARCH    | SIZE     | UPLOAD DATE                   |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| alpine/3.3 (3 more)             | ef69c8dc37f6 | yes    | Alpine 3.3 amd64 (20171018_17:50)        | x86_64  | 2.00MB   | Oct 18, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| alpine/3.3/armhf (1 more)       | 5ce4c80edcf3 | yes    | Alpine 3.3 armhf (20170103_17:50)        | armv7l  | 1.53MB   | Jan 3, 2017 at 12:00am (UTC)  |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| alpine/3.3/i386 (1 more)        | cd1700cb7c97 | yes    | Alpine 3.3 i386 (20171018_17:50)         | i686    | 1.84MB   | Oct 18, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| alpine/3.4 (3 more)             | bd4f1ccfabb5 | yes    | Alpine 3.4 amd64 (20171018_17:50)        | x86_64  | 2.04MB   | Oct 18, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| alpine/3.4/armhf (1 more)       | 9fe7c201924c | yes    | Alpine 3.4 armhf (20170111_20:27)        | armv7l  | 1.58MB   | Jan 11, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| alpine/3.4/i386 (1 more)        | 188a31315773 | yes    | Alpine 3.4 i386 (20171018_17:50)         | i686    | 1.88MB   | Oct 18, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| alpine/3.5 (3 more)             | 63bebc672163 | yes    | Alpine 3.5 amd64 (20171018_17:50)        | x86_64  | 1.70MB   | Oct 18, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| alpine/3.5/i386 (1 more)        | 48045e297515 | yes    | Alpine 3.5 i386 (20171018_17:50)         | i686    | 1.73MB   | Oct 18, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
...
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
|                                 | fd95a7a754a0 | yes    | Alpine 3.5 amd64 (20171016_17:50)        | x86_64  | 1.70MB   | Oct 16, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
|                                 | fef66668f5a2 | yes    | Debian stretch arm64 (20171016_22:42)    | aarch64 | 96.56MB  | Oct 16, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
|                                 | ff18aa2c11d7 | yes    | Opensuse 42.3 amd64 (20171017_00:53)     | x86_64  | 58.92MB  | Oct 17, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
|                                 | ff4ef0d824b6 | yes    | Ubuntu zesty s390x (20171017_03:49)      | s390x   | 86.88MB  | Oct 17, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
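Rather than paging through the full list, you can pass a filter to narrow it down; for example, to show only the Alpine images (a sketch; the filter matches against aliases and image properties):
root # lxc image list images: alpine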
These are the images that are known to work with the current LXD setup on Funtoo Linux:
Image | Init | Status |
---|---|---|
CentOS 7 | systemd | Working |
Debian Jessie (8) - EOL April/May 2020 | systemd | Working (systemd - no failed units) |
Debian Stretch (9) - EOL June 2022 | systemd | Working |
Fedora 26 | systemd with cgroup v2 | Not Working |
Fedora 25 | systemd | Working |
Fedora 24 | systemd | Working |
Oracle 7 | systemd | Working (systemd - no failed units) |
OpenSUSE 42.2 | systemd | Working |
OpenSUSE 42.3 | systemd | Working |
Ubuntu Xenial (16.04 LTS) - EOL 2021-04 | systemd | Working |
Ubuntu Zesty (17.04) - EOL 2018-01 | systemd | Working |
Alpine 3.3 | OpenRC | Working |
Alpine 3.4 | OpenRC | Working |
Alpine 3.5 | OpenRC | Working |
Alpine 3.6 | OpenRC | Working |
Alpine Edge | OpenRC | Working |
Archlinux | systemd with cgroup v2 | Not Working |
CentOS 6 | upstart | Working (systemd - no failed units) |
Debian Buster | systemd with cgroup v2 | Not Working |
Debian Sid | systemd with cgroup v2 | Not Working |
Debian Wheezy (7) - EOL May 2018 | ? | ? (more testing needed) |
Gentoo | OpenRC | Working (all services started) |
Oracle 6 | upstart | ? (mount outputs nothing) |
Plamo 5 | ? | ? |
Plamo 6 | ? | ? |
Sabayon | systemd with cgroup v2 | Not Working |
Ubuntu Artful (17.10) - EOL 2018-07 | systemd with cgroup v2 | Not Working |
Ubuntu Core 16 | ? | ? |
Ubuntu Trusty (14.04 LTS) - EOL 2019-04 | upstart | Working |
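Any image marked Working above can be launched directly from the images: remote using its alias. For example, to try an Alpine 3.6 container (a sketch; the container name alpinetest is arbitrary):
root # lxc launch images:alpine/3.6 alpinetest
root # lxc exec alpinetest -- /bin/sh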