LXD
LXD is a container "hypervisor" that aims to provide users with a new and fresh experience using LXC technology.
LXD consists of three components:
- A system-wide daemon (lxd)
- A command line client (lxc)
- An OpenStack Nova plugin (nova-compute-lxd)
The lxd daemon provides a REST API that is accessible locally and, if enabled, over the network.
The command line tool is designed to be a very simple, yet very powerful tool to manage all your containers. It can handle connections to multiple container hosts and easily give you an overview of all the containers on your network, let you create some more where you want them and even move them around while they're running.
The OpenStack plugin then allows you to use your lxd hosts as compute nodes, running workloads on containers rather than virtual machines.
The LXD project was founded and is currently led by Canonical Ltd and Ubuntu with contributions from a range of other companies and individual contributors.
Features
Some of the biggest features of LXD are:
- Secure by design (unprivileged containers, resource restrictions and much more)
- Scalable (from containers on your laptop to thousands of compute nodes)
- Intuitive (simple, clear API and crisp command line experience)
- Image based (no more distribution templates, only good, trusted images)
- Live migration
Unprivileged Containers
LXD uses unprivileged containers by default. The difference between an unprivileged container and a privileged one is whether the root user in the container is the “real” root user (uid 0 at the kernel level).
The way unprivileged containers are created is by taking a set of normal UIDs and GIDs from the host, usually at least 65536 of each (to be POSIX compliant) and mapping those into the container.
The most common example and what most LXD users will end up with by default is a map of 65536 UIDs and GIDs, with a host base id of 100000. This means that root in the container (uid 0) will be mapped to the host uid 100000 and uid 65535 in the container will be mapped to uid 165535 on the host. UID/GID 65536 and higher in the container aren’t mapped and will return an error if you attempt to use them.
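For reference, that default map corresponds to subordinate id entries on the host like the following (the format is user:first-id:count; the base id can differ between systems):
/etc/subuid
root:100000:65536
/etc/subgid
root:100000:65536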
From a security point of view, that means that anything which is not owned by the users and groups mapped into the container will be inaccessible. Any such resource will show up as being owned by uid/gid “-1” (rendered as 65534 or nobody/nogroup in userspace). It also means that should there be a way to escape the container, even root in the container would find itself with no more privileges on the host than a nobody user.
LXD does offer a number of options related to unprivileged configuration:
- Increasing the size of the default uid/gid map
- Setting up per-container maps
- Punching holes into the map to expose host users and groups
Relationship with LXC
LXD isn't a rewrite of LXC, in fact it's building on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers.
It's basically an alternative to LXC's tools and distribution template system with the added features that come from being controllable over the network.
Licensing
LXD is free software and is developed under the Apache 2 license.
Installing LXD in Funtoo
Kernel prerequisites
These options should be disabled in your kernel to use all of the functions of LXD:
GRKERNSEC_CHROOT_CAPS
GRKERNSEC_CHROOT_CHMOD
GRKERNSEC_CHROOT_DOUBLE
GRKERNSEC_CHROOT_MOUNT
GRKERNSEC_CHROOT_PIVOT
GRKERNSEC_PROC
GRKERNSEC_SYSFS_RESTRICT
NETPRIO_CGROUP
These options should be enabled in your kernel to use all of the functions of LXD:
BRIDGE
CGROUP_CPUACCT
CGROUP_DEVICE
CGROUP_FREEZER
CGROUP_SCHED
CGROUPS
CHECKPOINT_RESTORE
CPUSETS
DUMMY
EPOLL
EVENTFD
FHANDLE
IA32_EMULATION
INET_DIAG
INET_TCP_DIAG
INET_UDP_DIAG
INOTIFY_USER
IP_NF_NAT
IP_NF_TARGET_MASQUERADE
IP6_NF_NAT
IP6_NF_TARGET_MASQUERADE
IPC_NS
IPV6
MACVLAN
NAMESPACES
NET_IPGRE
NET_IPGRE_DEMUX
NET_IPIP
NET_NS
NETFILTER_XT_MATCH_COMMENT
NETLINK_DIAG
NF_NAT_MASQUERADE_IPV4
NF_NAT_MASQUERADE_IPV6
PACKET_DIAG
PID_NS
POSIX_MQUEUE
UNIX_DIAG
USER_NS
UTS_NS
VETH
VXLAN
Funtoo's default kernel (sys-kernel/debian-sources – v. 4.11.11 at the time of writing) has all these options enabled.
On older kernels DEVPTS_MULTIPLE_INSTANCES is needed too (as of kernel version 4.11.11 the option no longer exists).
The LXC package comes with a utility to check all needed config options:
root # CONFIG=/path/to/config /usr/bin/lxc-checkconfig
You can also use this code to compare your config settings with the ones needed. Put the required config options in a kernel-req.txt file and run the script.
kerncheck.py
(python source code) - check kernel options
import gzip

REQF = "kernel-req.txt"  # copy kernel options requirements into this file

REQS = set()
CFGS = set()

# Read the required options, prefixing each with CONFIG_
with open(REQF) as f:
    for line in f:
        REQS.add("CONFIG_%s" % line.strip())

# Read the running kernel's config and collect every enabled option
with gzip.open("/proc/config.gz") as f:
    for line in f:
        line = line.decode().strip()
        if not line or line.startswith("#"):
            continue
        try:
            opt, val = line.split("=", 1)
        except ValueError:
            continue
        if val == "n":
            continue
        CFGS.add(opt)

print("Enabled config options:")
print(CFGS & REQS)
print("Missing config options:")
print(REQS - CFGS)
Installing LXD
Installing LXD is pretty straightforward, as the ebuild exists in our portage tree. I would recommend putting /var on btrfs or zfs (or at least /var/lib/lxd), as LXD can take advantage of these COW filesystems. LXD doesn’t need any configuration to use btrfs; you just need to make sure that /var/lib/lxd is stored on a btrfs filesystem and LXD will automatically make use of it for you. You can use any other filesystem, but be advised that LXD can take great advantage of btrfs or ZFS, be it for snapshots, clones, quotas and more. If you want to test it on your current filesystem, consider creating a loop device that you format with btrfs and use as your /var/lib/lxd device.
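If you want to try the loop device approach, a minimal sketch looks like this (the image path and 20G size are arbitrary examples; do this before initializing LXD):
root # truncate -s 20G /var/lib/lxd.img
root # mkfs.btrfs /var/lib/lxd.img
root # mkdir -p /var/lib/lxd
root # mount -o loop /var/lib/lxd.img /var/lib/lxd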
There are couple of major versions of LXD/LXC.
- LXC
- LXC 1.0 (LXC upstream strongly recommends 1.0 users to upgrade to the 2.0 LTS release. Not supported by Funtoo.)
- LXC 2.0 LTS (supported until June 2021) - latest version 2.0.9
- LXC 2.1 (supported for a year from release announcement on 5th of September 2017 - so until September 2018) - latest version 2.1.1
- LXD
- LXD 2.0 LTS (supported until June 2021) - latest 2.0.11
- LXD 2.x - latest 2.20
- LXCFS
- LXCFS 2.0 LTS (supported until June 2021) - latest 2.0.8
Install LXD by running:
root # emerge -av lxd
First setup of LXD/Initialisation
Before using LXD for the first time as a user, you should initialize your LXD environment. As stated earlier btrfs (or zfs) is recommended as your storage filesystem.
root # service lxd start
 * Starting lxd server ...
root # lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]? yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (dir, btrfs, lvm) [default=dir]: btrfs
Create a new BTRFS pool (yes/no) [default=yes]? yes
Would you like to use an existing block device (yes/no) [default=no]? no
Would you like to create a new subvolume for the BTRFS storage pool (yes/no) [default=yes]: yes
Would you like LXD to be available over the network (yes/no) [default=no]? no
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? no
Would you like to create a new network bridge (yes/no) [default=yes]? yes
What should the new bridge be called [default=lxdbr0]? lxdbr0
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? auto
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? auto
LXD has been successfully configured.
This creates btrfs subvolumes like this:
user $ btrfs sub list .
ID 260 gen 1047 top level 5 path rootfs
ID 280 gen 1046 top level 260 path var/lib/lxd/storage-pools/default
ID 281 gen 1043 top level 280 path var/lib/lxd/storage-pools/default/containers
ID 282 gen 1044 top level 280 path var/lib/lxd/storage-pools/default/snapshots
ID 283 gen 1045 top level 280 path var/lib/lxd/storage-pools/default/images
ID 284 gen 1046 top level 280 path var/lib/lxd/storage-pools/default/custom
It also creates a new network interface for you:
user $ ip a list dev lxdbr0
8: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether d2:9b:70:f2:8f:6f brd ff:ff:ff:ff:ff:ff
    inet 10.250.237.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet 169.254.59.23/16 brd 169.254.255.255 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:efd8:662e:3184::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::caf5:b7ed:445e:b112/64 scope link
       valid_lft forever preferred_lft forever
And last but not least it also generates iptables rules for you:
user $ iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps /* generated for LXD network lxdbr0 */

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             /* generated for LXD network lxdbr0 */
ACCEPT     all  --  anywhere             anywhere             /* generated for LXD network lxdbr0 */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp spt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp spt:bootps /* generated for LXD network lxdbr0 */

user $ iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  10.250.237.0/24     !10.250.237.0/24      /* generated for LXD network lxdbr0 */

user $ iptables -L -t mangle
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
CHECKSUM   udp  --  anywhere             anywhere             udp dpt:bootpc /* generated for LXD network lxdbr0 */ CHECKSUM fill
Some other things done by the initialization and starting of the LXD daemon are:
- dnsmasq listening on lxdbr0
- ...
Finishing up the setup of LXD
There are still some things that you need to do manually. We need to set up subuid and subgid ranges for our containers to use. For running non-systemd containers we will also need app-admin/cgmanager, so emerge and start it now.
root # rc-update add lxd default
root # rc-update add lxcfs default
root # usermod --add-subuids 100000-165535 root
root # usermod --add-subgids 100000-165535 root
root # service lxd restart
root # rc
LXD restart is needed to inform the daemon of the uid/gid changes.
Containers, snapshots and images
Containers in LXD are made of:
- A filesystem (rootfs)
- A list of configuration options, including resource limits, environment, security options and more
- A bunch of devices like disks, character/block unix devices and network interfaces
- A set of profiles the container inherits configuration from (see below)
- Some properties (container architecture, ephemeral or persistent and the name)
- Some runtime state (when using CRIU for checkpoint/restore)
Container snapshots are, as the name states, snapshots of the container at a point in time, and they cannot be modified in any way. It is worth noting that snapshots can also store the container runtime state, which gives us the ability to take “stateful” snapshots. That is, the ability to roll back the container including its CPU and memory state at the time of the snapshot.
LXD is image based, all LXD containers come from an image. Images are typically clean Linux distribution images similar to what you would use for a virtual machine or cloud instance. It is possible to “publish” a container, making an image from it which can then be used by the local or remote LXD hosts.
Our first image
Let's get our hands even more dirty and create our first image. We will be using a generic 64 bit Funtoo Linux image.
Funtoo's default build host doesn't build LXD stages yet.
Grab the image here (or pick the subarch that you want): http://build.liguros.net/funtoo-current/x86-64bit/generic_64/lxd-latest.tar.xz
Grab also the hash file: http://build.liguros.net/funtoo-current/x86-64bit/generic_64/lxd-latest.tar.xz.hash.txt
Check the hash of the downloaded file against the one from server. Proceed if they match.
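For example, assuming the hash file contains a SHA-256 checksum, compare the two values:
user $ sha256sum lxd-latest.tar.xz
user $ cat lxd-latest.tar.xz.hash.txt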
Import the image
After we have successfully downloaded the archive we can now finally import it into LXD and start using it as our "seed" image for all our containers.
root # lxc image import lxd-latest.tar.xz --alias funtoo
Image imported with fingerprint: 6c2ca3af0222d503656f5a1838885f1b9b6aed2c1994f1d7ef94e2efcb7233c4
root # lxc image ls
+--------+--------------+--------+------------------------------------+--------+----------+-------------------------------+
| ALIAS  | FINGERPRINT  | PUBLIC | DESCRIPTION                        | ARCH   | SIZE     | UPLOAD DATE                   |
+--------+--------------+--------+------------------------------------+--------+----------+-------------------------------+
| funtoo | 6c2ca3af0222 | no     | Funtoo Current Generic Pure 64-bit | x86_64 | 227.99MB | Dec 13, 2017 at 11:01pm (UTC) |
+--------+--------------+--------+------------------------------------+--------+----------+-------------------------------+
And there we have our very first Funtoo Linux image imported into LXD. You can reference the image through the alias or through the fingerprint. Aliases can also be added later.
Let me show you some basic usage then.
Creating your first container
So now we can launch our first container. That is done using this command:
root # lxc launch funtoo fun-1
Creating fun-1
Starting fun-1
root # lxc ls
+-------+---------+------+-----------------------------------------------+------------+-----------+
| NAME  | STATE   | IPV4 | IPV6                                          | TYPE       | SNAPSHOTS |
+-------+---------+------+-----------------------------------------------+------------+-----------+
| fun-1 | RUNNING |      | fd42:156d:4593:a619:216:3eff:fef7:c1c2 (eth0) | PERSISTENT | 0         |
+-------+---------+------+-----------------------------------------------+------------+-----------+
lxc launch is a shortcut for lxc init plus lxc start; lxc init creates the container without starting it.
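For example, the equivalent two-step form (the container name fun-2 is an arbitrary example):
root # lxc init funtoo fun-2
root # lxc start fun-2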
Profiles intermezzo
LXD has the ability to change quite a few container settings, including resource limits, control of container startup and a variety of device pass-through options, using what are called profiles. Multiple profiles can be applied to a single container, and the last profile overrides the previous ones if the same resource is configured in multiple profiles. Let me show you how this can be used.
This is the default profile that gets inherited by all containers.
root # lxc profile list
+---------+---------+
| NAME    | USED BY |
+---------+---------+
| default | 1       |
+---------+---------+
root # lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/containers/fun-1
Now let's edit this profile for our funtoo containers. It will include some useful stuff.
root # lxc profile set default raw.lxc "lxc.mount.entry = none dev/shm tmpfs rw,nosuid,nodev,create=dir"
root # lxc profile set default environment.LANG "en_US.UTF-8"
root # lxc profile set default environment.LC_ALL "en_US.UTF-8"
root # lxc profile set default environment.LC_COLLATE "POSIX"
Profiles can store any configuration that a container can (key/value or devices) and any number of profiles can be applied to a container. Profiles are applied in the order they are specified so the last profile to specify a specific key wins. In any case, resource-specific configuration always overrides that coming from the profiles.
The default profile is set for any new container created which doesn't specify a different profiles list.
Using our first container
After we have done all these customizations we can now start using our container. The next command will give us a shell inside the container.
root # lxc exec fun-1 bash
Now you should see a different prompt starting with
fun-1 ~ #
If we run top or ps for example we will see only the processes of the container.
fun-1 ~ # ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0   4248   748 ?        Ss+  13:20   0:00 init [3]
root       266  0.0  0.0  30488   472 ?        Ss   13:20   0:00 /usr/sbin/sshd
root       312  0.2  0.0  17996  3416 ?        Ss   13:29   0:00 bash
root       317  0.0  0.0  19200  2260 ?        R+   13:29   0:00 ps aux
As you can see, only the container's processes are shown. The user running the processes is root here. What happens if we search for all sshd processes on the host box, for example?
root # ps aux|grep ssh
root     14505  0.0  0.0  30564  1508 ?        Ss   Sep07   0:00 /usr/sbin/sshd
100000   25863  0.0  0.0  30488   472 ?        Ss   15:20   0:00 /usr/sbin/sshd
root     29487  0.0  0.0   8324   828 pts/2    S+   15:30   0:00 grep --colour=auto sshd
So as you can see, the sshd process is running under the user with uid 100000 on the host machine and has a different PID.
Basic actions with containers
Listing containers
root # lxc ls
+-------+---------+-----------------------+------------------------------------------------+------------+-----------+
| NAME  | STATE   | IPV4                  | IPV6                                           | TYPE       | SNAPSHOTS |
+-------+---------+-----------------------+------------------------------------------------+------------+-----------+
| fun-1 | RUNNING | 10.214.101.187 (eth0) | fd42:156d:4593:a619:a5ad:edaf:7270:e6c4 (eth0) | PERSISTENT | 0         |
|       |         |                       | fd42:156d:4593:a619:216:3eff:fef7:c1c2 (eth0)  |            |           |
+-------+---------+-----------------------+------------------------------------------------+------------+-----------+
Container details
root # lxc info c1
Name: c1
Remote: unix://
Architecture: x86_64
Created: 2017/09/08 02:07 UTC
Status: Running
Type: persistent
Profiles: default, prf-funtoo
Pid: 6366
Ips:
  eth0: inet    10.214.101.79   vethFG4HXG
  eth0: inet6   fd42:156d:4593:a619:8619:546e:43f:2089  vethFG4HXG
  eth0: inet6   fd42:156d:4593:a619:216:3eff:fe4a:3d4f  vethFG4HXG
  eth0: inet6   fe80::216:3eff:fe4a:3d4f        vethFG4HXG
  lo:   inet    127.0.0.1
  lo:   inet6   ::1
Resources:
  Processes: 6
  CPU usage:
    CPU usage (in seconds): 25
  Memory usage:
    Memory (current): 69.01MB
    Memory (peak): 258.92MB
  Network usage:
    eth0:
      Bytes received: 83.65kB
      Bytes sent: 9.44kB
      Packets received: 188
      Packets sent: 93
    lo:
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0
Container configuration
root # lxc config edit c1

### This is a yaml representation of the configuration.
### Any line starting with a '# will be ignored.
###
### A sample configuration looks like:
### name: container1
### profiles:
### - default
### config:
###   volatile.eth0.hwaddr: 00:16:3e:e9:f8:7f
### devices:
###   homedir:
###     path: /extra
###     source: /home/user
###     type: disk
### ephemeral: false
###
### Note that the name is shown but cannot be changed

architecture: x86_64
config:
  image.architecture: x86_64
  image.description: Funtoo Current Generic Pure 64-bit
  image.name: funtoo-generic_64-pure64-funtoo-current-2016-12-10
  image.os: funtoo
  image.release: "1.0"
  image.variant: current
  volatile.base_image: e279c16d1a801b2bd1698df95e148e0a968846835f4769b24988f2eb3700100f
  volatile.eth0.hwaddr: 00:16:3e:4a:3d:4f
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices: {}
ephemeral: false
profiles:
- default
- prf-funtoo
stateful: false
description: ""
One can also add environment variables.
root # lxc config set <container> environment.LANG en_US.UTF-8
root # lxc config set <container> environment.LC_COLLATE POSIX
Managing files
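Files can be pushed to and pulled from containers directly with the lxc file subcommands. A minimal sketch (the file paths here are arbitrary examples):
root # lxc file push /etc/resolv.conf fun-1/etc/resolv.conf
root # lxc file pull fun-1/etc/hostname .
root # lxc file edit fun-1/etc/hosts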
Snapshots
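Snapshots are taken with lxc snapshot and rolled back with lxc restore; lxc delete removes them. A minimal sketch (the snapshot name snap0 is arbitrary; adding --stateful to lxc snapshot also saves the runtime state and requires CRIU):
root # lxc snapshot fun-1 snap0
root # lxc restore fun-1 snap0
root # lxc delete fun-1/snap0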
Cloning, copying and moving containers
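Containers are duplicated with lxc copy and renamed or relocated with lxc move. A minimal sketch (names are arbitrary; renaming in place requires the container to be stopped):
root # lxc copy fun-1 fun-2
root # lxc stop fun-1
root # lxc move fun-1 fun-old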
Resource control
LXD offers a variety of resource limits. Some of those are tied to the container itself, like memory quotas, CPU limits and I/O priorities. Some are tied to a particular device instead, like I/O bandwidth or disk usage limits.
As with all LXD configuration, resource limits can be dynamically changed while the container is running. Some may fail to apply, for example if setting a memory value smaller than the current memory usage, but LXD will try anyway and report back on failure.
All limits can also be inherited through profiles in which case each affected container will be constrained by that limit. That is, if you set limits.memory=256MB in the default profile, every container using the default profile (typically all of them) will have a memory limit of 256MB.
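For example, to set that limit on the default profile:
root # lxc profile set default limits.memory 256MB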
Disk
You can set a size limit on the container’s filesystem and have it enforced against the container. Right now LXD only supports disk limits if you’re using the ZFS or btrfs storage backend.
To set a disk limit (requires btrfs or ZFS):
root # lxc config device set c1 root size 20GB
CPU
To just limit a container to any 2 CPUs, do:
root # lxc config set c1 limits.cpu 2
To pin to specific CPU cores, say the second and fourth:
root # lxc config set c1 limits.cpu 1,3
More complex pinning ranges like this works too:
root # lxc config set c1 limits.cpu 0-3,7-11
Memory
To apply a straightforward memory limit run:
root # lxc config set c1 limits.memory 256MB
(The supported suffixes are kB, MB, GB, TB, PB and EB)
To turn swap off for the container (defaults to enabled):
root # lxc config set c1 limits.memory.swap false
To tell the kernel to swap this container’s memory first:
root # lxc config set c1 limits.memory.swap.priority 0
And finally if you don’t want hard memory limit enforcement:
root # lxc config set c1 limits.memory.enforce soft
Network
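Network limits are set on the nic device rather than on the container itself. A minimal sketch following the upstream LXD 2.x resource control conventions (the values are examples; here the limits go on the eth0 device of the default profile):
root # lxc profile device set default eth0 limits.ingress 100Mbit
root # lxc profile device set default eth0 limits.egress 100Mbit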
Block I/O
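Block I/O limits attach to the disk device in the same way. A minimal sketch, again following the upstream LXD 2.x conventions (the values are examples):
root # lxc config device set c1 root limits.read 30MB
root # lxc config device set c1 root limits.write 10MB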
Resource limits using profile - Funtoo Containers example
So I am going to create 3 profiles to mimic the resource limits for current Funtoo Containers.
Price | RAM | CPU Threads | Disk Space | Sign Up |
---|---|---|---|---|
$15/mo | 4GB | 6 CPU Threads | 50GB | Sign Up! (small) |
$30/mo | 12GB | 12 CPU Threads | 100GB | Sign Up! (medium) |
$45/mo | 48GB | 24 CPU Threads | 200GB | Sign Up! (large) |
I am going to create one profile and copy/edit it for the remaining two options.
root # lxc profile create res-small
root # lxc profile edit res-small
config:
  limits.cpu: "6"
  limits.memory: 4GB
description: Small Variant of Funtoo Containers
devices:
  root:
    path: /
    pool: default
    size: 50GB
    type: disk
name: res-small
used_by: []
root # lxc profile copy res-small res-medium
root # lxc profile copy res-small res-large
root # lxc profile set res-medium limits.cpu 12
root # lxc profile set res-medium limits.memory 12GB
root # lxc profile device set res-medium root size 100GB
root # lxc profile set res-large limits.cpu 24
root # lxc profile set res-large limits.memory 48GB
root # lxc profile device set res-large root size 200GB
Now let's create a container and assign the res-small and funtoo profiles to it.
root # lxc init funtoo c-small root # lxc profile assign c-small res-small root # lxc profile add c-small funtoo
Image manipulations
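Images can be aliased, exported to a tarball, deleted, or created by publishing an existing (stopped) container. A minimal sketch (alias and name choices are arbitrary examples):
root # lxc image alias create funtoo-base 6c2ca3af0222
root # lxc image export funtoo .
root # lxc publish fun-1 --alias fun-1-image
root # lxc image delete funtoo-base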
Remote hosts
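The lxc client can manage several LXD hosts at once. A minimal sketch, assuming a second host at the hypothetical address 192.168.1.50 that you control. On the remote host, enable network access:
root # lxc config set core.https_address "[::]:8443"
root # lxc config set core.trust_password some-password
On the local host, add the remote and use it:
root # lxc remote add host2 192.168.1.50
root # lxc ls host2:
root # lxc launch funtoo host2:fun-remote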
Running systemd container on a non-systemd host
To use systemd in the container, a recent enough (>=4.6) kernel version with support for cgroup namespaces is needed. Additionally, the host needs to have a name=systemd cgroup hierarchy mounted:
root # mkdir -p /sys/fs/cgroup/systemd
root # mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd
Doing so does not require running systemd on the host; it only allows systemd to run correctly inside the container(s).
If you want to get the systemd hierarchy mounted automatically on system startup, using /etc/fstab will not work, but the cgconfig service from dev-libs/libcgroup can be used for this. First you need to edit /etc/cgroup/cgconfig.conf and add:
/etc/cgroup/cgconfig.conf
mount {
"name=systemd" = /sys/fs/cgroup/systemd;
}
Then you need to start the cgconfig daemon:
root # rc-service cgconfig start
The daemon can be started as needed, or automatically at system start by simply adding it to the default runlevel:
root # rc-update add cgconfig default
List of tested and working images
These are images from the https://images.linuxcontainers.org repository, available by default in LXD. You can list all available images by typing the following command (beware, the list is very long):
root # lxc image list images:
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| ALIAS                           | FINGERPRINT  | PUBLIC | DESCRIPTION                              | ARCH    | SIZE     | UPLOAD DATE                   |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
| alpine/3.3 (3 more)             | ef69c8dc37f6 | yes    | Alpine 3.3 amd64 (20171018_17:50)        | x86_64  | 2.00MB   | Oct 18, 2017 at 12:00am (UTC) |
| alpine/3.3/armhf (1 more)       | 5ce4c80edcf3 | yes    | Alpine 3.3 armhf (20170103_17:50)        | armv7l  | 1.53MB   | Jan 3, 2017 at 12:00am (UTC)  |
| alpine/3.3/i386 (1 more)        | cd1700cb7c97 | yes    | Alpine 3.3 i386 (20171018_17:50)         | i686    | 1.84MB   | Oct 18, 2017 at 12:00am (UTC) |
| alpine/3.4 (3 more)             | bd4f1ccfabb5 | yes    | Alpine 3.4 amd64 (20171018_17:50)        | x86_64  | 2.04MB   | Oct 18, 2017 at 12:00am (UTC) |
| alpine/3.4/armhf (1 more)       | 9fe7c201924c | yes    | Alpine 3.4 armhf (20170111_20:27)        | armv7l  | 1.58MB   | Jan 11, 2017 at 12:00am (UTC) |
| alpine/3.4/i386 (1 more)        | 188a31315773 | yes    | Alpine 3.4 i386 (20171018_17:50)         | i686    | 1.88MB   | Oct 18, 2017 at 12:00am (UTC) |
| alpine/3.5 (3 more)             | 63bebc672163 | yes    | Alpine 3.5 amd64 (20171018_17:50)        | x86_64  | 1.70MB   | Oct 18, 2017 at 12:00am (UTC) |
| alpine/3.5/i386 (1 more)        | 48045e297515 | yes    | Alpine 3.5 i386 (20171018_17:50)         | i686    | 1.73MB   | Oct 18, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
...
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
|                                 | fd95a7a754a0 | yes    | Alpine 3.5 amd64 (20171016_17:50)        | x86_64  | 1.70MB   | Oct 16, 2017 at 12:00am (UTC) |
|                                 | fef66668f5a2 | yes    | Debian stretch arm64 (20171016_22:42)    | aarch64 | 96.56MB  | Oct 16, 2017 at 12:00am (UTC) |
|                                 | ff18aa2c11d7 | yes    | Opensuse 42.3 amd64 (20171017_00:53)     | x86_64  | 58.92MB  | Oct 17, 2017 at 12:00am (UTC) |
|                                 | ff4ef0d824b6 | yes    | Ubuntu zesty s390x (20171017_03:49)      | s390x   | 86.88MB  | Oct 17, 2017 at 12:00am (UTC) |
+---------------------------------+--------------+--------+------------------------------------------+---------+----------+-------------------------------+
These are the images that are known to work with current LXD setup on Funtoo Linux:
Image | Init | Status |
---|---|---|
CentOS 7 | systemd | Working |
Debian Jessie (8) - EOL April/May 2020 | systemd | Working (systemd - no failed units) |
Debian Stretch (9) - EOL June 2022 | systemd | Working |
Fedora 26 | systemd with cgroup v2 | Not Working |
Fedora 25 | systemd | Working |
Fedora 24 | systemd | Working |
Oracle 7 | systemd | Working (systemd - no failed units) |
OpenSUSE 42.2 | systemd | Working |
OpenSUSE 42.3 | systemd | Working |
Ubuntu Xenial (16.04 LTS) - EOL 2021-04 | systemd | Working |
Ubuntu Zesty (17.04) - EOL 2018-01 | systemd | Working |
Alpine 3.3 | OpenRC | Working |
Alpine 3.4 | OpenRC | Working |
Alpine 3.5 | OpenRC | Working |
Alpine 3.6 | OpenRC | Working |
Alpine Edge | OpenRC | Working |
Archlinux | systemd with cgroup v2 | Not Working |
CentOS 6 | upstart | Working (systemd - no failed units) |
Debian Buster | systemd with cgroup v2 | Not Working |
Debian Sid | systemd with cgroup v2 | Not working |
Debian Wheezy (7) - EOL May 2018 | ? | ? (more testing needed) |
Gentoo | OpenRC | Working (all services started) |
Oracle 6 | upstart | ? (mount outputs nothing) |
Plamo 5 | ? | ? |
Plamo 6 | ? | ? |
Sabayon | systemd with cgroup v2 | Not Working |
Ubuntu Artful (17.10) - EOL 2018-07 | systemd with cgroup v2 | Not Working |
Ubuntu Core 16 | ? | ? |
Ubuntu Trusty (14.04 LTS) - EOL 2019-04 | upstart | Working |