LXD/Container Migration

The basic idea is to take a snapshot of the container before starting an upgrade or migration. But to get databases into a sane state, I do the following:

  1. stop the container
  2. take a snapshot
  3. optional: start the container again so it is "up" (you can't do this for most containers, i.e. ones you expect to change, such as bugs, code, forums, maybe www; but for read-only services it is a trick to keep the container "up" during the migration)
  4. create an image from the snapshot using "lxc publish --compression pigz containername/2022-10-17 --alias containername"
  5. with the two lxd hosts linked (see the sketch after this list), you can then launch the image on the other host. Before it starts, stop the original running container from step #3 (if you did that) to avoid an IP conflict.
  6. now ensure the new container is working as expected.
  7. delete the old container on the original host (this will also delete its snapshots), or you can delete the new container and try again if there is a problem.
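
Step 5 assumes the two lxd hosts are already linked as remotes. A minimal sketch of one way to set that up follows; the address orighost.example.com is a placeholder, and the remote name orighost matches what is used in the commands below. On the origin host, the LXD API must be listening on the network (if it is not already):

root # lxc config set core.https_address "[::]:8443"

Then, on the second lxd server:

root # lxc remote add orighost orighost.example.com
root # lxc remote list

lxc remote add will prompt you to accept the origin host's certificate fingerprint and to authenticate (with a trust token or trust password, depending on your LXD version).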

This might seem like a pretty complicated method, but I have found it works well with lxd for a few technical reasons:

  1. First, the snapshot/publish approach works around an issue: "lxc move/copy" expects the EXACT same profiles to exist on both lxd hosts, but our profile names are often different. lxd will often get "stuck" and "lxc move/copy" will not work for this reason. By creating an image, we free ourselves from this problem. We re-apply the correct profiles for the new host ourselves using the -p option (see the sketch after this list), and make any other necessary config changes if required.
  2. Second, pigz compression (lzo could also be used) dramatically speeds up creation of the container image, since the default is single-threaded gzip.
  3. Third, we have a snapshot to fall back on if there is a problem, and we can even go back to the original container if there is a major problem with our snapshot (since it is just sitting stopped on the origin host).
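
Since the profile names usually differ between hosts, it helps to compare them on both sides before launching, so you know which -p options to pass. A quick sketch (the profile names used later, local_correct_profile1 and local_correct_profile2, are just placeholders for whatever actually exists on the new host):

root # lxc profile list
root # lxc profile show default

Run these on both lxd servers and pick the profiles on the new host that provide the equivalent network and storage devices.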

Here are relevant commands:

Origin host

root # lxc stop foo
root # lxc snapshot foo 2022-10-26
root # lxc start foo
root # lxc publish foo/2022-10-26 --compression pigz --alias foo # <-- image will also be named "foo"
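
Before moving on, you can confirm the image was actually created and note its fingerprint and size (the alias foo matches what was passed to --alias above):

root # lxc image list foo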

On second lxd server "linked" to first

root # lxc launch orighost:foo foo -p default -p local_correct_profile1 -p local_correct_profile2

Back on the first lxd server, before the command above actually starts the new container (creating and transferring the image from the remote can take some time), stop the original container to avoid an IP conflict:

Origin host

root # lxc stop foo
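
Before deleting anything, confirm that the new container came up properly (step 6 above). Exactly what to check depends on the service, but a minimal sketch, assuming ps is available inside the container:

On second lxd server

root # lxc list foo
root # lxc exec foo -- ps aux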

Back on the origin host, if the new container is OK, delete the original container:

root # lxc delete foo

On second lxd server "linked" to first

Once the new container is confirmed OK, the published image is no longer needed and can be removed:

root # lxc image delete foo

Properly remove unnecessary snapshots

Only do this if there is a 1:1 mapping between the image snapshot and the container filesystem (no other containers are cloned from the same image), and the image's dataset path contains "deleted" (meaning LXD has already deleted the image and is only keeping the dataset around as the container's clone origin).
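
If you are not sure of the exact dataset name for your container, something like the following can locate it (the pool name community and the lxd-containers layout are specific to this host; yours will differ):

root # zfs list | grep foo

Then check the container dataset's origin and space usage: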

root # zfs list -o name,origin,used community/lxd-containers/containers/foo
NAME                                     ORIGIN                                                                                                             USED
community/lxd-containers/containers/foo  community/lxd-containers/deleted/images/192b1692fb676ef478d01011a0f02052ca4bc9b196ba63772f6fd729234a736f@readonly  58.8M

Use zfs promote to make the container filesystem independent of the deleted image:

root # zfs promote community/lxd-containers/containers/foo

Check again:

root # zfs list -o name,origin,used community/lxd-containers/containers/foo
NAME                                        ORIGIN   USED
community/lxd-containers/containers/foo     -       26.2G
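
After the promote, the container filesystem no longer depends on the deleted image: the image's @readonly snapshot is reparented onto the container dataset, and the old image dataset (if it still exists) now depends on the container instead. You can verify where the snapshot ended up with something like:

root # zfs list -t snapshot -r community/lxd-containers/containers/foo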