LXD/Container Migration

The idea is to take a snapshot of the container before starting an upgrade. But to get databases into a sane state, I do the following:

  1. stop the container
  2. take a snapshot
  3. optional: start the container again so it is "up" (you can't do this for most containers -- the ones you expect to change, like bugs, code, forums, maybe www -- but for read-only services, it is a trick to keep the container "up" during a migration)
  4. create an image from the snapshot using "lxc publish --compression pigz containername/2022-10-17 --alias containername"
  5. with two lxd hosts linked, you can then launch the image on the other host -- before it starts, stop the original container from step #3 (if you restarted it) to avoid an IP conflict.
  6. now ensure the new container is working as expected.
  7. delete the old container on the original host (this will also delete its snapshots), or delete the new container and try again if there is a problem.

This might seem like a pretty complicated method, but I have found it works well with lxd for a few technical reasons:

  1. First, the image step works around an issue: "lxc move/copy" expects the EXACT same profiles to exist on both lxd hosts, but our profile names are often different. lxd will often get "stuck" and "lxc move/copy" will not work for this reason. By creating an image, we free ourselves from this problem. We re-apply the correct profiles for the new host ourselves using the -p option (see the sketch after this list), and make any other necessary config changes if required.
  2. Second, pigz compression (lzo could also be used) dramatically speeds up creation of the container image (single-threaded gz is the default).
  3. Third, we have a snapshot to fall back on if there is a problem, and we can even go back to the original container if there is a major problem with our snapshot (since it is just stopped on the origin host).
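
Before launching on the new host, it can help to compare the profiles available on each side, so you know which -p options to pass. A minimal sketch, assuming the origin host is configured as a remote named "orighost" (the profile name is a placeholder):

root # lxc profile list
root # lxc profile list orighost:
root # lxc profile show orighost:someprofile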

Here are relevant commands:

Origin host

root # lxc stop foo
root # lxc snapshot foo 2022-10-26
root # lxc start foo
root # lxc publish foo/2022-10-26 --compression pigz --alias foo # <-- image will also be named "foo"
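
At this point the image also exists in the origin host's local image store under the alias "foo"; a quick way to confirm (not part of the original walkthrough):

root # lxc image list foo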

On second lxd server "linked" to first
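
"Linked" here means the origin host has been added as a remote on this server. If that is not set up yet, a minimal sketch (the IP address is a placeholder; depending on your LXD version, authentication uses a trust token or a trust password):

On the origin host, expose the LXD API over the network:

root # lxc config set core.https_address "[::]:8443"

Then, on the second server, add the origin host as a remote named "orighost":

root # lxc remote add orighost 192.0.2.10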

root # lxc launch orighost:foo foo -p default -p local_correct_profile1 -p local_correct_profile2

Back on the origin host, before the launch above actually starts the new container (copying the image from the remote can take some time...):

root # lxc stop foo
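
Then, on the second server, check that the new container came up and is healthy (a quick sanity check; the exec command is just an example):

root # lxc list foo
root # lxc exec foo -- ps aux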

If the new container is OK:

root # lxc delete foo

On second lxd server "linked" to first

If the new container is OK:

root # lxc image delete foo
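
Note that "lxc publish" also left the image in the image store of the origin host; once the migration is confirmed, you may want to remove it there as well:

Origin host

root # lxc image delete foo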

Properly remove unnecessary snapshots

When creating a container from an image -- and using LXD with ZFS -- LXD will keep a "deleted snapshot" of the container on ZFS, which will contain the contents of the original image -- even if you delete the image in LXD.

If you just used the image as a backup -- for example, to copy a snapshot from one LXD system to another -- and then deleted the image once you recreated the container from it, these deleted images will still exist and take up unnecessary space, because they store all the original files as of when the image was taken, even if those files have since been deleted or changed. You can view all the deleted images as follows:

root # zfs list | grep deleted
community/lxd-containers/deleted/images/05c5a5ace8dab4dbec44223736ba4a135b965db1aaac9d15630cf603938d876a  3.29G   547G     3.29G  legacy
community/lxd-containers/deleted/images/0e43bebf42885b15b089b7516c6afeeb909f9d512558cbf8e54a4282f3520012  5.56G   547G     5.56G  legacy
community/lxd-containers/deleted/images/11dc0a8cbd9e6b240aee8f166ea10d8d7398c14d0436d8ca04dbaf2d89b44a1f  5.47G   547G     5.47G  legacy
community/lxd-containers/deleted/images/192b1692fb676ef478d01011a0f02052ca4bc9b196ba63772f6fd729234a736f     0B   547G     26.2G  legacy
community/lxd-containers/deleted/images/1ba3653a319f152599be5cb28560e4e0858ae382fb90fc6efcc996a83576d4a5  2.70G   547G     2.70G  legacy
community/lxd-containers/deleted/images/1db4d4faa08233cbea536d09dfe447526aba9ed9937d9a1badc60844ee6e0985  4.94G   547G     4.94G  legacy
community/lxd-containers/deleted/images/312382741a1940ff81e3cdeecb75db62730e8d1448f16631fce647cb62fcb1c1   949M   547G      949M  legacy
community/lxd-containers/deleted/images/351e3c131f09d2976de06948e7d7034943fe8a6819acf249466666deaf812d36  65.0G   547G     65.0G  legacy
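
To see how much space these deleted images consume, you can also list the whole subtree (a sketch assuming the same dataset layout as above; the parent dataset's row shows the combined total):

root # zfs list -r -o name,used community/lxd-containers/deleted/images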

You can also determine if a container is using one of these deleted images using the following technique:

root # zfs list -o name,origin,used community/lxd-containers/containers/foo
NAME                                     ORIGIN                                                                                                             USED
community/lxd-containers/containers/foo  community/lxd-containers/deleted/images/192b1692fb676ef478d01011a0f02052ca4bc9b196ba63772f6fd729234a736f@readonly  58.8M

The "origin" shows that most of the data is being stored in the deleted image, with only 58.8MB of new data actually stored on the rootfs filesystem for the container.

To fix this, we can make the container's rootfs the master, so that it no longer depends on the deleted snapshot. You should only do this if the snapshot is under a "/deleted" path in ZFS, indicating that it no longer exists in LXD, and only if a single container is using the snapshot -- in this case, the container named "foo". To do this, use zfs promote:

root # zfs promote community/lxd-containers/containers/foo

Now, you will see that the rootfs for the container holds all the data from the deleted image and no longer depends on it:

root # zfs list -o name,origin,used community/lxd-containers/containers/foo
NAME                                     ORIGIN  USED
community/lxd-containers/containers/foo  -       26.2G

This ensures that space isn't wasted on your ZFS pool storing stale data that was in the original image but no longer exists in the running container.
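
One caveat worth knowing (standard ZFS behavior, not from the original notes): "zfs promote" swaps the clone relationship rather than deleting anything, so the old image dataset may still appear under "/deleted" -- now as a clone of the container's "readonly" snapshot. If it lingers and nothing else uses it, it can be destroyed:

root # zfs list -o name,origin community/lxd-containers/deleted/images/192b1692fb676ef478d01011a0f02052ca4bc9b196ba63772f6fd729234a736f
root # zfs destroy community/lxd-containers/deleted/images/192b1692fb676ef478d01011a0f02052ca4bc9b196ba63772f6fd729234a736f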