How to convert an LXC container to an LXD container.
Until recently, we were still running a number of legacy LXC containers, which for years performed mission-critical tasks very stably and without significant problems. However, we decided to convert them to LXD containers to create a more homogeneous server environment and to simplify management. Switching to LXD also brings more functionality, including the ability to take snapshots, by upgrading from a directory-based storage pool to a zfs storage pool.
Although we could have used the “lxc-to-lxd” tool, which should make the conversion from LXC containers to LXD containers easy, we ran into quite a few unexplained problems that ultimately caused the process to fail numerous times. Therefore, we started looking for a way to perform the conversion manually, so that we would have full control and could also migrate the source container to a new (remote) LXD host server.
In this example, we are going to convert container “ct1” on an LXC host server and migrate the container to a new LXD host server.
On the LXC host server, log in to the “ct1” container.
# lxc-attach -n ct1
Although it’s not strictly necessary, we want to check the distribution and version of the operating system the container is running. In our case, we know it’s an Ubuntu server, so the “lsb_release -a” command is all we need to get the information.
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.4 LTS
Release:        18.04
Codename:       bionic
As you can see, it is an Ubuntu 18.04 release.
For other distributions that do not provide the “lsb_release” command, we can use another option.
# cat /etc/*-release
As an example, here is the output for an Alpine 3.16.0 container.
3.16.0
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.16.0
PRETTY_NAME="Alpine Linux v3.16"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
Write down the relevant information; we will need it later to set the metadata of the container we create on the new LXD host.
Now stop the container.
# lxc-stop -n ct1
The containers on our old LXC host server use a directory-based storage pool, and as such the “rootfs” directory of each container can be found under “/var/lib/lxc/<container>/”.
Go to the directory of our “ct1” container where the relevant “rootfs” directory is located.
# cd /var/lib/lxc/ct1/
Note that LXD runs unprivileged containers by default, so it is important to determine whether your source container is privileged or unprivileged by checking the ownership of its “rootfs”.
# ls -al
total 16
drwxrwx---  3 root root 4096 Aug  5 19:12 .
drwx--x--x 11 root root 4096 Dec  7  2021 ..
-rw-r--r--  1 root root  899 Nov  7  2018 config
drwxr-xr-x 21 root root 4096 Jul 26 16:31 rootfs
If it is owned by root:root instead of 100000:100000, the source container is privileged and the new LXD container should also be created as privileged by adding “-c security.privileged=true”. Note that our “ct1” is privileged, which is important to remember when we create the new LXD container later.
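If you prefer a direct numeric check, a command like the following (using GNU coreutils “stat”) prints the UID:GID of the “rootfs” directory; “0:0” means the container is privileged, while an unprivileged container typically shows “100000:100000”.
# stat -c '%u:%g' rootfs
0:0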
For a smooth transfer to the new LXD server, the “rootfs” must be compressed and packed into a tar file. To preserve the numeric ownership of the files in the “rootfs” directory, it is necessary to add the “--numeric-owner” option to the tar command.
# tar --numeric-owner -czvf rootfs_ct1.tgz rootfs
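Optionally, you can list (part of) the archive contents to make sure it contains the “rootfs” directory before transferring it.
# tar -tzf rootfs_ct1.tgz | head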
After creating the tar file we can restart the container.
# lxc-start -n ct1 -d
Use scp to copy the tar file to the new LXD host server.
# scp -P 4422 rootfs_ct1.tgz user@xxx.xxx.xxx.xxx:/home/user/
Replace “xxx.xxx.xxx.xxx” with the IP address of your new LXD server and “user” with an existing user on that host. You can omit the “-P” option if your ssh server listens on the default port 22; otherwise, set it to the port the ssh server on the remote host is using.
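If you want to verify that the file arrived intact, you can compute a checksum on both hosts and compare the results; the hashes should match.
# sha256sum rootfs_ct1.tgz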
We don’t need the tar file on the old server anymore so we can delete it.
# rm rootfs_ct1.tgz
Now let’s move over to our new LXD host server.
First we need to create a new “empty” container (in this example “ct1”). Since the source LXC container is privileged, we also need to create the LXD container as privileged by adding “-c security.privileged=true” to the “lxc init” command.
# lxc init ct1 --empty -c security.privileged=true
Verify your new container has been created.
# lxc list
Our LXD host server was set up with the snap package using a zfs storage pool, and as such a new zfs dataset was created for our new LXD container “ct1”.
In order to mount the zfs dataset LXD created for the new container, we need to know its name and where it can be mounted (the zfs mountpoint).
# zfs list | grep ct1
zfs-lxd/containers/ct1 25.5K 12.4G 25.5K legacy
Create a directory to mount the zfs dataset of the “ct1” container.
# mkdir /mnt/zfs
As of version 4.24, LXD started using “legacy” mounts to avoid some zfs issues. If the mountpoint of your container’s dataset is not set to “legacy”, we need to change this first before being able to mount the dataset.
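You can check the current value of the mountpoint property first; if it already reports “legacy”, you can skip the next step.
# zfs get mountpoint zfs-lxd/containers/ct1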
# zfs set mountpoint=legacy zfs-lxd/containers/ct1
We can now mount the dataset.
# mount -t zfs zfs-lxd/containers/ct1 /mnt/zfs/
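To confirm that the dataset is actually mounted, you can check with “findmnt”.
# findmnt /mnt/zfs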
Change directory.
# cd /mnt/zfs
Check the content.
# ls -al /mnt/zfs/
total 7
d--x------ 2 root root    3 Aug  3 14:08 .
drwxr-xr-x 7 root root 4096 Aug  3 14:10 ..
-r-------- 1 root root 1507 Aug  3 14:08 backup.yaml
The directory should be practically empty with only a “backup.yaml” file.
Move the tar file of the source container’s “rootfs”, which we just transferred from the LXC host server, to the mounted zfs dataset.
# mv /home/user/rootfs_ct1.tgz ./
And unpack it.
# tar --numeric-owner -xzvf rootfs_ct1.tgz
When done, delete the tar file.
# rm rootfs_ct1.tgz
Since networking, and in particular assigning a fixed IP address, is done differently in LXD, we want to make sure the container’s network configuration is set to DHCP. On Ubuntu 16.04 and earlier, the network configuration is set in the “interfaces” file.
# vi rootfs/etc/network/interfaces
And change “manual” to “dhcp” if not set already.
# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
Or in case of netplan.
# vi rootfs/etc/netplan/50-cloud-init.yaml
Please note that in our case the file to edit is “50-cloud-init.yaml”, but it could be named differently depending on your system, so check the “rootfs/etc/netplan/” directory to be sure.
And change to “dhcp” if not set already.
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp6: true
Return to the home directory.
# cd
Unmount the dataset.
# umount /mnt/zfs
Start the container.
# lxc start ct1
Grab a shell inside the container and check the status.
# lxc shell ct1
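Inside the container, a few quick checks can confirm that networking and services came up correctly; these assume a systemd-based guest such as Ubuntu 18.04, so adapt them to your distribution and pick any reachable host for the connectivity test.
ct1 # ip addr show eth0
ct1 # systemctl --failed
ct1 # ping -c 3 archive.ubuntu.com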
Exit the container.
ct1 # exit
Stop the container.
# lxc stop ct1
We want to assign a static IP address to our container so we can create iptables rules on the host to route traffic to it (an illustrative rule follows the two commands below).
# lxc config device override ct1 eth0
# lxc config device set ct1 eth0 ipv4.address 10.0.30.10
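As an illustration of the routing mentioned above, a DNAT rule on the host similar to the following would forward incoming TCP traffic on port 8080 to the container; the interface name “eth0” and the port are assumptions for this example, so adapt them to your setup.
# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.0.30.10:8080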
For obvious security reasons, we also want the container to run in unprivileged mode.
# lxc config set ct1 security.privileged false
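You can verify that the setting took effect; the command should now return “false”, and LXD should remap the file ownership of the container to the unprivileged range on the next start.
# lxc config get ct1 security.privileged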
And finally change/update the metadata.
# lxc config set ct1 image.release=bionic
# lxc config set ct1 image.version=18.04
# lxc config set ct1 image.description="Ubuntu 18.04"
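To review the result, you can dump the container’s full configuration and check the “image.” keys we just set.
# lxc config show ct1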
Start the container again.
# lxc start ct1
If all went well, we now have a functioning LXD container and it’s good practice to take a first snapshot.
# lxc snapshot ct1
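To confirm the snapshot was created (by default it will be named something like “snap0”), list the container details; the snapshot appears under the “Snapshots” section.
# lxc info ct1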