Running LXC on OpenWrt Host
In principle, an OpenWrt host can run any compatible guest distro via LXC. In practice, the architecture of the guest OS must match that of the OpenWrt host, so only a few architectures are supported, including:
OpenWrt host arch | LXC arch
---|---
aarch64 | arm64
x86_64 | amd64
tbd | armel
tbd | armhf
tbd | i386
tbd | ppc64el
tbd | s390x
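To see which row applies to your device, print the host's kernel architecture; the output (e.g. aarch64 or x86_64) maps to the left-hand column above:
uname -m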
Example
This example uses OpenWrt ARM64 MVEBU on the ESPRESSOBIN and ESPRESSOBIN ULTRA. It has also been verified on an OpenWrt aarch64 Raspberry Pi 4 B.
Setup on the OpenWrt host
Install some necessary tools and prerequisites:
opkg install xz tar gnupg
Install the needed kernel modules:
opkg install kmod-ikconfig kmod-veth
Install the core lxc packages:
opkg install lxc-start lxc-stop lxc-create lxc-attach lxc-destroy lxc-config lxc-ls getopt
: Note that getopt has to be installed explicitly until #16684 (making it a package dependency) is fixed.
Additional packages exist that can add functionality but that aren't strictly required. Find them with:
opkg list | grep lxc
: LXC containers should now utilize cgroupv2.
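To confirm that the host is actually running with cgroup v2, check for a cgroup2 mount (a quick sanity check; the mount point can vary between builds):
mount | grep cgroup2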
Optionally check the kernel config to see if anything required is missing:
root@ultra:~# opkg install lxc-checkconfig
root@ultra:~# lxc-checkconfig
LXC version 4.0.5
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
--- Control groups ---
Cgroups: enabled
Cgroup v1 mount points:
/sys/fs/cgroup/cpuset
/sys/fs/cgroup/cpu
/sys/fs/cgroup/cpuacct
/sys/fs/cgroup/blkio
/sys/fs/cgroup/memory
/sys/fs/cgroup/pids
/sys/fs/cgroup/rdma
/sys/fs/cgroup/systemd
Cgroup v2 mount points:
Cgroup v1 freezer controller: missing
Cgroup v1 clone_children flag: enabled
Cgroup device: missing
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled, loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, not loaded
Advanced netfilter: enabled, not loaded
CONFIG_NF_NAT_IPV4: missing
CONFIG_NF_NAT_IPV6: missing
CONFIG_IP_NF_TARGET_MASQUERADE: missing
CONFIG_IP6_NF_TARGET_MASQUERADE: missing
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, not loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, loaded
FUSE (for use with lxcfs): enabled, not loaded
--- Checkpoint/Restore ---
checkpoint restore: missing
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: missing
CONFIG_INET_DIAG: missing
CONFIG_PACKET_DIAG: missing
CONFIG_NETLINK_DIAG: enabled
File capabilities:
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
In order to download distro images for the guest, we need to either:
- Use a keyserver on the host, which requires additional setup (see the example after this list), or
- Disable validation (not recommended)
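If you take the keyserver route, the download template can be pointed at a specific keyserver. A minimal sketch, assuming your LXC version's download template supports the --keyserver option (the keyserver URL here is just an example):
lxc-create --name myLMS --template download -- --keyserver hkp://keyserver.ubuntu.com --list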
To verify the signatures of the images, we need to install some additional packages, which can be removed after the guest is set up:
opkg install gnupg2-utils gnupg2-dirmngr
Alternatively, just use the --no-validate switch in the command when setting up the container. This is potentially dangerous and insecure.
Example:
root@ultra:~# lxc-create --name myLMS --template download -- --no-validate
Create an LXC container
There are many different distros available for installation. List the supported ones with the following command and search for your favorite:
lxc-create --name myLMS --template download -- --list --no-validate
This guide uses Debian Buster, selected by pre-specifying the distro, release, and architecture via switches, but you can also omit these three and select them interactively.
root@ultra:~# lxc-create --name myLMS --template download -- --dist debian --release buster --arch arm64
Setting up the GPG keyring
ERROR: Unable to fetch GPG key from keyserver
lxc-create: myLMS: lxccontainer.c: create_run_template: 1616 Failed to create container from template
lxc-create: myLMS: tools/lxc_create.c: main: 319 Failed to create container myLMS
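If the GPG key cannot be fetched, as above, the container is not created. One workaround, assuming your LXC version's download template honors the DOWNLOAD_KEYSERVER environment variable, is to point it at a reachable keyserver (URL is an example); otherwise fall back to --no-validate as shown earlier:
DOWNLOAD_KEYSERVER="hkp://keyserver.ubuntu.com" lxc-create --name myLMS --template download -- --dist debian --release buster --arch arm64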
Container management
To list the installed containers and query their status, use lxc-ls:
root@ultra:~# lxc-ls -f
NAME  STATE   AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
myLMS STOPPED 0         -      -    -    false
Start and stop containers with lxc-start and lxc-stop respectively:
root@ultra:~# lxc-start -n myLMS
root@ultra:~# lxc-ls -f
NAME  STATE   AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
myLMS RUNNING 0         -      -    -    false
root@ultra:~# lxc-stop -n myLMS
root@ultra:~# lxc-ls -f
NAME  STATE   AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
myLMS STOPPED 0         -      -    -    false
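When a container is no longer needed, stop it and remove it with lxc-destroy. Note that this deletes the container's rootfs, so copy out any data you want to keep first:
lxc-stop -n myLMS
lxc-destroy -n myLMS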
Set up networking in the container by editing its config:
root@ultra:~# nano /srv/lxc/myLMS/config
...
# Network configuration
#lxc.net.0.type = empty
lxc.net.0.type = veth
lxc.net.0.link = br-lan
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:FF:DD:BB:CC:01
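Restart the container so the new network configuration takes effect, then check that it obtained an address:
lxc-stop -n myLMS
lxc-start -n myLMS
lxc-ls -f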
Optionally, mount a share from the OpenWrt host inside the guest. Make sure to create the path to the share in the container (see the command after the config line below), then edit the container config, adding the following line:
lxc.mount.entry = /mnt/SHARE /srv/lxc/myLMS/rootfs/mnt/SHARE none bind,create=d
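For example, assuming the paths used above, create the mount point from the host before starting the container:
mkdir -p /srv/lxc/myLMS/rootfs/mnt/SHARE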
Set up the containerized guest distro
Attach to the guest, which will drop you in as the root user. The example below enables SSH and sets up sudo within Debian.
root@ultra:~# lxc-attach -n myLMS
root@myLMS:~# adduser admin
root@myLMS:~# apt install sudo
root@myLMS:~# addgroup admin sudo
root@myLMS:~# apt install ssh -y
root@myLMS:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:ff:dd:bb:cc:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.188/24 brd 192.168.1.255 scope global dynamic eth0
       valid_lft 42908sec preferred_lft 42908sec
    inet6 fdc5:f7f:d0b5:0:2ff:ddff:febb:cc01/64 scope global dynamic mngtmpaddr
       valid_lft forever preferred_lft forever
    inet6 fe80::2ff:ddff:febb:cc01/64 scope link
       valid_lft forever preferred_lft forever
root@myLMS:~# exit
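With SSH enabled, you should now be able to log into the guest over the LAN from another machine, using the address shown by ip a above (the IP will differ on your network):
ssh admin@192.168.1.188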
Auto-start the container on the OpenWrt host
opkg install lxc-auto lxc-autostart
uci show lxc-auto
uci add lxc-auto container
uci set lxc-auto.@container[-1].name=myLMS
uci set lxc-auto.@container[-1].timeout=30
uci show lxc-auto
uci commit lxc-auto
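After committing, /etc/config/lxc-auto should look roughly like this (a sketch of the expected result):
config container
	option name 'myLMS'
	option timeout '30'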