OpenWrt as Docker container host
OpenWrt can act as a Docker container host on x86-64, AArch64, and other supported architectures.
There are two ways to run containers: install Docker Community Edition, or use the native OpenWrt tools that support the OCI container specification.
You will probably need to set up storage first as a place to keep the container images and data.
In most cases you will also run each container as a specific user and give it access to a folder outside the container where it can store its configuration and data. You will therefore probably need to create new users and groups for the applications or system services, create the configuration and data folders, and change the owner of these folders to the user the container will run as.
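As a sketch of the user and folder setup described above (the name "myapp", UID/GID 1001 and the /srv path are placeholders; adjust them to your application and storage mount point):

```shell
# OpenWrt's BusyBox userland has no useradd, so append the account
# entries manually (placeholder name/IDs, adjust as needed):
echo 'myapp:x:1001:1001:myapp:/var:/bin/false' >> /etc/passwd
echo 'myapp:x:1001:' >> /etc/group

# Folders the container will use for its configuration and data:
mkdir -p /srv/myapp/config /srv/myapp/data

# Hand ownership of the folders to the container user:
chown -R myapp:myapp /srv/myapp
```

These folders are then passed to the container as volumes or bind mounts when it is created.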
Install Docker Community Edition
- Install the docker-ce package for the command-line tools
- Install the luci-app-dockerman package to get a control panel for containers in LuCI
The default Docker folder in the dockerman LuCI interface is /opt/docker, so either mount your storage at /opt, or change the folder under Docker > Overview > Docker Root Dir and then restart the dockerd service.
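If you prefer the command line over the LuCI page, the same change can be made through UCI (a sketch; /opt/docker/ here is the dockerman default, adjust it to your storage mount point):

```shell
uci set dockerd.globals.data_root='/opt/docker/'
uci commit dockerd
/etc/init.d/dockerd restart
```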
Add an image
To add an image, search for it on Docker Hub, then copy the image name from the docker pull text box. For example, if the text is docker pull linuxserver/transmission, copy linuxserver/transmission.
In LuCI go to Docker > Images and paste that name in the Pull Image box, then click Pull. The page will show the download progress.
For larger images the LuCI page might time out, so you will need to use the command line instead. For example, unifi-controller images include a Java runtime environment and approach 500 MB, so you could SSH in and run: docker pull linuxserver/unifi-controller.
Then in LuCI go to Docker > Containers > Add. On the new container page, select the image from the Docker Image menu and set the other parameters (the available and useful parameters are usually described on the container's Docker Hub page), then press Submit to create the container.
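The same container can also be created from the command line. A hedged sketch for the linuxserver/transmission image mentioned earlier (the UID/GID, port and volume paths are placeholders; the PUID, PGID, /config and /downloads parameters come from that image's Docker Hub description):

```shell
docker run -d \
  --name=transmission \
  -e PUID=1001 -e PGID=1001 \
  -p 9091:9091 \
  -v /srv/transmission/config:/config \
  -v /srv/transmission/downloads:/downloads \
  --restart unless-stopped \
  linuxserver/transmission
```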
Configure The Docker CE Engine Daemon
Config is located in /etc/config/dockerd.
- data_root: the folder where images and containers are stored (it is also the mount point used by Docker). You may want to point it at a USB disk; its file system cannot be FAT or NTFS. Default /opt/docker/
- log_level: logging verbosity. Default warn
- hosts: an API listener. By default a UNIX socket, /var/run/docker.sock, is used
- iptables: enable iptables rules. Default 1
- bip: network bridge IP. Default 172.18.0.1/24
- fixed_cidr: allocate IPs from a range. Default 172.17.0.0/16
- fixed_cidr_v6: same as fixed_cidr, for IPv6. Default fc00:1::/80
- ipv6: enable IPv6 networking. Default 1
- ip: default binding address. Default ::ffff:0.0.0.0
- dns: DNS servers. Default 172.17.0.1
- registry_mirrors: URL of a registry mirror. Default https://hub.docker.com
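For reference, a minimal /etc/config/dockerd using some of these options might look like the following sketch (the values shown are the defaults listed above):

```
config globals 'globals'
	option data_root '/opt/docker/'
	option log_level 'warn'
	option iptables '1'
	option bip '172.18.0.1/24'
	list hosts 'unix:///var/run/docker.sock'
```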
The following settings require a restart of dockerd to take full effect; a reload will have only partial or no effect:
- bip
- blocked_interfaces
- extra_iptables_args
- device
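On OpenWrt both operations go through the init script, so after changing any of the options above you would run:

```shell
# restart is required for bip, blocked_interfaces, extra_iptables_args and device;
# reload is enough for the other settings
/etc/init.d/dockerd restart
```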
Use native OpenWrt tools
The procd init system now supports the Open Container Initiative (OCI) Runtime Specification, extending its slim container ('ujail') capability.
The uxc command-line tool handles the basic operations on containers as defined by that spec.
This allows it to be used as a drop-in replacement for Docker's 'runc' (or 'crun') on OpenWrt hosts, with a significantly reduced footprint.
Detailed but possibly outdated information is available at https://gitlab.com/prpl-foundation/prplos/prplos/-/wikis/uxc
install packages
For 20.0x install the following:
opkg install kmod-veth uxc ujail-console
For newer snapshots:
opkg install kmod-veth uxc procd-ujail procd-ujail-console
create veth pair for container
uci batch <<EOF
set network.veth0=device
set network.veth0.type='veth'
set network.veth0.name='vhost0'
set network.veth0.peer_name='virt0'
add_list network.lan.ifname='vhost0'
set network.virt0=interface
set network.virt0.ifname='virt0'
# proto='none' assumes a DHCP client inside the container;
# use 'static' otherwise and also set ipaddr, gateway and dns
set network.virt0.proto='none'
set network.virt0.jail='container1'
set network.virt0.jail_ifname='host0'
commit network
EOF
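After committing, the batch above should leave entries like these in /etc/config/network (a sketch, useful for verifying the result; the lan interface additionally gets 'vhost0' appended to its ifname list):

```
config device 'veth0'
	option type 'veth'
	option name 'vhost0'
	option peer_name 'virt0'

config interface 'virt0'
	option ifname 'virt0'
	option proto 'none'
	option jail 'container1'
	option jail_ifname 'host0'
```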
creating an OCI run-time bundle
To create an OCI run-time bundle, which is needed for uxc, follow these steps.
First build a container image.
docker build -t container1 .
Note the image ID that is printed at the end, and use it after the @ in the next command.
skopeo copy containers-storage:[overlay@$HOME/.local/share/containers/storage+/run/user/1000/containers]@b0897a4ee285938413663f4c7b2b06d21e45c4358cebb04093ac9de9de118bf2 oci:container1:latest
sudo umoci unpack --image container1 container1-bundle
sudo rsync -aH container1-bundle root@192.168.0.1:/mnt/sda3/debian
This is quite cumbersome. If someone knows a better way, please do update this page.
import an OCI run-time container
(assuming OCI run-time bundle with config.json in /mnt/sda3/debian)
uxc create container1 /mnt/sda3/debian true
uxc start container1
uxc list
uxc state container1
If the container uses a stdio console, you can attach it using
ujail-console -c container1
(There is no scrollback buffer, so if you want to see the complete boot log of a container, attach the console after the 'create' call but before starting it.)
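A typical sequence for capturing a container's full boot log therefore looks like this (container name and bundle path as in the example above):

```shell
uxc create container1 /mnt/sda3/debian true   # create, but do not start yet
ujail-console -c container1 &                 # attach the console first
uxc start container1                          # boot messages now reach the attached console
```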