OpenWrt as QEMU/KVM host server
Introduction
It's possible to use OpenWrt as a QEMU host and run guests on it. If you want to run OpenWrt as a QEMU guest itself, see OpenWrt in QEMU.
OpenWrt provides QEMU packages for the ARM and x86 platforms. This article focuses on the x86 target; networking is done via qemu-bridge-helper.
Installing QEMU
You need the following packages on your device: kmod-tun, qemu-bridge-helper. Depending on the guest architecture, install qemu-x86_64-softmmu or qemu-arm-softmmu. If your hardware supports it, also install kmod-kvm-amd or kmod-kvm-intel for better performance.
Example for an Intel system and an x86_64 guest:
opkg install kmod-tun qemu-bridge-helper qemu-x86_64-softmmu kmod-kvm-intel
After the first installation, reboot your device.
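Before installing the KVM modules, it is worth checking that the CPU actually supports hardware virtualization. A small check, runnable on any Linux shell:

```shell
# Count CPU virtualization flags: vmx = Intel VT-x, svm = AMD-V.
# A count of 0 means no hardware acceleration; QEMU then falls back to
# much slower software emulation (TCG).
grep -c -E 'vmx|svm' /proc/cpuinfo || true

# After installing kmod-kvm-intel/kmod-kvm-amd and rebooting,
# the KVM device node should exist:
[ -c /dev/kvm ] && echo "KVM available" || echo "KVM not available"
```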
Running a guest
For the guest OS, use a distribution that comes with virtio drivers by default (Debian or Fedora for example).
Installing a guest OS
If you don't have a prepared disk image, you can install a guest OS directly on your OpenWrt device. There are several guides on how to install a Linux distribution into a QEMU disk image (Debian, for example). A quick way is to install Debian using the kernel and initrd of the Debian netboot installer.
Create a disk image
qemu-img create -f qcow2 debian.img 4G
More details on disk images: https://en.wikibooks.org/wiki/QEMU/Images
Download installer files
wget https://ftp.debian.org/debian/dists/stable/main/installer-amd64/current/images/netboot/debian-installer/amd64/linux
wget https://ftp.debian.org/debian/dists/stable/main/installer-amd64/current/images/netboot/debian-installer/amd64/initrd.gz
Both files can be safely removed after finishing the installation.
Run the installer
Edit the init script and add the display, vnc and cdrom options to the qemu command, as mentioned in the comments in the script (if you are installing from the netboot kernel and initrd instead of an ISO, use -kernel and -initrd in place of -cdrom). Start the service as normal (/etc/init.d/kvm-pihole start). Connect over VNC to see the console and proceed with the installation. When finished, if the VM does not shut itself down, stop it with /etc/init.d/kvm-pihole stop.
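Alternatively, the installer can be booted in a one-off QEMU run outside the init script, using the netboot kernel and initrd downloaded earlier. A minimal sketch; the memory size, file paths and VNC display are assumptions, adjust them to your setup:

```shell
# Write a small helper script for a one-off installer boot.
# Memory size, file paths and the VNC display are assumptions; adjust them.
cat > /tmp/run-installer.sh <<'EOF'
#!/bin/sh
exec qemu-system-x86_64 \
    -enable-kvm \
    -m 512M \
    -kernel linux \
    -initrd initrd.gz \
    -drive file=debian.img,if=virtio,format=qcow2 \
    -vnc :0
EOF
chmod +x /tmp/run-installer.sh
# Run it with: /tmp/run-installer.sh
# then connect a VNC client to port 5900 and proceed with the installation
```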
Follow the installation instructions and install GRUB on /dev/vda. It's also useful to install sshd (enabled by default).
Run the new guest
After finishing the installation, you can remove those special options (display, vnc and cdrom). You can also delete the ISO if you don't need it anymore.
Start the VM again.
You should now be able to reach the VM via SSH from within br-lan.
If you want to control the VM from the command line, you have to enable a serial console. To do this, edit the GRUB entry during boot and add console=ttyS0 to the kernel command line. After the VM has finished booting, edit /etc/default/grub and add console=ttyS0 to GRUB_CMDLINE_LINUX_DEFAULT as well, then run update-grub. To connect to the console, you can use:
socat STDIO,cfmakeraw,escape=0x1d UNIX:<serial-socket-file>
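For reference, the edited line in /etc/default/grub could look like the excerpt below; "quiet" stands in for whatever options your installation already had, and update-grub must be run afterwards:

```shell
# /etc/default/grub (excerpt): console=ttyS0 appended to the existing defaults
GRUB_CMDLINE_LINUX_DEFAULT="quiet console=ttyS0"
```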
Init script
Here is an example init script you can use. This one is for Pi-hole; rename the script and the UNITNAME variable as needed. It connects to QMP to cleanly shut down the VM when you stop the service.
Be careful with whitespace when copying the script: the code block is left unindented on purpose, because tabs are printed as spaces in dokuwiki code blocks and broken indentation is an easy way to corrupt a here document. If you indent the lines with tabs yourself, any here document must use <<-QMP instead of <<QMP. You can read more about here documents in the Advanced Bash-Scripting Guide.
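To illustrate the here-document caveat: instead of sexpect, the QMP shutdown commands could also be assembled with a heredoc and piped into the socket with socat. A sketch; the socket path is an assumption taken from the script, and the heredoc body must stay unindented:

```shell
# Build the QMP command sequence with a here document.
# The body must stay unindented (or be tab-indented together with <<-QMP).
payload=$(cat <<QMP
{ "execute": "qmp_capabilities" }
{ "execute": "system_powerdown" }
QMP
)
printf '%s\n' "$payload"
# On a live system, pipe it into the QMP socket (path is an assumption):
# printf '%s\n' "$payload" | socat - UNIX-CONNECT:/var/run/kvm-pihole-qmp.sock
```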
- /etc/init.d/kvm-pihole
#!/bin/sh /etc/rc.common

USE_PROCD=1
START=99
STOP=1

UNITNAME="kvm-pihole"
PIDFILE="/var/run/$UNITNAME.pid"
CPUS="1"
MEM="500M"
DISKIMAGE="/storage/vms/$UNITNAME.img"
DISKFORMAT="qcow2"
NETBRIDGE="br-lan"
# generate a random locally administered mac address, x2:xx:xx:xx:xx:xx, x6:xx:xx:xx:xx:xx, xA:xx:xx:xx:xx:xx, xE:xx:xx:xx:xx:xx
NETMAC="xx:xx:xx:xx:xx:xx"
QMPSOCKET="/var/run/$UNITNAME-qmp.sock"
SEXPECTSOCKET="/var/run/$UNITNAME-sexpect.sock"
SERIALSOCKET="/var/run/$UNITNAME-serial.sock"
# Note: access vm serial with 'socat STDIO,cfmakeraw,escape=0x1d UNIX:<serial-socket-file>'

# to install the first time, add these options:
#-vnc :0 \
#-cdrom /storage/vms/ubuntu-24.04.1-live-server-amd64.iso \

start_service() {
procd_open_instance
procd_set_param command qemu-system-x86_64 \
-enable-kvm \
-display none \
-cpu host \
-machine type=q35,accel=kvm \
-smp "$CPUS" \
-m "$MEM" \
-boot c \
-drive file="$DISKIMAGE",cache=none,if=virtio,format="$DISKFORMAT" \
-netdev bridge,br="$NETBRIDGE",id=lan \
-device virtio-net-pci,mac="$NETMAC",netdev=lan \
-object rng-random,id=rng0,filename=/dev/urandom \
-device virtio-rng-pci,rng=rng0 \
-qmp unix:$QMPSOCKET,server,nowait \
-serial unix:$SERIALSOCKET,server,nowait
#procd_set_param respawn ${respawn_threshold:-3600} ${respawn_timeout:-5} ${respawn_retry:-5}
#procd_set_param netdev dev
procd_set_param stdout 0
procd_set_param stderr 0
procd_set_param user root
procd_set_param pidfile $PIDFILE
procd_set_param term_timeout 60
procd_close_instance
}

stop_service() {
# try to gracefully shut down VM before procd kills qemu
cmd="sexpect -sock $SEXPECTSOCKET"
$cmd spawn -timeout 5 -autowait socat - unix-connect:$QMPSOCKET
$cmd expect 'QMP'
$cmd send -enter '{ "execute": "qmp_capabilities" }'
$cmd expect 'return'
$cmd send -enter '{ "execute": "system_powerdown" }'
$cmd expect 'SHUTDOWN'
$cmd wait
}
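The NETMAC placeholder must be replaced with a real address. A locally administered unicast MAC (second hex digit 2, 6, A or E, as the comment in the script notes) can be generated with standard tools; a sketch:

```shell
# Take 6 random bytes, then force the locally-administered bit on and the
# multicast bit off in the first octet: (byte & 0xfc) | 0x02
mac=$(od -An -N6 -tx1 /dev/urandom | tr -s ' ' '\n' | grep -v '^$' | paste -s -d: -)
first=$(printf '%02x' $(( 0x${mac%%:*} & 0xfc | 0x02 )))
NETMAC="$first:${mac#*:}"
echo "$NETMAC"
```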
Test the script by running /etc/init.d/kvm-pihole start and look for errors in /var/log/qemu.log.
If the script works as desired, enable it for every boot: /etc/init.d/kvm-pihole enable