QEMU is an open source processor emulator (and virtualizer). This document describes how to run OpenWrt in QEMU.
If you are looking to use OpenWrt as a QEMU host, see Running QEMU guests on OpenWrt.
The instructions mix Windows and Linux examples, so please read through all of them before starting.
Choosing different emulation settings can affect performance greatly.
Example: 30 s of iperf -s on OpenWrt (QEMU running on host) to the host:
ne2k_pci:       0.0-31.3 sec  14.6 MBytes   3.92 Mbits/sec
pcnet:          0.0-30.0 sec  2.38 GBytes    682 Mbits/sec
e1000:          0.0-30.0 sec  6.23 GBytes   1.79 Gbits/sec
vmxnet3:        0.0-30.0 sec  8.67 GBytes   2.48 Gbits/sec
virtio-net-pci: 0.0-30.0 sec  44.6 GBytes   12.8 Gbits/sec
Qemu runs on many different systems.
Many Linux Distributions like Debian, Ubuntu, Suse, Fedora provide a qemu package in their package repositories.
Example for Debian 7 (Wheezy):
sudo apt-get install qemu
Qemu is developing rapidly, so features and syntax might change between versions.
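Because flags differ between releases, it can save time to check which version is installed before copying commands from this page. A minimal check (guarded so it is a no-op on systems without QEMU):

```shell
# Print the installed QEMU version, if any, so command syntax can be
# matched against the documentation for that release.
if command -v qemu-system-arm >/dev/null 2>&1; then
    qemu-system-arm --version
else
    echo "qemu-system-arm is not installed"
fi
```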
The QEMU Wiki Links page provides several unofficial download links for Windows builds.
The current platform meant for use with QEMU for emulating an ARM system is armvirt. The platform is available in the downloads.
This is the simplest method that can be used to test an image. However, it runs entirely in RAM: any modification made is lost upon reboot.
To use this boot method, here with 64 MB of RAM, run:
qemu-system-arm -nographic -M virt -m 64 -kernel openwrt-armvirt-zImage-initramfs
Use QEMU >= 2.2 (earlier versions can have bugs with MIPS16, see ticket 16881). Ubuntu 14.04.x LTS ships qemu 2.0, which has this bug.
The “malta” platform is meant for use with QEMU for emulating a MIPS system.
The malta target supports both big- and little-endian variants; pick the matching files and QEMU binary (qemu-system-mips for big-endian, qemu-system-mipsel for little-endian).
qemu-system-mipsel -kernel openwrt-malta-le-vmlinux-initramfs.elf -nographic -m 256
Recent enough versions can build an ext4 root filesystem image, and since r46269 (trunk only; it's not part of the 15.05 CC release) it's possible to boot straight from that image (without an initramfs):
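The ext4 root image is selected in the OpenWrt build configuration; a minimal fragment (a sketch, assuming a trunk buildroot checkout — the symbol corresponds to "Target Images → ext4" in make menuconfig) looks like:

```
# .config fragment: build an ext4 root filesystem image
CONFIG_TARGET_ROOTFS_EXT4FS=y
```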
qemu-system-mipsel -M malta \
    -hda openwrt-malta-le-root.ext4 \
    -kernel openwrt-malta-le-vmlinux.elf \
    -nographic -append "root=/dev/sda console=ttyS0"
Use the build documentation found on the HiFive Unleashed page. The process described there will generate the bbl.qemu (BBL+vmlinux) image required to boot with QEMU. For reference, use https://git.openwrt.org/?p=openwrt/staging/wigyori.git;a=shortlog;h=refs/heads/riscv-201810
Until 4.19 support is merged into openwrt/trunk, the port itself cannot be merged, and manual builds are required.
RISC-V support is in mainline qemu, refer to https://wiki.qemu.org/Documentation/Platforms/RISCV
The suggested QEMU startup is:
$ qemu-system-riscv64 -nographic -machine virt \
    -kernel bbl.qemu -append "root=/dev/vda2 ro console=ttyS0" \
    -drive file=sdcard.img,format=raw,id=hd0 \
    -device virtio-blk-device,drive=hd0 \
    -device virtio-net-device,netdev=net0 -netdev user,id=net0 \
    -smp 2
The x86-64 target has support for ESXi images by default. Booting the VMDK/VDI images might not work with newer qemu versions:
IMG/VDI/VMDK images attached with the -hda switch do not work with qemu 2.x.
pc-q35-2.0 / q35 emulates a different machine; with the new syntax (no -hda, no -net) the IMG/VDI/VMDK images work here.
Some emulated network cards might have performance issues.
qemu-system-x86_64 \
    -enable-kvm \
    -M pc-q35-2.0 \
    -drive file=openwrt-x86_64-combined-ext4.vdi,id=d0,if=none \
    -device ide-hd,drive=d0,bus=ide.0 \
    -drive file=data.qcow2,id=d1,if=none \
    -device ide-hd,drive=d1,bus=ide.1 \
    -soundhw ac97 \
    -netdev bridge,br=virbr0,id=hn0 \
    -device e1000,netdev=hn0,id=nic1 \
    -netdev user,id=hn1 \
    -device e1000,netdev=hn1,id=nic2

qemu-system-x86_64 -M q35 \
    -drive file=openwrt-x86_64-combined-ext4.img,id=d0,if=none,bus=0,unit=0 \
    -device ide-hd,drive=d0,bus=ide.0
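When an older VMDK/VDI image refuses to boot, one workaround is to convert it to a raw image with qemu-img and attach that instead. A sketch (filenames are hypothetical; the command is guarded so it is a no-op when the tool or source file is absent):

```shell
SRC=openwrt-x86_64-combined-ext4.vmdk   # hypothetical input file
DST=openwrt-x86_64-combined-ext4.img

# Convert VMDK to raw only when qemu-img and the source image exist.
if command -v qemu-img >/dev/null 2>&1 && [ -f "$SRC" ]; then
    qemu-img convert -f vmdk -O raw "$SRC" "$DST"
fi
```

For VDI sources, change `-f vmdk` to `-f vdi`.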
qemu has several options to provide network connectivity to emulated images; see the -net options in qemu(1).
The default networking mode for qemu is “user mode network stack”.
In this mode,
qemu acts as a proxy for outbound TCP/UDP connections. It also provides DHCP and DNS service to the emulated system.
To provide Internet access to the emulated OpenWrt system, use (the example uses an armvirt system, adjust for your setup):
qemu-system-arm -net nic,vlan=0 -net nic,vlan=1 -net user,vlan=1 \
    -nographic -M virt -m 64 -kernel lede-17.01.0-r3205-59508e3-armvirt-zImage-initramfs
Here, we set up two network cards inside the emulated OpenWrt system:
eth0, used as LAN in OpenWrt (not connected to anything here)
eth1, used as WAN in OpenWrt, and connected to qemu that will proxy all TCP/UDP connections towards the Internet
The OpenWrt system should get both an IPv4 and an IPv6 on
eth1 (via DHCP/DHCPv6). The ranges will be 10.0.2.0/24 and fec0::/64 (qemu defaults, see qemu(1) to configure other ranges).
LuCI is the web UI used by OpenWrt. If you want to check how LuCI works or poke around with LuCI apps, this setup is for you. (The example uses an armvirt system; adjust for your setup.)
Note: This setup requires some privileges (CAP_MKNOD under Linux), so it's easiest to run it as root.
Save the script, edit the IMAGE variable to reflect your OpenWrt version, then run it as root:
#!/bin/sh
IMAGE=lede-17.01.0-r3205-59508e3-armvirt-zImage-initramfs
LAN=ledetap0

# create tap interface which will be connected to OpenWrt LAN NIC
ip tuntap add mode tap $LAN
ip link set dev $LAN up

# configure interface with static ip to avoid overlapping routes
ip addr add 192.168.1.101/24 dev $LAN

qemu-system-arm \
    -device virtio-net-pci,netdev=lan \
    -netdev tap,id=lan,ifname=$LAN,script=no,downscript=no \
    -device virtio-net-pci,netdev=wan \
    -netdev user,id=wan \
    -M virt -nographic -m 64 -kernel $IMAGE

# cleanup: delete tap interface created earlier
ip addr flush dev $LAN
ip link set dev $LAN down
ip tuntap del mode tap dev $LAN
How networking works:
eth0, used as LAN in OpenWrt, and connected to ledetap0 in the host system (static address 192.168.1.101/24), providing access to LuCI at http://192.168.1.1/
eth1, used as WAN in OpenWrt, and connected to qemu that will proxy all TCP/UDP connections towards the Internet
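Once the guest has booted, you can check from the host that LuCI answers. This sketch assumes the OpenWrt default LAN address 192.168.1.1 and the ledetap0 tap interface created by the script above, and is guarded so it is a no-op elsewhere:

```shell
# Probe LuCI through the tap interface; prints the HTTP status code.
if command -v ip >/dev/null 2>&1 && ip link show ledetap0 >/dev/null 2>&1; then
    curl -s -o /dev/null --max-time 5 -w "%{http_code}\n" http://192.168.1.1/
else
    echo "ledetap0 not present; start the guest script first"
fi
```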
(taken from mailing list post by Philip Prindeville)
On my CentOS 7.4 KVM host, I did:
To provision 10 VFs per NIC:
cat <<__EOF__ > /etc/modprobe.d/sr-iov.conf
# for SR-IOV support
options igb max_vfs=10
__EOF__

This will take effect after the next reboot, or after unloading and reloading the igb module.
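After reloading the module, it may help to confirm the VFs actually appeared. A quick check (assuming lspci from pciutils is available; guarded otherwise):

```shell
# Each physical igb port should now expose 10 "Virtual Function" entries.
if command -v lspci >/dev/null 2>&1; then
    lspci -nn | grep -i "Virtual Function" || echo "no VFs visible yet"
fi
```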
Create XML files for each NIC you want to support virtualization on:
# cat <<__EOF__ > /tmp/hostdev-net0.xml
<network>
  <name>hostdev-net0</name>
  <uuid>$(uuidgen)</uuid>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eno1'/>
  </forward>
</network>
__EOF__
#
# cat <<__EOF__ > /tmp/hostdev-net1.xml
<network>
  <name>hostdev-net1</name>
  <uuid>$(uuidgen)</uuid>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eno2'/>
  </forward>
</network>
__EOF__
# ...
#
Make Qemu aware of them:
# virsh net-destroy default
# virsh net-define /tmp/hostdev-net0.xml
# virsh net-autostart hostdev-net0
# virsh net-define /tmp/hostdev-net1.xml
# virsh net-autostart hostdev-net1
# …
to create the pool of VF interfaces.
Then to add interfaces to VM’s, I did:
# cat <<__EOF__ > /tmp/new-interface-0.1.xml
<interface type='network'>
  <mac address='52:54:00:0d:84:f4'/>
  <source network='hostdev-net0'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x10' function='0x0'/>
</interface>
__EOF__
#
where the ‘0d:84:f4’ is 3 unique bytes… I got them from:
dd status=none bs=1 count=3 if=/dev/urandom | hexdump -e '/1 "%x\n"'
Then to bind an interface to a VM, I did:
# virsh attach-device my-machine-1 /tmp/new-interface-0.1.xml
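To confirm the attach took effect, virsh can list the domain's interfaces (domain name taken from the example above; guarded so the snippet is a no-op without libvirt):

```shell
# Show interfaces currently attached to the domain.
if command -v virsh >/dev/null 2>&1; then
    virsh domiflist my-machine-1
fi
```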
This will be much faster, but will only work if the architecture of your CPU is the same as the target image (here, ARM cortex-a15).
qemu-system-arm -nographic -M virt,accel=kvm -cpu host -m 64 -kernel openwrt-armvirt-zImage-initramfs
qemu-system-arm -nographic -M virt -m 64 -kernel openwrt-armvirt-zImage \
    -drive file=openwrt-armvirt-root.ext4,format=raw,if=virtio \
    -append 'root=/dev/vda rootwait'

qemu-system-arm -nographic -M virt -m 64 -kernel openwrt-armvirt-zImage \
    -fsdev local,id=rootdev,path=root-armvirt/,security_model=none \
    -device virtio-9p-pci,fsdev=rootdev,mount_tag=/dev/root \
    -append 'rootflags=trans=virtio,version=9p2000.L,cache=loose rootfstype=9p'
# start a named machine
lkvm run -k openwrt-armvirt-zImage -i openwrt-armvirt-rootfs.cpio --name armvirt0

# start with virtio-9p rootfs
lkvm run -k openwrt-armvirt-zImage -d root-armvirt/

# stop "armvirt0"
lkvm stop --name armvirt0

# stop all
lkvm stop --all
This example runs OpenWrt virtualized under Debian with qemu/kvm on a Lex Twitter system with an Intel Atom D525 and an ICH8M chipset.
Normally OpenWrt works on most of the hardware listed in the Table of Hardware (search this wiki), and also on most hardware that supports the Intel x86 ISA (search this wiki for x86).
However, some embedded x86 boards have particular hardware that is not always well supported by the OpenWrt platform, even if all the kmod packages are included in the basic image.
One such x86-compatible hardware family is systems based on the Intel Atom and the ICH8M chipset (maybe also others), like the Lex Twitter system 3I525U.
On that system OpenWrt runs, but, for example, it does not handle two WAN connections with different metrics well: requests are always routed to the interface with the higher metric, even when using ping -I <wan2_interface> 126.96.36.199. Moreover, Nmap will fail to bind to certain interfaces.
Someone with more knowledge could explain why this happens, but as a workaround one can use a more complete Linux system (for example Debian) as a base and then virtualize OpenWrt (search this wiki for virtualization, qemu, kvm or hypervisor). OpenWrt requires very few resources most of the time, or one can assign it plenty of resources, because the base system is quite powerful.
Debian was installed on a 2 GB CF card through a USB stick and the netinstaller, with only the basic system utilities and SSH utilities. 1.1 GB of space was used, 600 MB left free, and the rest was swap.
Install the following packages:
apt-get install qemu-kvm bridge-utils libvirt-bin virtinst
Then, if you don't want to create a separate user but just work as root (the objective is to run OpenWrt on the Twitter system, not to have a well set up Debian system):
Then we have to prepare the network. Modify /etc/network/interfaces as follows (adapt according to your needs):
auto br0 br1 br2 br3

iface br0 inet dhcp
    bridge_ports eth0

iface br1 inet dhcp
    bridge_ports eth1

iface br2 inet dhcp
    bridge_ports eth2

iface br3 inet dhcp
    bridge_ports eth3
The bridges ( https://wiki.debian.org/BridgeNetworkConnections ) are helpful because they allow different network adapters, real or virtual, to exchange data (as the word 'bridge' suggests). And not only that: the bridge has a certain MAC address, but the virtual interfaces attached to it can have different MAC addresses. Here the marvels of the Linux networking system have to be explained by someone with more knowledge.
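After bringing the interfaces up, the bridge membership can be inspected. brctl comes from the bridge-utils package installed above; the snippet falls back to iproute2 when brctl is absent:

```shell
# List bridges and the ports enslaved to them.
if command -v brctl >/dev/null 2>&1; then
    brctl show
elif command -v ip >/dev/null 2>&1; then
    ip -o link show type bridge
fi
```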
Then we need to create our virtual machine. The additional packages, apart from qemu, will help here. We can issue the following command, using the x86 generic image placed in /root/openwrt_kvm:
virt-install --name=openwrt --ram=256 --vcpus=1 --os-type=linux \
    --disk path=/root/openwrt_kvm/openwrt-x86-generic-combined-ext4.img,bus=ide \
    --network bridge=br0,model=e1000 --import
# be careful with the model: e1000 lets openwrt recognize the interface.
# http://manpages.ubuntu.com/manpages/lucid/man1/virt-install.1.html
If you want to interact with the system from the command line, use virsh.
For example, to force the shutdown of a virtual machine:
virsh destroy openwrt
or to delete the virtual machine (but not the disk file)
virsh undefine openwrt.
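Note that virsh destroy is a hard power-off; a gentler alternative is virsh shutdown, which sends an ACPI shutdown request to the guest. A sketch (guarded so it is a no-op without libvirt; domain name from the examples above):

```shell
# Request a graceful ACPI shutdown of the "openwrt" domain,
# as opposed to the hard power-off of "virsh destroy".
if command -v virsh >/dev/null 2>&1; then
    virsh shutdown openwrt
fi
```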
To give the machine multiple interfaces:
virt-install --name=openwrt --ram=256 --vcpus=1 --os-type=linux \
    --disk path=/root/openwrt_kvm/openwrt-x86-generic-combined-ext4.img,bus=ide \
    --network bridge=br0,model=e1000 --network bridge=br3,model=e1000 --import
Remember that the console requires Ctrl+5 to exit (German keyboard layout).
To mark a virtual machine for the autostart, type:
virsh autostart openwrt.