Build system essentials

Introduction

The build system is used to build OpenWrt from source code and requires significant hardware resources, time and knowledge. It lets you apply custom patches and build individual packages or complete OpenWrt images with specific compilation flags and options. As an alternative, you can use the Image Builder to assemble OpenWrt images much faster and with far less effort, at the cost of limited customization.
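
For illustration, a typical Image Builder session might look like the sketch below; the profile and package names are placeholders, and the Image Builder archive for your target has to be downloaded and unpacked first.

  # inside the unpacked Image Builder directory for your target
  make info                                # list the device profiles available
  make image PROFILE="<device-profile>" \
             PACKAGES="luci -ppp"          # add LuCI, drop the default PPP packages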

Prerequisites

The build system is based on Buildroot and requires a GNU/Linux environment with a case-sensitive file system. This can be a native installation or a Linux distribution virtualized with VirtualBox, VMware, QEMU, etc. Some users have also had success with WSL and macOS, but these environments are not officially supported.
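
On a Debian- or Ubuntu-based system, the usual build prerequisites can be installed roughly as follows; the exact package list varies between distributions and OpenWrt releases, so treat this as a sketch rather than a definitive list.

  sudo apt update
  sudo apt install build-essential clang flex bison g++ gawk gcc-multilib \
       gettext git libncurses-dev libssl-dev python3-setuptools \
       rsync unzip zlib1g-dev file wget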

To generate a flashable firmware image file with default packages, you should have at least 10-15 GB of free disk space (more is better) and at least 2 GB of RAM for the compilation stage; compiling x86 images requires 4 GB of RAM. Additional optimizations such as enabling the LTO compile flag further increase RAM consumption during the build.

The more packages you add to the image, the more space is required, but the requirements grow slowly: most of the storage is consumed by the build infrastructure and the core components of the firmware image.

Do note that these numbers are rough estimates only; you may well be able to manage with less on some devices, but this is not guaranteed.

Description

The build system is a set of Makefiles and patches that automates the process of building a cross-compilation toolchain and then using it to build the Linux kernel, the root filesystem and possibly other pieces of software (such as U-Boot) required to run OpenWrt on a specific device. A typical toolchain consists of binutils (assembler, linker and related tools), a compiler (gcc/g++) and a C standard library (musl by default in OpenWrt).

Usually a toolchain generates code for the same instruction set architecture (ISA) that it runs on (x86_64 in the case of most PCs and servers). With OpenWrt this is not the case: most routers have processors that use a different architecture than the machine running the build system. If we used the build machine's native toolchain to build OpenWrt for such a router, it would generate code that cannot run on the router. Nothing from the host system can be reused: everything, including the C standard library, the Linux kernel and all userspace programs, must be compiled with the cross-compilation toolchain.

Let's look at an example. We are building OpenWrt on an x86_64 system for a router that uses a MIPS32 processor, so we cannot use the toolchain that produces the programs we run on our x86_64 system. We first need to build a toolchain that targets MIPS32, and then use that toolchain to build everything the router needs to run OpenWrt.
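
Once a build has completed, the resulting cross toolchain can be used directly to see the difference; the directory and tool names below are only an example, as they depend on the selected target and on the gcc/libc versions.

  # compile a test program with the cross compiler built for a MIPS target
  export STAGING_DIR="$(pwd)/staging_dir"
  ./staging_dir/toolchain-mipsel_24kc_gcc-*/bin/mipsel-openwrt-linux-musl-gcc -o hello hello.c
  file hello    # reports a MIPS ELF binary: it runs on the router, not on the build host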

The process of creating a cross compiler can be tricky. It's not something that's regularly attempted, so there's a certain amount of mystery and black magic associated with it. When you're dealing with embedded devices you'll often be provided with a binary copy of a compiler and basic libraries rather than instructions for creating your own - this saves time, but it often means you'll be working with a rather dated set of tools. It's also common to be given a patched copy of the Linux kernel by the board or chip vendor, but this too tends to be dated, and it can be difficult to spot exactly what has been changed to make the kernel run on the embedded platform.

While it is possible to create your own toolchain manually and then build OpenWrt with it, this is difficult and error-prone. The OpenWrt build system takes a different approach to building a firmware: it downloads, patches and compiles everything from scratch, including the cross compiler. To put it in simpler terms, OpenWrt's build system doesn't ship any executables or even sources; it is an automated system for downloading the sources, patching them to work with the given platform and compiling them correctly for that platform. This means that just by changing the build recipes (the Makefiles and patches), you can change any step in the process. The side benefit is that builds are automated, which saves time and guarantees the same result every time.

For example, when a new kernel is released, a simple change to one of the Makefiles is enough to download the latest kernel, patch it to run on the requested platform and produce a new firmware image. There's no work to be done tracking down an unmodified copy of the existing kernel to see what changes have been made - the patches are already provided, so the process is almost completely transparent. This doesn't just apply to the kernel but to anything included with OpenWrt - it's this strategy that allows OpenWrt to stay on the bleeding edge with the latest compilers, kernels and applications.
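
As a concrete (and purely hypothetical) illustration, bumping a package to a new upstream release usually amounts to editing two variables in its Makefile; the build system then fetches, patches and rebuilds the new version on the next run.

  # package/<somepkg>/Makefile (illustrative excerpt, names are placeholders)
  PKG_NAME:=somepkg
  # bumping the version makes the build system download and build the new release
  PKG_VERSION:=1.2.4
  # the checksum must be updated to match the new source tarball
  PKG_HASH:=<sha256 of the new tarball>

Rebuilding just that one package afterwards, e.g. with make package/somepkg/compile, is enough to verify the change.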

Directory structure

There are four key directories in the build system:

  * tools – host utilities needed to drive the build (automake, cmake, ...)
  * toolchain – the cross-compilation toolchain (binutils, gcc, the C library)
  * package – the userspace software packages that make up the root filesystem
  * target – target-specific parts: the kernel and firmware image generation

Both the target and package steps use the directory build_dir/target-<arch>… as a temporary directory for compiling.

Difference between build_dir and staging_dir

The directory build_dir is used to unpack all the source archives and to compile them in.

The directory staging_dir is used to “install” all the compiled programs into, ready either for use in building further packages, or for preparing the firmware image.

There are three areas under build_dir:

  * build_dir/host – tools that run on the build machine itself
  * build_dir/toolchain-<arch>… – the cross-compilation toolchain for the target
  * build_dir/target-<arch>… – the packages and the Linux kernel built for the target

Under staging_dir, there are also three areas:

  * staging_dir/host – a miniature root with its own bin/ and lib/ where the host tools are installed
  * staging_dir/toolchain-<arch>… – the installed cross-compilation toolchain
  * staging_dir/target-<arch>… – the cross-compiled packages, libraries and headers, used both to satisfy build dependencies of other packages and to assemble the root filesystem for the firmware image

Features

Make targets
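
A few frequently used invocations, as a quick sketch (all run from the top of the build tree; add V=s for verbose output):

  make menuconfig                   # select target, packages and build options
  make download                     # pre-fetch all source archives
  make -j$(nproc)                   # build toolchain, kernel, packages and images
  make package/<name>/compile V=s   # (re)build a single package
  make clean                        # remove build output, keep toolchain and tools
  make dirclean                     # additionally remove the toolchain and host tools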

Build sequence

  1. tools – automake, autoconf, sed, cmake
  2. toolchain/binutils – as, ld, ...
  3. toolchain/gcc – gcc, g++, cpp, ...
  4. target/linux – kernel modules
  5. package – core and feed packages
  6. target/linux – kernel image
  7. target/linux/image – firmware image file generation
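
These steps are normally driven by a single top-level make, but each stage can also be invoked on its own, which helps when debugging a broken build; the target names below are the usual ones, though they may differ slightly between releases.

  make tools/install           # step 1: host build tools
  make toolchain/install       # steps 2-3: binutils and gcc
  make target/linux/compile    # kernel and kernel modules
  make package/compile         # step 5: core and feed packages
  make target/install          # steps 6-7: kernel image and firmware image generation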

Patch management
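
Patches live in the patches/ subdirectory of each package (and in target/linux/<target>/patches-<version> for the kernel) and are maintained with quilt. A typical workflow for adding a patch to a package might look like the sketch below; <name> is a placeholder for the package directory.

  make package/<name>/{clean,prepare} V=s QUILT=1   # unpack the sources with the patches as a quilt series
  cd build_dir/target-*/<name>-*                    # enter the prepared source tree
  quilt push -a                                     # apply all existing patches
  quilt new 100-fix-something.patch                 # start a new patch
  quilt edit src/somefile.c                         # edit files so quilt tracks the changes
  quilt refresh                                     # write the patch out
  cd -                                              # return to the build system root
  make package/<name>/update V=s                    # copy the new patch back into package/<name>/patches/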

Packaging considerations
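
Each package is described by a Makefile built on the templates in include/. The hypothetical skeleton below, loosely following the well-known hello-world packaging tutorials, shows the overall structure; all names are placeholders, and the recipes inside the define blocks must be indented with tabs.

  include $(TOPDIR)/rules.mk

  PKG_NAME:=helloworld
  PKG_VERSION:=1.0
  PKG_RELEASE:=1

  include $(INCLUDE_DIR)/package.mk

  define Package/helloworld
    SECTION:=utils
    CATEGORY:=Utilities
    TITLE:=Minimal example package
  endef

  define Build/Prepare
  	mkdir -p $(PKG_BUILD_DIR)
  	$(CP) ./src/* $(PKG_BUILD_DIR)/
  endef

  define Build/Compile
  	$(TARGET_CC) $(TARGET_CFLAGS) -o $(PKG_BUILD_DIR)/helloworld $(PKG_BUILD_DIR)/helloworld.c
  endef

  define Package/helloworld/install
  	$(INSTALL_DIR) $(1)/usr/bin
  	$(INSTALL_BIN) $(PKG_BUILD_DIR)/helloworld $(1)/usr/bin/
  endef

  $(eval $(call BuildPackage,helloworld))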

References