docs:guide-user:services:irqbalance — last edited 2024/01/02 18:48 by palebloodsky ([Installation] mention new luci package).
To get started [[docs:
<code bash>
opkg install irqbalance
</code>
| + | |||
| + | Note if you run a build from main snapshot you can also install the new '' | ||
It will not be enabled by default. Set the following enable line to 1, then save and close:
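As a sketch, assuming the default UCI layout shipped by the irqbalance package, ''/etc/config/irqbalance'' would look like this after the change:
<code>
# assumption: the package ships a single 'irqbalance' section
config irqbalance 'irqbalance'
	option enabled '1'
</code>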
To set an IRQ to run on a specific CPU core, use echo to write the CPU mask, as a hexadecimal number, to the smp_affinity entry of the IRQ. In this example, we instruct the interrupt with IRQ number 142 to run on CPU0:
<code bash>
echo 1 > /proc/irq/142/smp_affinity
</code>
To set the core affinity use a bitmask, e.g.:
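As a hypothetical helper (the target core number below is just an example value), the mask for a single core can be computed by shifting 1 left by the CPU number:
<code bash>
# Bit N of the mask selects CPU N: CPU0 -> 1, CPU1 -> 2, CPU2 -> 4,
# CPU3 -> 8; OR the bits together to allow several cores (CPU0+CPU1 -> 3).
cpu=1                         # target core, an example value
printf '%x\n' $((1 << cpu))   # prints 2
</code>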
==== Caution ====
Irqbalance can yield performance benefits on multicore targets with enough CPU headroom to absorb the extra context switching. On 2-core targets, however, outside of benchmarking alone, there may be performance losses. This can happen if affinity selection is not done carefully (e.g. pinning ethernet to cpu0 and wireless to cpu1), which may result in increased latency or overhead, such as with simultaneous users on LAN and WLAN. Irqbalance is more viable on targets with higher core counts.
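To check whether an affinity choice is actually helping, the per-CPU interrupt counters can be watched; the ''eth'' pattern below is an assumption, so substitute your own interface name:
<code bash>
# One column per CPU; rising counts show which core services each IRQ.
grep -E 'CPU|eth' /proc/interrupts
</code>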
==== Examples ====
Note 2: upstream GitHub: https://
Note 3: discussion against irqbalance for lower core count CPUs: https://