First of all, it’s important to know which version of OpenWrt you are using. I can only help you with official OpenWrt from openwrt.org; for the R4 you can download the SD card image here.
Second, at this point you can only obtain full 10G routing performance if you enable hardware flow offloading.
If you are using the R4 as a (NAT) router, doing this should fix the speed issues:
uci set firewall.@defaults[0].flow_offloading='1'
uci set firewall.@defaults[0].flow_offloading_hw='1'
uci commit firewall
reload_config
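To confirm the settings took effect, something like this should work (assuming the default fw4/nftables firewall on recent OpenWrt; exact output may differ):

```shell
# Read back the UCI options; both should print 1
uci get firewall.@defaults[0].flow_offloading
uci get firewall.@defaults[0].flow_offloading_hw

# fw4 creates an nftables flowtable when offloading is enabled;
# look for "flags offload" to confirm the hardware variant is active
nft list flowtables
```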
If you have a bridge between the ISP’s ONT and the interface your client machine is connected to, you will, in addition to the above, also need to install bridger:
opkg update
opkg install bridger
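After installing, the daemon still needs to be running. Assuming the package ships the usual OpenWrt procd init script (which bridger does), that would look like:

```shell
# Enable bridger at boot and start it now
service bridger enable
service bridger start

# Check that it is running
service bridger status
```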
To obtain full speed in both directions simultaneously you may need to build from source and include @Dale’s patch for utilizing more than one PPE. I will integrate it into OpenWrt once it gets accepted upstream, hopefully in a couple of days.
OT, but bridger has always been a mystery to me.
From a layman’s point of view: what does it do, and when is it needed?
The term “bridge” is confusing to me, at least. Does it apply to a LAN bridge or VLAN bridge in the network config? Or when the BPI-R4 acts as a bridge (and not a NAT router) between the ONT and the client machine?
Bridger is a Linux user-space daemon which includes an eBPF tc classifier program that offloads the bridge network datapath.
Bridger creates connection tracking flows for the connections which happen within a Linux bridge between different interfaces. In this way, those flows can be added to PPE and get offloaded in hardware instead of having to travel through the CPU.
From the cases you describe, the “lan bridge” and the “bpi-r4 acts as a bridge” are the exact same. LAN/WAN terminology doesn’t matter when all interfaces involved are bridged.
bridger is not needed for hardware-offloading bridged traffic flowing between different ports of a DSA switch, such as the 4x 1GE RJ-45 ports of the R4.
However, the SFP cages are not part of that DSA switch, but are considered separate network interfaces. Hence, the only way to offload traffic between those is via the PPE, and that works based on flows which need to be tracked.
In both cases, VLANs can differ, and tagging/untagging would always be taken care of in hardware, as both the DSA switch and the PPE support doing that.
Thank you for the crystal clear explanation.
bridger is not needed in my use case, as the SFP+ ports and the DSA switch (eth0, eth1, eth2) are not bridged.
There is no major traffic originating or terminating on the BPI-R4 in my use case, so I expect that when multiple-PPE support comes online it will have ticked all my checkboxes.
Yes, if you have a bridge consisting of eth1 + ports on the DSA switch.
No, if eth1 is an independent routable interface and not part of the bridge consisting of lan1+lan2+lan3+lan4.
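To see which case applies, you can list the bridge members directly on the device (br-lan is the default bridge name on OpenWrt; yours may differ):

```shell
# List all interfaces enslaved to any bridge;
# if eth1 appears here, it is bridged and bridger is relevant
bridge link show

# Or show only the ports of a specific bridge
ip link show master br-lan
```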
This just gave me an idea that may simplify the management of the interfaces and VLANs, as I manually define VLANs based on ports and not through the bridge interface.
With bridger, I could actually create one big bridge consisting of eth1 + the 4x 1GbE ports and then use LuCI to create and manage VLANs… interesting…
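For reference, such a bridge could be sketched in /etc/config/network roughly like this (a hypothetical, untested fragment; device names follow the R4 defaults, and the VLAN IDs are made up):

```
config device
        option name 'br-lan'
        option type 'bridge'
        list ports 'lan1'
        list ports 'lan2'
        list ports 'lan3'
        list ports 'lan4'
        list ports 'eth1'

# Per-port VLAN membership, manageable in LuCI under
# Network -> Interfaces -> Devices -> br-lan -> VLANs
config bridge-vlan
        option device 'br-lan'
        option vlan '10'
        list ports 'lan1:u*'
        list ports 'eth1:t'
```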
If that bridge only consists of ports on the DSA switch, meaning lan1, lan2, lan3, lan4, then that bridge’s flows are already offloaded by the switch and bridger is not needed. I tested this.
This was why I was originally puzzled about the use of bridger.
Yes, the PPE will capture the flow, because you are then forwarding packets.
I see 0% CPU forwarding (HW NAT) between the DSA switch bridge and eth2 (SFP WAN) right now, but with just one PPE. With multiple PPEs, eth0 (DSA switch) will be tied to ppe0, eth1 to ppe1 and eth2 to ppe2.
From what I see of the bindings on the PPEs, when you move a packet from, say, SFP LAN to SFP WAN, two PPEs will be used.
No. If you are routing between WAN and LAN interfaces then you don’t need bridger. Only if eth1 (or eth2) would also be members of lanbr0, then bridger would be needed for traffic between eth0 (DSA switch), eth1 and eth2 to bypass the CPU.
Yes, for that you will need it. That was the original motivation for that tool actually.
No. To accelerate that we need RSS/LRO in the Ethernet driver. Also note that the R4 only has a single PCIe lane for the slot intended for NVMe, so that already limits the speed to 8 GT/s.
I build from source and usually update every week or so.
I have hardware flow offloading enabled in LuCI (checked software offloading first, then hardware offloading).
I don’t know if I have a bridge between my ISP’s ONT and my interface. Can I somehow figure that out? I have a symmetric 10 Gbit FTTH connection from init7.ch.
As soon as the patches are accepted, I will rebuild, install and test. Thanks!