Bridge v6 and QinQ v3 sent: https://patchwork.kernel.org/project/netdevbpf/cover/[email protected]/
https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/
The QinQ patch needs a better commit message, explaining it is for double ETH_P_8021Q only. I thought naming it Q-in-Q was enough, but it turns out it is not: Q-in-Q could also be ETH_P_8021AD.
At least one change for bridge v6, plus another commit Vladimir is still checking out. A switchdev should be a 'black box': it should not matter whether the hardware is DSA or not, they should all behave the same. But in the case of the fastpath (bypassing towards a bridged switchdev port), it turns out that there is a difference between DSA and other switchdevs. This is why I use patch 11. So Vladimir is checking out why this difference exists and, if so, whether this patch is the correct way to go…
So a bit more patience…
The RFC roaming fix has received no comments at all. I'll send it as a PATCH as soon as the bridge patch is accepted and applied.
mtk patch for roaming
Thanks for letting me know.
I do not think this is only an mtk problem. I believe a more general solution would be preferable; otherwise every PPE driver needs to implement its own handler listening for RTM_NEWNEIGH.
Edit:
Another problem is that when it finds a match, it only calls flow_offload_teardown(). This only marks the flow to be torn down. There is a delay before the hardware flow is actually torn down, and yet another delay before the software fastpath is torn down. Only when both are torn down will traffic continue on the correct path. That is quite a hiccup, which could be a lot shorter.
My patch-set adds a small piece of code so that the software fastpath is rendered inoperable when marked for teardown. It also shortens the delay of the hardware-offloaded fastpath teardown.
Also, it is mtk-independent, so it could work with any vendor's hardware-offloaded fastpath.
But anyway, I do appreciate @romanovj finding it and showing me the link to the mtk patch.
I just thought that there might be something interesting for you to see. I think your roaming patch is easier to understand.
Quite off-topic, but I think we could also try to polish the “netfilter: add bridging support to xt_FLOWOFFLOAD” patch and send it to upstream? It has been kept in downstream for years…
Maybe later, much later. As you can read in Pablo's answer, my patch-set will already take a lot of time.
Btw, the mtk Q-in-Q patch has been accepted and applied to net-next.
The main patch-set will be split up.
It seems not all your patches from v9 are in your 3x v10 series. Will they be sent in the future?
Yes they will. But not all at once.
I'm preparing the next versions of the patch-sets: some more fixes and a better test script. After 7 April I can send them.
Follow the branch bpir-nftflow-nf-next
A new version of the patch-sets is in nf-next.
I've written a test script for testing the forward fastpath and the bridge fastpath.
It can run with veth devices, but also on real hardware. It is very useful for testing the fastpath on real live DSA or other switchdev ports, and it can also test the hardware-offloaded fastpath.
Finally I can test HW_OFFLOAD using only one board (instead of three).
See the description:
Hopefully it will help speed up the reviewing process.
@ericwoud Btw, there's another patch from Felix, which fixes a bug and has been put in the OpenWrt code. But Felix didn't improve the patch according to the review, nor did he reply to my ping. Can you polish it and push it upstream?
I do not know enough about this subject to send a bugfix.
Perhaps you can follow the suggestion and send it?
I sent v2 here, but I don't know enough about this either.
@ericwoud What’s current mainline status of your patches? Sorry to bother but it’s quite difficult to track them since they are separated to different series.
There is an issue that needs a solution, so it is halted. Soon I will ask the netfilter maintainers how they want to continue: see if Florian's question can get an answer, and whether they want to continue at all. I was on holiday, so it has been a while since anything happened.
@ericwoud can you reproduce this issue when using WED with your patch? See nbd168/bridger issue #6 on GitHub: "Bridger internet issues when roaming between multiple APs with bridger installed".
I’m curious if it’s bridger’s issue or WED’s issue.
You can try applying these 2:
netfilter: flow: Add bridge_vid member
netfilter: nf_flow_table_core: teardown direct xmit when destination changed
They handle removing the fastpath flow entry when an fdb entry is deleted, including deletion when roaming.
I can't try this now, but I posted these patches on the original issue, looking for someone to test them.
Ok, but I think this is not the final patch, it will change some more…