I have no other documentation, and I'm no network expert, so I have no idea about double tagging.
Perhaps you can test the patch and reply to the RFC saying that you tested it…
Hmmm, I don’t have a QinQ or PPPoEinQ environment to test that.
Suggestion: maybe you could contact the MTK vendor's Tech FAE to get more support and a resolution.
Now I also have an experimental patch to fix roaming troubles when flow offload is in effect.
The roaming patchset will be the last 3 commits…
Will you separate the roaming patches from the rest?
For now I have separated them, because the main patch set is beyond RFC while the roaming set is still RFC.
But the roaming set needs the main set to apply correctly.
Bridge v6 and QinQ v3 sent: https://patchwork.kernel.org/project/netdevbpf/cover/[email protected]/
https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/
The QinQ patch needs a better commit message, explaining that it is for double ETH_P_8021Q only. I thought naming it Q-in-Q would be enough, but it turns out that it is not: a Q-in-Q outer tag could still be ETH_P_8021AD.
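To illustrate the distinction (just a sketch of the check, not the actual patch code): only frames where both the outer and the inner tag carry the plain 802.1Q ethertype qualify, while an 802.1ad (ETH_P_8021AD, 0x88A8) outer tag would have to take the classic path.

```c
#include <linux/if_ether.h>
#include <linux/if_vlan.h>
#include <linux/skbuff.h>

/* Illustrative helper, not from the patch: true only for frames with
 * two stacked 802.1Q tags, as seen on the raw MAC header. */
static bool is_double_8021q(const struct sk_buff *skb)
{
	const struct vlan_ethhdr *veth =
		(const struct vlan_ethhdr *)skb_mac_header(skb);

	/* Outer tag must be ETH_P_8021Q (0x8100); an ETH_P_8021AD
	 * (0x88A8) outer tag means 802.1ad, not plain Q-in-Q. */
	if (veth->h_vlan_proto != htons(ETH_P_8021Q))
		return false;

	/* The inner tag must be plain 802.1Q as well. */
	return veth->h_vlan_encapsulated_proto == htons(ETH_P_8021Q);
}
```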
At least one change is needed for bridge v6, and there is another commit Vladimir is still checking out. A switchdev should be a 'black box': it should not matter whether the hardware is DSA or not, they should all behave the same. But in the case of the fastpath (bypassing the bridge towards a bridged switchdev port), it turns out there is a difference between DSA and other switchdevs, which is why I use patch 11. So Vladimir is checking out why this difference exists and, if it does, whether this patch is the correct way to go…
So a bit more patience…
There are no comments at all on the RFC roaming fix. I'll send it as PATCH as soon as the bridge patch is accepted and applied.
mtk patch for roaming
Thanks for letting me know.
I do not think this is only an mtk problem. I believe a more general solution would be preferable; otherwise every PPE driver needs to implement its own handler listening for RTM_NEWNEIGH.
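For reference, the per-driver approach would look roughly like this in-kernel (a hedged sketch; ppe_flush_flows_for_neigh() is a hypothetical driver hook, not a real function): each driver registers its own netevent notifier and reacts to neighbour updates itself.

```c
#include <linux/notifier.h>
#include <net/netevent.h>
#include <net/neighbour.h>

/* Hypothetical driver hook: tear down hardware flows whose destination
 * MAC no longer matches this neighbour entry. */
static void ppe_flush_flows_for_neigh(struct neighbour *neigh);

static int ppe_netevent_cb(struct notifier_block *nb,
			   unsigned long event, void *ptr)
{
	/* NETEVENT_NEIGH_UPDATE is the in-kernel counterpart of the
	 * RTM_NEWNEIGH netlink notification sent to userspace. */
	if (event != NETEVENT_NEIGH_UPDATE)
		return NOTIFY_DONE;

	ppe_flush_flows_for_neigh(ptr);
	return NOTIFY_OK;
}

static struct notifier_block ppe_netevent_nb = {
	.notifier_call = ppe_netevent_cb,
};

/* At probe time: register_netevent_notifier(&ppe_netevent_nb); */
```

Multiplying this boilerplate across every PPE driver is exactly what a generic solution in the flowtable core would avoid.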
Edit:
Another problem is that when it finds a match, it only calls flow_offload_teardown(). This only marks the flow to be torn down. There is a delay before the hardware flow is actually torn down, and then yet another delay before the software fastpath is torn down. Only when both are torn down does traffic continue on the correct path. That is quite a hiccup, and it could be a lot shorter.
My patch set adds a small piece of code so that the software fastpath is rendered inoperable as soon as the flow is marked for teardown. It also shortens the delay of the hardware-offloaded fastpath teardown.
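The idea, roughly (a sketch under my reading of the flowtable code, not the literal diff): flow_offload_teardown() sets NF_FLOW_TEARDOWN in flow->flags, so the software fastpath hook can refuse flows carrying that bit and hand packets straight back to the classic path instead of waiting for garbage collection.

```c
#include <net/netfilter/nf_flow_table.h>

/* Sketch: skip flows already marked by flow_offload_teardown(), so
 * packets fall back to the classic forwarding path immediately
 * instead of riding a stale fastpath until the GC removes it. */
static bool flow_offload_still_usable(const struct flow_offload *flow)
{
	return !test_bit(NF_FLOW_TEARDOWN, &flow->flags);
}
```

In a hook like nf_flow_offload_ip_hook(), returning NF_ACCEPT when this check fails is what hands the packet back to the normal stack.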
Also, it is mtk-independent, so it could work for any vendor's hardware-offloaded fastpath.
But anyway, I do appreciate @romanovj finding it and showing me the link to the mtk patch.
I just thought that there might be something interesting for you to see. I think your roaming patch is easier to understand.
Quite off-topic, but I think we could also try to polish the "netfilter: add bridging support to xt_FLOWOFFLOAD" patch and send it upstream? It has been kept downstream for years…
Maybe later, much later. As you can read in Pablo's answer, my patch set alone will already take a lot of time.
Btw, the mtk Q-in-Q patch has been accepted and applied to net-next.
The main patch set will be split up.