[bpi-r2] network interrupts

Hi,

I did some testing with streaming and noticed that while routing between LAN and WAN/PPPoE, interrupts count up only on cpu0, and the stream stalls from time to time. There is not much CPU usage in top, but it seems the interrupts are causing this.

Afair there was a way to change the CPU, or maybe use all CPUs, but I don't remember how. Afaik it is called affinity, but I only found the thread about the crypto driver.

$ cat /proc/interrupts | grep eth
240:  147961822          0          0          0  MT_SYSIRQ 199 Level     1b100000.ethernet
241:  136396797          0          0          0  MT_SYSIRQ 198 Level     1b100000.ethernet

So only the first CPU is used, and some more interrupts operate only on cpu0. I found no drops or similar on the network interfaces I use for this (eth0, lan0 and wan); the vlan and pppoe interfaces had no statistics in ethtool.

I tried moving the IRQs to different CPUs. The counting was not clear to me at first: smp_affinity takes a hex CPU bitmask, where bit n selects CPU n (1 = cpu0, 2 = cpu1, 4 = cpu2, 8 = cpu3):

echo 2 > /proc/irq/240/smp_affinity #2 moves to cpu1
echo 4 > /proc/irq/241/smp_affinity #4 moves to cpu2

[14:22] root@bpi-r2-e:~ (349)# cat /proc/interrupts | grep eth
240:  147967496       1210          0          0  MT_SYSIRQ 199 Level     1b100000.ethernet
241:  136403870          0        728          0  MT_SYSIRQ 198 Level     1b100000.ethernet

After ~1 minute I see these counts (~1000 interrupts/min). Isn't that a lot?
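
For reference, the kernel also exposes the affinity as a plain CPU list, which is easier to read back than the hex bitmask (a quick sanity check; the IRQ numbers are the ones from above):

cat /proc/irq/240/smp_affinity_list   # should now print 1 (cpu1)
cat /proc/irq/241/smp_affinity_list   # should now print 2 (cpu2)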

I guess the interrupts are caused by NAT. My setup is wan + vlan + pppoe with NAT, routed to lan0 through iptables (which may also cause CPU load/interrupts); roughly like the sketch below.
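
A minimal sketch of that NAT part, assuming masquerading on the pppoe interface (ppp8, as used later in this thread); this is not my exact ruleset:

# NAT everything leaving via the pppoe interface
iptables -t nat -A POSTROUTING -o ppp8 -j MASQUERADE
# forward lan0 <-> ppp8, only established/related traffic back in
iptables -A FORWARD -i lan0 -o ppp8 -j ACCEPT
iptables -A FORWARD -i ppp8 -o lan0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT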

Do you mean this? 🙂

Have you done some tests with it? RPS is maybe causing the transmit timeout issue.

How does it manifest? I haven’t been using R2 much lately, but starting yesterday, I’ve been using it to transfer files and I haven’t experienced any timeouts. It is a gateway between NAS and PC and it has been stable so far.

The transmit timeout is logged to dmesg with a backtrace.

Imho somewhere in this thread it was mentioned that RPS causes this timeout, but I can't find it right now.

So you need to restart after the timeout occurs? A timeout does not sound to me like a critical error that needs a restart, but I remember that I had to restart the R2 because it stopped responding.

I have not seen it for a long time (afaik only with the second-gmac patches before phylink), but some users did. Afaik the network was broken after it.

After 20 days I have not had any crashes with RPS.

Can you give me your RPS (and maybe other) config?

Which kernel do you use?

Hi, I basically use any core other than the first one.

So in my RPS settings for the different interfaces I have 2 and 4. Check out the interfaces config.

I used the Debian kernel from your repo, built from source, and then only upgraded.

I have moved the interrupts to the second and third CPU but have not yet changed RPS. As far as I can see, you moved wan (2) and the LAN bridge (4) via post-up:

post-up echo 2 > /sys/class/net/wan/queues/rx-0/rps_cpus

post-up echo 4 > /sys/class/net/br0/queues/rx-0/rps_cpus
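
For completeness, a sketch of how such post-up lines could sit in /etc/network/interfaces; the addresses and bridge ports are placeholders, not the actual config:

auto wan
iface wan inet dhcp
    post-up echo 2 > /sys/class/net/wan/queues/rx-0/rps_cpus

auto br0
iface br0 inet static
    address 192.168.0.1/24
    bridge_ports lan0 lan1
    post-up echo 4 > /sys/class/net/br0/queues/rx-0/rps_cpus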

Edit: I tried your settings, but I still have issues while streaming (a hang every minute):

echo 2 > /proc/irq/240/smp_affinity  # cpu1
echo 4 > /proc/irq/241/smp_affinity  # cpu2
echo 2 > /sys/class/net/lan0/queues/rx-0/rps_cpus
echo 4 > /sys/class/net/lanbr0/queues/rx-0/rps_cpus

# for wan: pppoe (ppp8) runs on vlan wan.140
echo 2 > /sys/class/net/wan/queues/rx-0/rps_cpus
echo 2 > /sys/class/net/ppp8/queues/rx-0/rps_cpus
echo 2 > /sys/class/net/wan.140/queues/rx-0/rps_cpus
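
To double-check what actually got applied on every interface, a quick one-liner (nothing R2-specific; grep just prints filename:value pairs):

grep . /sys/class/net/*/queues/rx-0/rps_cpus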

It turns out that the issue was caused by too-big packets (MTU 1500 vs. PPPoE's 1492): my smart TV ignored the MTU option broadcast by dnsmasq, while it worked on the laptop.
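
The dnsmasq MTU broadcast mentioned above is, I assume, DHCP option 26 (interface MTU), configured along these lines (the value matches the PPPoE MTU):

# /etc/dnsmasq.conf
dhcp-option=option:mtu,1492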

I added an MSS fix to my nftables ruleset:

chain FORWARD {
    type filter hook forward priority 0; policy drop;
    # https://wiki.nftables.org/wiki-nftables/index.php/Mangling_packet_headers
    # MSS fix for pppoe: 1500 - 8 (PPPoE) - 20 (IPv4) - 20 (TCP) = 1452
    oifname $ifwan tcp flags syn tcp option maxseg size set 1452
    # alternative: clamp to the route's path MTU instead of a fixed value
    #oifname $ifwan tcp flags syn tcp option maxseg size set rt mtu
    # ... remaining forward rules ...
}

And now it is working so far.
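
To verify the clamp actually takes effect, one can watch the MSS advertised in SYN packets leaving the pppoe interface (ppp8 as above; standard tcpdump, -v prints TCP options including mss):

tcpdump -ni ppp8 -v 'tcp[tcpflags] & tcp-syn != 0'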

While changing smp_affinity I noticed that the IRQ numbers changed between kernel 5.4 and 5.10, so I now look them up dynamically:

# look up the ethernet IRQ numbers dynamically (strip the trailing colon from the first field)
IntArray=( $(grep eth /proc/interrupts | awk '{sub(/[^0-9]/, "", $1); print $1}') )
#echo "first IRQ: ${IntArray[0]}"
#echo "second IRQ: ${IntArray[1]}"
echo 2 > /proc/irq/${IntArray[0]}/smp_affinity  # cpu1
echo 4 > /proc/irq/${IntArray[1]}/smp_affinity  # cpu2
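
To have this survive reboots, it could run from a boot hook; a sketch assuming a systemd oneshot unit (the unit name and script path are made up):

# /etc/systemd/system/eth-irq-affinity.service
[Unit]
Description=Pin ethernet IRQs to cpu1/cpu2
After=network-pre.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/eth-irq-affinity.sh

[Install]
WantedBy=multi-user.target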