BPI-R2 slow ethernet speed

Important info: this issue only appears if more than one LAN port is connected; my own tests were only over a single LAN/WAN port, or from LAN to WAN.

On my main router I also have only one LAN port connected, because this LAN is a trunk to a managed switch where my other devices are connected.

@Ryder.Lee / @moore / @linkerosa / @Jackzeng: can the BPI-R2 operate without the DSA driver in 4.14+ like it does in 4.4 (eth1 fixed to wan, eth0 to the LAN ports)?

As I mentioned in the other thread, I’m getting slow speeds. I’ll put my network configuration details in here.

My main router is a Netgear WNDR3800 running OpenWRT. I was testing out the Banana Pi R2, hoping it could replace the Netgear. The Netgear has an internal IP address of 192.168.15.1. It connects to my ISP using PPPoE. I posted details of its configuration on the OpenWRT board.

One of the LAN ports of my router is connected to a switch. I am running the test between two computers. One of the computers, mihoshi, is connected to that switch and has an IP of 192.168.15.2. Because my main internet connection goes through PPPoE, mihoshi’s MTU is 1492.

The wan port of the Banana Pi R2 is connected to the switch. The other computer, ryoko, is connected to lan3 of the Banana Pi R2. The Banana Pi is performing NAT between the two computers. I’ve set ryoko’s MTU to 1492 to match mihoshi’s.
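
For reference, setting the MTU on Linux looks like this (the interface name here is ryoko’s, from the ifconfig output further down):

$ sudo ip link set dev enp4s0 mtu 1492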

Here’s how I’m measuring the speed:

$ ssh [email protected] 'cat /dev/zero' | pv >/dev/null
^C02GiB 0:01:29 [37.3MiB/s]

I think mihoshi and ryoko’s network settings are correct, because the same test gets about 90 MiB/s when I use an EspressoBin instead of the Banana Pi.

Here is the configuration of my Banana Pi:

root@bpi-r2:~# uname -a
Linux bpi-r2 4.14.80-bpi-r2-main #177 SMP Sun Nov 11 10:03:58 CET 2018 armv7l GNU/Linux
root@bpi-r2:~# iptables-save 
# Generated by iptables-save v1.6.0 on Sun Jan 20 09:26:23 2019
*filter
:INPUT ACCEPT [8827:730234]
:FORWARD ACCEPT [54709:25786012]
:OUTPUT ACCEPT [9931:1198866]
COMMIT
# Completed on Sun Jan 20 09:26:23 2019
# Generated by iptables-save v1.6.0 on Sun Jan 20 09:26:23 2019
*nat
:PREROUTING ACCEPT [6742:849916]
:INPUT ACCEPT [3444:241724]
:OUTPUT ACCEPT [2492:166578]
:POSTROUTING ACCEPT [11:1615]
-A POSTROUTING -o wan -j MASQUERADE
COMMIT
# Completed on Sun Jan 20 09:26:23 2019
root@bpi-r2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 3e:8c:11:69:29:8e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3c8c:11ff:fe69:298e/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:b1:52:1b:25:98 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::50b1:52ff:fe1b:2598/64 scope link 
       valid_lft forever preferred_lft forever
4: wan@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 10000
    link/ether 52:b1:52:1b:25:98 brd ff:ff:ff:ff:ff:ff
    inet 192.168.15.140/24 brd 192.168.15.255 scope global wan
       valid_lft forever preferred_lft forever
    inet6 2602:ae:1592:e100:50b1:52ff:fe1b:2598/64 scope global mngtmpaddr dynamic 
       valid_lft forever preferred_lft forever
    inet6 fe80::50b1:52ff:fe1b:2598/64 scope link 
       valid_lft forever preferred_lft forever
5: lan0@eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master br0 state LOWERLAYERDOWN group default qlen 1000
    link/ether 3e:8c:11:69:29:8e brd ff:ff:ff:ff:ff:ff
6: lan1@eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master br0 state LOWERLAYERDOWN group default qlen 1000
    link/ether 3e:8c:11:69:29:8e brd ff:ff:ff:ff:ff:ff
7: lan2@eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master br0 state LOWERLAYERDOWN group default qlen 1000
    link/ether 3e:8c:11:69:29:8e brd ff:ff:ff:ff:ff:ff
8: lan3@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether 3e:8c:11:69:29:8e brd ff:ff:ff:ff:ff:ff
9: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3e:8c:11:69:29:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.1/24 brd 192.168.2.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::3c8c:11ff:fe69:298e/64 scope link 
       valid_lft forever preferred_lft forever
root@bpi-r2:~# lsmod
Module                  Size  Used by
iptable_filter         16384  0
mtkhnat                24576  0
ipt_MASQUERADE         16384  1
nf_nat_masquerade_ipv4    16384  1 ipt_MASQUERADE
iptable_nat            16384  1
nf_conntrack_ipv4      16384  2
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
nf_nat_ipv4            16384  1 iptable_nat
nf_nat                 32768  2 nf_nat_masquerade_ipv4,nf_nat_ipv4
nf_conntrack          126976  5 nf_conntrack_ipv4,ipt_MASQUERADE,nf_nat_masquerade_ipv4,nf_nat_ipv4,nf_nat
bridge                151552  0
mtk_thermal            16384  0
thermal_sys            61440  1 mtk_thermal
spi_mt65xx             20480  0
pwm_mediatek           16384  0
mt6577_auxadc          16384  0
nvmem_mtk_efuse        16384  0
mtk_pmic_keys          16384  0
rtc_mt6397             16384  1
ip_tables              24576  2 iptable_filter,iptable_nat
x_tables               28672  3 ip_tables,iptable_filter,ipt_MASQUERADE
ipv6                  409600  23 bridge

Since you said you had difficulty reproducing this, I ran a speed test to the internet from ryoko. Strangely, I am getting 355Mb/s download and 308Mb/s upload, which is much higher than I expected considering the problems talking to a much closer computer. When ryoko connects directly to the switch, it gets 553Mb/s download and 451Mb/s upload.

Netgear (main router): 192.168.15.1, connected to wan, right?
bpi-r2: wan: 192.168.15.140/24, br0 (all lan-ports): 192.168.2.1/24, NAT on wan
mihoshi (client/ssh-target): 192.168.15.2 => seems to be wrong
ryoko (other client): x.x.x.x ?

Your LANs are bridged together into br0 with 192.168.2.1/24, so your mihoshi & ryoko should have 192.168.2.x/24 addresses.

From which system do you run the ssh? I guess from the R2 to your client… then the bottleneck could be the R2’s CPU.
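
One way to check that is to watch the CPU load on the R2 while the transfer runs (mpstat is in the sysstat package; plain top works too):

root@bpi-r2:~# mpstat -P ALL 1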

Ryoko is using DHCP from the Banana Pi. Currently it has the address 192.168.2.135. The wan port of the Banana Pi connects to a switch. Mihoshi does not connect to the Banana Pi; it connects directly to the switch.

Ryoko’s and Mihoshi’s networks are not bridged through br0. Traffic between them goes through NAT translation in the Banana Pi.

[kyle@ryoko ~]$ ifconfig
enp0s18f2u4: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 00:24:9b:06:06:9c  txqueuelen 1000  (Ethernet)
        RX packets 780962  bytes 985254421 (939.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 778884  bytes 704802735 (672.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp4s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1492
        inet 192.168.2.135  netmask 255.255.255.0  broadcast 192.168.2.255
        inet6 fe80::7f9b:6f9d:729d:78c9  prefixlen 64  scopeid 0x20<link>
        ether bc:5f:f4:af:1b:83  txqueuelen 10000  (Ethernet)
        RX packets 850718674  bytes 1219355660935 (1.1 TiB)
        RX errors 4  dropped 60  overruns 0  frame 6
        TX packets 247260591  bytes 146336285643 (136.2 GiB)
        TX errors 2  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 104706516  bytes 220974209428 (205.7 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 104706516  bytes 220974209428 (205.7 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
[kyle@ryoko ~]$ ssh [email protected] 'dd if=/dev/zero of=/dev/stdout bs=$((1024 * 1024)) count=1000' >/dev/null
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 27.7315 s, 37.8 MB/s
[kyle@ryoko ~]$ dd if=/dev/zero of=/dev/stdout bs=$((1024 * 1024)) count=1000 | ssh [email protected] 'cat >/dev/null' 
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 9.3078 s, 113 MB/s

Interesting… that means the problem is not encryption but the generation of data via dd… though in both commands the data is not generated on the R2…

192.168.15.2 seems to have problems generating the data, or it is direction-related… ryoko to 192.168.15.2 seems to work well; the opposite direction is bad.
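
Maybe also test both directions without ssh/dd, e.g. with iperf3 if it is available on both machines; the -R option reverses the direction, so both ways can be tested from one side:

[kyle@mihoshi ~]$ iperf3 -s
[kyle@ryoko ~]$ iperf3 -c 192.168.15.2       # ryoko -> mihoshi
[kyle@ryoko ~]$ iperf3 -c 192.168.15.2 -R    # mihoshi -> ryoko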


Create an NFS share or use FTP - you have to rule out encryption overhead. Given that multiple reports seem to indicate faster ‘in’ than ‘out’, to my eyes the biggest culprit here is those 4 configurable LAN ports - the way they are exposed as ‘virtual’ interfaces and their shared lane to the CPU.

Maybe it’s the actual hardware - though it’s likely software - but clearly there’s some kind of round-robin algorithm on some level that is causing an N^2 decrease in performance in one direction. That means there’s SOMETHING - a process, some code - operating per port that causes a delay proportional to the total number of used ports.

for each port do

{PROBLEM IS HERE}

end

I’m working on the lima/mali stuff right now - I’ll try to fix this eventually. We’ll fix it like you optimize a game engine - we’ll profile every single call on the kernel side that is MTK specific and networking specific.

Run one interface - benchmark, run two - etc etc

See which calls are taking up a relatively larger amount of time as you transfer one way, the other, and with one, two, three etc connected machines.
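
For example, with perf (assuming a perf build matching the running kernel is available on the R2), something like this while a transfer is running:

perf record -a -g -- sleep 30
perf report --sort symbol

That records system-wide with call graphs for 30 seconds, then shows which symbols dominate.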

That’ll track down this problem - we’ll see the culprit code or we’ll determine a hardware limitation - but we’ll find it.

EDIT: I’m not saying that’s easy - but presumably we can generate Valgrind data, and if we can’t, then we can just do our own timing routines and dump to a log - whatever. N^2 issues will stick out like a sore thumb during profiling.

It’s not due to encryption. I’ve been trying out different routers. If I replace the Banana Pi with an EspressoBin, the same benchmarks between the two computers get 90 MiB/s.

I did some more tests. As far as I can tell, the router is slow with incompressible data, but that doesn’t make any sense. I’ve never heard of a NAT trying to do compression.

I get fast speeds sending all 0s:

[kyle@mihoshi ~]$ dd if=/dev/zero of=/dev/stdout bs=$((1024 * 1024)) count=1000 | socat -u STDIN TCP4-LISTEN:25566,reuseaddr
[kyle@ryoko ~]$ socat -u TCP4:192.168.15.2:25566 STDOUT | pv >/dev/null
1000MiB 0:00:10 [94.8MiB/s] [

I get fast speeds sending all 1s:

[kyle@mihoshi ~]$ dd if=/dev/zero of=/dev/stdout bs=$((1024 * 1024)) count=1000 | tr '\000' '\377'  | socat -u STDIN TCP4-LISTEN:25566,reuseaddr

[kyle@ryoko ~]$ socat -u TCP4:192.168.15.2:25566 STDOUT | pv >/dev/null
1000MiB 0:00:10 [94.0MiB/s] [  

However, sending a zip file (incompressible data) is slow:

[kyle@mihoshi ~]$ dd if=/backup/dimension_zero/auto_backup/2018-12-14-13-01-23.zip of=/dev/stdout bs=$((1024 * 1024)) count=1000 | socat -u STDIN TCP4-LISTEN:25566,reuseaddr
[kyle@ryoko ~]$ socat -u TCP4:192.168.15.2:25566 STDOUT | pv >/dev/null
1000MiB 0:00:29 [33.9MiB/s]

This shows that reading the file is not slow (the OS already cached it before I ran this test or the previous test, and I reran the network test afterwards too and got the same number):

[kyle@mihoshi ~]$ dd if=/backup/dimension_zero/auto_backup/2018-12-14-13-01-23.zip of=/dev/stdout bs=$((1024 * 1024)) count=1000 | cat >/dev/null
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.359988 s, 2.9 GB/s

I tried having the router receive the data, which would skip the NAT part. It does better:

[kyle@mihoshi ~]$ dd if=/backup/dimension_zero/auto_backup/2018-12-14-13-01-23.zip of=/dev/stdout bs=$((1024 * 1024)) count=1000 | socat -u STDIN TCP4-LISTEN:25566,reuseaddr

root@bpi-r2:~# socat -u TCP4:192.168.15.2:25566 STDOUT | pv >/dev/null
1000MiB 0:00:14 [68.5MiB/s] [

Probably the TCP buffers aren’t optimized well on the router yet (which makes no difference for its ability to NAT). Increasing the socat transfer size worked around this. I went back and tried a larger transfer size on ryoko, and it made no difference.

[kyle@mihoshi ~]$ dd if=/backup/dimension_zero/auto_backup/2018-12-14-13-01-23.zip of=/dev/stdout bs=$((1024 * 1024)) count=1000 | socat -u STDIN TCP4-LISTEN:25566,reuseaddr

root@bpi-r2:~# socat -b$((1024 * 1024)) -u TCP4:192.168.15.2:25566 STDOUT | pv >/dev/null
1000MiB 0:00:11 [87.3MiB/s] [
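
For anyone who wants to poke at the same thing, these are the kernel knobs I mean; the -w line is only an example value, not a recommendation:

root@bpi-r2:~# sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.rmem_max
root@bpi-r2:~# sysctl -w net.core.rmem_max=6291456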

I did a tcpdump on ryoko and mihoshi during the speed test. Some packets from mihoshi to ryoko are dropped. Ryoko sees this and reduces the TCP window size. Packets are periodically dropped during the rest of the connection as it tries to zero in on the max speed.

Interestingly, roughly the 4th through 8th packets are dropped. I would have expected the router to have a large enough buffer to hold those packets and to drop later packets when the buffer overflows.
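
For anyone who wants to reproduce the capture, something along these lines on each machine is enough (the interface name is from my setup, and the port matches the socat tests above):

[kyle@ryoko ~]$ sudo tcpdump -i enp4s0 -w ryoko.pcap tcp port 25566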


@Ryder.Lee do you have an idea how to debug this?

@bluelightning32 - thank you very much for the extensive test results. Unusual behavior! Nothing obvious springs to mind; the relative performance is definitely indicative of something, and it narrows down the possibilities for sure. It ‘feels’ like a bus contention issue or some kind of buffer overflow problem, as you suggest - but it’s nothing I’ve personally encountered before with networking gear.

Glad to see the dam breaking on this issue. I had to resort to other hardware to keep my project going, but I would circle back to the R2 if someone needs a solution tested here. bluelightning32’s results appear consistent with the problems/experiments I ran.

Any updates on this?

I’m observing a similar problem, but much worse than @bluelightning32’s. When downloading through the masquerading of the BPI-R2 I only get a few hundred kbit/s. This makes the R2 unusable as a router for me right now.

When downloading to the Pi itself, or uploading through it, the speed is OK, though. I can also confirm that there is a difference between kinds of data: with if=/dev/zero it is fast, but with if=/dev/urandom it is slow. Also, just downloading files through it (e.g. a Debian DVD ISO) runs at the same few hundred kbit/s.
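
To rule out /dev/urandom itself being the bottleneck (it can be slow to generate on small ARM boards), one can pre-generate a random file and send that instead, roughly like this (<receiver> is a placeholder):

dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=1000
dd if=/tmp/rand.bin bs=1M | pv | netcat <receiver> 9999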

I’ve tried with the official kernel (https://github.com/BPI-SINOVOIP/BPI-R2-bsp) and with @frank-w’s (https://github.com/frank-w/BPI-R2-4.14), and also with hwnat (http://www.fw-web.de/dokuwiki/doku.php?id=en:bpi-r2:hwnat) on.

I will be doing some more detailed tests later this week and will post the output as well as my configuration here.

Disable TSO on eth0 with ethtool and see if it helps the issue, using the following command:

ethtool -K eth0 tso off
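
You can list the current offload settings before and after with the lowercase option (uppercase -K changes them, lowercase -k shows them):

ethtool -k eth0 | grep offload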

nope, that does not help

Does not seem to help me either. I think it made it worse for me; I tried to undo it by calling ethtool -K eth0 tso on, but that didn’t seem to help, so maybe it got worse because of something else.

I also finally got to doing some measurements. My setup is:

Server <--> [WAN] BananaPi [LAN] <--> Laptop

Currently the LAN port is bridged with another LAN port on the Pi and hwnat is on. But I tried earlier without these and it didn’t seem to help. If I have time I will run the measurements again with a fresh bare-minimum setup.

I ran three different tests:

  1. dd if=/dev/zero | pv | netcat <receiver> 9999 and on the other side netcat -lp 9999 | pv | dd of=/dev/null (full commands spelled out below)
  2. dd if=/dev/urandom | pv | netcat <receiver> 9999 and on the other side the same as above
  3. Hosting a Debian netinst image with apache2 and using wget to download it.
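
Spelled out, the netcat tests look roughly like this (<receiver> is a placeholder; some netcat variants want netcat -l 9999 instead of -lp 9999):

receiver: netcat -lp 9999 | pv | dd of=/dev/null
sender:   dd if=/dev/zero | pv | netcat <receiver> 9999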

So here are the numbers:

Server <--> Pi

  netcat zero       Server -> Pi:  roughly 60 MB/s (480 Mbit/s)
                    Pi -> Server:  roughly 65 MB/s (520 Mbit/s)
  netcat urandom    Server -> Pi:  roughly 7 MB/s (56 Mbit/s)
                    Pi -> Server:  roughly 9 MB/s (72 Mbit/s)
  wget debian iso   Server -> Pi:  roughly 6 MB/s (48 Mbit/s)
                    Pi -> Server:  roughly 109 MB/s (872 Mbit/s)

Pi <--> Laptop

  netcat zero       Pi -> Laptop:  roughly 0.6 MB/s (4.8 Mbit/s)
                    Laptop -> Pi:  roughly 80 MB/s (640 Mbit/s)
  netcat urandom    Pi -> Laptop:  roughly 0.0001 MB/s (0.0008 Mbit/s)
                    Laptop -> Pi:  roughly 60 MB/s (480 Mbit/s)
  wget debian iso   Pi -> Laptop:  roughly 0.1 MB/s (0.8 Mbit/s), getting a lot slower after a few seconds
                    Laptop -> Pi:  roughly 15 MB/s (120 Mbit/s)

Server <--Pi--> Laptop

  netcat zero       Server -> Laptop:  roughly 12 MB/s (96 Mbit/s)
                    Laptop -> Server:  roughly 55 MB/s (440 Mbit/s)
  netcat urandom    Server -> Laptop:  too slow to measure
                    Laptop -> Server:  roughly 55 MB/s (440 Mbit/s)
  wget debian iso   Server -> Laptop:  too slow to measure

Then I investigated the “too slow to measure” netcat case a bit. I tried setting the blocksize of dd to 100 bytes; then around 3000 to 5000 bytes went through before it stopped. With a blocksize of 10 bytes, around 20000 bytes went through.
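
In terms of commands, that experiment was roughly the following, with the receiver side unchanged (<receiver> again a placeholder):

dd if=/dev/urandom bs=100 | pv | netcat <receiver> 9999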

Keep in mind that something may be wrong with my settings, as it was not that bad before (at least some kbit/s instead of too slow to measure). As I said, I will try with a fresh image once I have time.

I did some similar speed tests with the BPI-R2 device, and on the wan/ppp0 interface I get at most 250 Mbps. The tests were done using the latest kernels, 4.19.27 and 4.14.104 (from Frank’s GitHub repo), with HW NAT enabled for 4.14, but there was no difference in speed. I’m using Debian 10 Buster.

I have a 1Gbps Internet connection.

With my old gigabit TP-Link 1043ND v2 router running DD-WRT with HW NAT enabled, I get over 900 Mbps on the speed tests. I’ll try the latest OpenWRT 18.06 image and check whether I get better speeds.

I just saw some crashes and timeouts related to the network interfaces in the log:

Mar 09 00:36:17 bpi-r2 kernel: WARNING: CPU: 3 PID: 0 at net/sched/sch_generic.c:320 dev_watchdog+0x27c/0x280
Mar 09 00:36:17 bpi-r2 kernel: NETDEV WATCHDOG: eth0 (mtk_soc_eth): transmit queue 0 timed out
Mar 09 00:36:17 bpi-r2 kernel: Modules linked in: ipt_MASQUERADE nf_nat_masquerade_ipv4 xt_TCPMSS xt_tcpmss xt_tcpudp aes_arm_bs crypto_simd cryptd tun des_generic ip6table_nat nf_conntr
Mar 09 00:36:17 bpi-r2 kernel: CPU: 3 PID: 0 Comm: swapper/3 Tainted: G           O    4.14.104-bpi-r2-main #1
Mar 09 00:36:17 bpi-r2 kernel: Hardware name: Mediatek Cortex-A7 (Device Tree)
Mar 09 00:36:17 bpi-r2 kernel: [<c0113040>] (unwind_backtrace) from [<c010d2ec>] (show_stack+0x20/0x24)
Mar 09 00:36:17 bpi-r2 kernel: [<c010d2ec>] (show_stack) from [<c0aac344>] (dump_stack+0x90/0xa4)
Mar 09 00:36:17 bpi-r2 kernel: [<c0aac344>] (dump_stack) from [<c0125d34>] (__warn+0xf8/0x110)
Mar 09 00:36:17 bpi-r2 kernel: [<c0125d34>] (__warn) from [<c0125d94>] (warn_slowpath_fmt+0x48/0x50)
Mar 09 00:36:17 bpi-r2 kernel: [<c0125d94>] (warn_slowpath_fmt) from [<c08deca8>] (dev_watchdog+0x27c/0x280)
Mar 09 00:36:17 bpi-r2 kernel: [<c08deca8>] (dev_watchdog) from [<c019b194>] (call_timer_fn+0x50/0x198)
Mar 09 00:36:17 bpi-r2 kernel: [<c019b194>] (call_timer_fn) from [<c019b3c8>] (expire_timers+0xec/0x14c)
Mar 09 00:36:17 bpi-r2 kernel: [<c019b3c8>] (expire_timers) from [<c019b4d0>] (run_timer_softirq+0xa8/0x1c0)
Mar 09 00:36:17 bpi-r2 kernel: [<c019b4d0>] (run_timer_softirq) from [<c01015dc>] (__do_softirq+0xec/0x370)
Mar 09 00:36:17 bpi-r2 kernel: [<c01015dc>] (__do_softirq) from [<c012c800>] (irq_exit+0xe4/0x14c)
Mar 09 00:36:17 bpi-r2 kernel: [<c012c800>] (irq_exit) from [<c0180e34>] (__handle_domain_irq+0x70/0xc4)
Mar 09 00:36:17 bpi-r2 kernel: [<c0180e34>] (__handle_domain_irq) from [<c01014a8>] (gic_handle_irq+0x5c/0xa0)
Mar 09 00:36:17 bpi-r2 kernel: [<c01014a8>] (gic_handle_irq) from [<c010df8c>] (__irq_svc+0x6c/0x90)
Mar 09 00:36:17 bpi-r2 kernel: Exception stack(0xde947f38 to 0xde947f80)
Mar 09 00:36:17 bpi-r2 kernel: 7f20:                                                       00000000 00061e34
Mar 09 00:36:17 bpi-r2 kernel: 7f40: 1e51e000 c011f720 de946000 c1103ccc c1103c6c c11bc91f c0d2bd68 00000001
Mar 09 00:36:17 bpi-r2 kernel: 7f60: 00000000 de947f94 de947f98 de947f88 c0109818 c010981c 600e0013 ffffffff
Mar 09 00:36:17 bpi-r2 kernel: [<c010df8c>] (__irq_svc) from [<c010981c>] (arch_cpu_idle+0x48/0x4c)
Mar 09 00:36:17 bpi-r2 kernel: [<c010981c>] (arch_cpu_idle) from [<c0ac77f8>] (default_idle_call+0x30/0x3c)
Mar 09 00:36:17 bpi-r2 kernel: [<c0ac77f8>] (default_idle_call) from [<c016d808>] (do_idle+0xdc/0x14c)
Mar 09 00:36:17 bpi-r2 kernel: [<c016d808>] (do_idle) from [<c016db14>] (cpu_startup_entry+0x28/0x2c)
Mar 09 00:36:17 bpi-r2 kernel: [<c016db14>] (cpu_startup_entry) from [<c011092c>] (secondary_start_kernel+0x170/0x194)
Mar 09 00:36:17 bpi-r2 kernel: [<c011092c>] (secondary_start_kernel) from [<801018ec>] (0x801018ec)
Mar 09 00:36:17 bpi-r2 kernel: ---[ end trace 593bb1a22e52195d ]---
Mar 09 00:36:17 bpi-r2 kernel: mtk_soc_eth 1b100000.ethernet eth0: transmit timed out
Mar 09 00:36:17 bpi-r2 kernel: mtk_soc_eth 1b100000.ethernet eth1: transmit timed out

Is this crash in Debian or OpenWrt, and with which kernel, 4.14 or 4.19? Same behaviour with the other kernel/system?

This crash is from Debian running kernel 4.14.104. I will try and see if I can replicate the same crash with kernel 4.19.27.