Strange packet loss

Just wanted to toss this out there in case anyone else experiences it. This is an intermittent issue that so far has been really hard to nail down, because no helpful logs or errors are being generated.

Once in a while, after a cold boot from power-on, network performance on the LAN ports is just awful. I am running Frank’s 4.16 kernel on Slackware ARM current, for what that is worth.

If you run tcpdump on a system communicating with the BananaPi-R2 you see tons and tons of TCP retransmissions. I have verified all the other network hardware by taking it out of the picture and cabling directly to the port on the Pi: same behavior as with a switch between it and the rest of the network. It’s like packets are just disappearing…

There are no errors reported by ifconfig on any interface, nor any overruns or underruns. There is nothing in syslog to indicate anything is wrong, and nothing in dmesg that differs from a working boot. All daemons start normally, etc. The only thing logged is NFS errors: if an NFS share is mounted you can see log events like “Sent X only acknowledged Y giving up.”

The problem can be detected immediately: just run something like dmesg via SSH, and if it takes time to scroll the text the issue is occurring; normally the buffer is written out instantly. No performance issues can be observed on the physical console, and CPU utilization is normal.
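For anyone wanting a quick way to quantify that check from another host, something like the following works. The address here is the R2’s lan0 address from the interface dump further down; substitute your own.

    # Rough check: a healthy boot returns almost instantly, a bad boot
    # takes several seconds as the SSH session crawls through retransmissions.
    time ssh root@192.168.16.254 dmesg > /dev/null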

This has never occurred after a “reboot”, only on a cold start. A simple reboot has corrected the problem the times I have tried it, without doing anything else. This has happened four times now in, I think, 25 power cycles; I only started making notes after the second time. I have left the device running for 9 days following a successful boot with no issues, so if it starts up okay it is stable. Given the latter, this isn’t a huge issue for me, but I wanted to put this out there in case anyone else sees it.

Are the retransmitted packets only from NFS, or different types? Which kernel do you use exactly, and which image (I don’t know of a Slackware image)? Are there damaged packets incoming/outgoing?

Maybe applying ethtool -S to eth0, eth1, lan0 through lan3, and wan can show more interesting information about the strange behavior.
For example:

    ethtool -S eth0
    ethtool -S lan0

frank-w’s thread below shows example output of the command.
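If it helps, the counters for every port can be dumped in one go with a small loop like this (port names as on the R2 images; adjust if yours differ):

    # Dump switch/MAC statistics for all R2 ports at once.
    for i in eth0 eth1 wan lan0 lan1 lan2 lan3; do
        echo "=== $i ==="
        ethtool -S "$i"
    done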

It’s not just NFS, it’s everything as far as I can tell. I was seeing the loss on other daemons like ssh as well. As I stated, it’s intermittent. I was running tcpdump on the remote side and not seeing any broken packets, but I don’t recall if I did that without the switch in the picture or not. It’s likely the switch simply did not forward broken frames, if they were present.
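For completeness, the remote-side capture was plain tcpdump, roughly along these lines (the interface name here is just a placeholder for whatever the remote machine uses):

    # Capture everything to/from the R2 on the remote host for later
    # inspection in wireshark; enp3s0 is a placeholder interface name.
    tcpdump -i enp3s0 -s 0 -w r2-capture.pcap host 192.168.16.254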

I am testing a DVB daemon here that we have ported to ARM. It’s long-running and I am currently profiling it to make sure, one, it’s stable and, two, it’s not leaking memory, so I have not been able to restart the R2 lately. Unfortunately I have to test on the real hardware because I can’t pass the USB DVB sticks to qemu emulating ARM. I’ll look at this more when I am able to restart the R2 again.

Alright, I am still seeing this. Can anyone else confirm they are getting good TCP throughput? I ask because the problem is pretty consistent. It does not matter if it’s NFS, Samba, ssh, or even just plain old netcat with /dev/zero piped to it. It also does not seem to matter whether it’s a high data rate or not, based on what I have observed in wireshark. I’ll see lost segments and a dup ack followed by a fast retransmission on both low-rate interactive ssh traffic and large file transfers; even something like this, to take SATA I/O and ssh CPU usage out of the mix:

dd if=/dev/zero status=progress | nc 192.168.16.1 3000
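On the receiving end that is just netcat listening and discarding, something like:

    # Hypothetical receive side of the test above: listen on TCP 3000 and
    # throw the data away so receiver disk I/O is not a factor.
    # (Traditional netcat wants -l -p 3000; the OpenBSD variant uses -l 3000.)
    nc -l -p 3000 > /dev/null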

Additionally, it does not matter if I am testing from my PC with a switch between me and the R2, or if I take a laptop and cable up to the port directly. Same issues.

I went back to 4.14, using Frank-W’s 4.14.59 source pulled from his git about a day ago. I wanted to see if the 2nd gmac stuff might make any difference. It does not seem to.

I have mostly been running tcpdump/wireshark on the remote machines, as storing the pcap would strain the I/O and possibly complicate the issue by hitting the CPU on the R2. I have tinkered with turning off tso, gso, and tc on the R2’s interfaces, thinking it could be an offload issue. It seems to make little or no difference. I have been doing all my testing with completely flushed iptables and the default chains all set to a policy of ACCEPT.
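For anyone wanting to repeat that, the offload toggles and the firewall state I tested with look roughly like this (a sketch, not exactly what I typed):

    # Turn off segmentation/receive offloads on one of the DSA ports;
    # repeat for the other lan ports and eth0 as needed.
    ethtool -K lan0 tso off gso off gro off

    # Flush iptables and set default policies to ACCEPT so the
    # firewall is out of the picture.
    iptables -F
    iptables -X
    iptables -P INPUT ACCEPT
    iptables -P FORWARD ACCEPT
    iptables -P OUTPUT ACCEPT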

As stated before, there is nothing in terms of errors in dmesg or any logs. You’d think nothing was wrong until you look at the throughput of any application and start digging in with wireshark.

Not that it’s all that interesting, but here is the interface info:

6: lan0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether de:ad:be:ef:00:ff brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.254/24 brd 192.168.16.255 scope global lan0
       valid_lft forever preferred_lft forever

root@router:/home/geoff# ethtool -S eth1
NIC statistics:
 tx_bytes: 57422258
 tx_packets: 373122
 tx_skip: 0
 tx_collisions: 0
 rx_bytes: 426676705
 rx_packets: 428954
 rx_overflow: 0
 rx_fcs_errors: 0
 rx_short_errors: 0
 rx_long_errors: 0
 rx_checksum_errors: 0
 rx_flow_control_packets: 0
 p05_TxDrop: 0
 p05_TxCrcErr: 0
 p05_TxUnicast: 375996
 p05_TxMulticast: 52893
 p05_TxBroadcast: 65
 p05_TxCollision: 0
 p05_TxSingleCollision: 0
 p05_TxMultipleCollision: 0
 p05_TxDeferred: 0
 p05_TxLateCollision: 0
 p05_TxExcessiveCollistion: 0
 p05_TxPause: 0
 p05_TxPktSz64: 0
 p05_TxPktSz65To127: 113442
 p05_TxPktSz128To255: 15263
 p05_TxPktSz256To511: 8052
 p05_TxPktSz512To1023: 8189
 p05_Tx1024ToMax: 284008
 p05_TxBytes: 428392521
 p05_RxDrop: 0
 p05_RxFiltering: 0
 p05_RxMulticast: 21
 p05_RxBroadcast: 3
 p05_RxAlignErr: 0
 p05_RxCrcErr: 0
 p05_RxUnderSizeErr: 0
 p05_RxFragErr: 0
 p05_RxOverSzErr: 0
 p05_RxJabberErr: 0
 p05_RxPause: 0
 p05_RxPktSz64: 21572
 p05_RxPktSz65To127: 310101
 p05_RxPktSz128To255: 13032
 p05_RxPktSz256To511: 7768
 p05_RxPktSz512To1023: 6669
 p05_RxPktSz1024ToMax: 13980
 p05_RxBytes: 57422258
 p05_RxCtrlDrop: 0
 p05_RxIngressDrop: 0
 p05_RxArlDrop: 0

root@router:/home/geoff# ethtool -S eth0
NIC statistics:
 tx_bytes: 25287868402
 tx_packets: 17922424
 tx_skip: 0
 tx_collisions: 0
 rx_bytes: 412328485
 rx_packets: 5269464
 rx_overflow: 0
 rx_fcs_errors: 0
 rx_short_errors: 0
 rx_long_errors: 0
 rx_checksum_errors: 0
 rx_flow_control_packets: 24970

root@router:/home/geoff# ethtool -i eth0
driver: mtk_soc_eth
version: 
firmware-version: 
expansion-rom-version: 
bus-info: 1b100000.ethernet
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

root@router:/# ethtool -i lan0
driver: dsa
version: 
firmware-version: N/A
expansion-rom-version: 
bus-info: platform
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

root@router:/# ethtool -S lan0
NIC statistics:
     tx_packets: 5910836
     tx_bytes: 23790296955
     rx_packets: 5107728
     rx_bytes: 308451005
     TxDrop: 0
     TxCrcErr: 0
     TxUnicast: 17412457
     TxMulticast: 13
     TxBroadcast: 1469
     TxCollision: 0
     TxSingleCollision: 0
     TxMultipleCollision: 0
     TxDeferred: 0
     TxLateCollision: 0
     TxExcessiveCollistion: 0
     TxPause: 0
     TxPktSz64: 8845
     TxPktSz65To127: 87698
     TxPktSz128To255: 72002
     TxPktSz256To511: 155790
     TxPktSz512To1023: 358812
     Tx1024ToMax: 16730792
     TxBytes: 24453850212
     RxDrop: 0
     RxFiltering: 0
     RxMulticast: 1300
     RxBroadcast: 3198
     RxAlignErr: 0
     RxCrcErr: 0
     RxUnderSizeErr: 0
     RxFragErr: 0
     RxOverSzErr: 0
     RxJabberErr: 0
     RxPause: 0
     RxPktSz64: 283638
     RxPktSz65To127: 4775499
     RxPktSz128To255: 17308
     RxPktSz256To511: 9889
     RxPktSz512To1023: 7212
     RxPktSz1024ToMax: 14183
     RxBytes: 400390173
     RxCtrlDrop: 0
     RxIngressDrop: 0
     RxArlDrop: 0

This looks strange to me… I guess you have more traffic from the lan ports to the SoC than eth0 can handle.

eth0 is GbE with 2 Gbit/s in trgmii mode (in my kernel)… with the lan ports you can get 4x 1 Gbit/s.

Sharp eye.

Are you testing with flow control off, or does Debian/Ubuntu set it off by default? These tests were done with a laptop directly cabled up, so the only traffic on the LAN at the time was my netcat tests and samba file transfers between the two hosts.

I can see TCP scaling the window up and down in the wireshark dumps. My experience with ethernet-layer flow control is limited; usually the right answer has been to turn it off and let the higher layers deal with it. When I get some time after work I’ll try disabling flow control on both machines and see what happens.
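The usual way to do that with ethtool would be something like the following on both ends, assuming the drivers expose the pause parameters:

    # Disable ethernet-layer flow control (pause frames) on an interface.
    # Needs driver support for pause parameter changes.
    ethtool -A lan0 autoneg off rx off tx off

    # Query the current pause settings.
    ethtool -a lan0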

I have not tested it, but flow-control packets are sent if the receiver gets too much traffic, to inform the sender to buffer on its side… I think disabling flow control will not fix this… only make less traffic lanx => cpu. But of course you can try it.

Hmm, well, I can’t play with turning off flow control; ethtool says the operation is not supported. Although the counter continues to increase, I can’t see the pause frames in the tcpdump output. Maybe they are not forwarded to the host’s capture buffer?
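The counter I am watching is the one from the stats above; something like this makes the increase easy to see:

    # Watch the pause/flow-control counters update once per second.
    watch -n 1 "ethtool -S eth0 | grep -iE 'pause|flow'"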

I am capturing on the DSA interface lan0, not eth0, as captures from eth0 seem invalid.

It’s also strange that it’s on the rx side, as that would imply the remote device is sending the frames. Those are much, much faster x86 machines with PCIe NICs, or in the case of the laptop, Thunderbolt. I can do the same experiment with netcat between them and get nearly double the throughput, and I don’t see all the dup acks and retransmissions. Considering I am generating the traffic on the R2, either with netcat, NFS, or Samba, I really don’t know how it could be saturating the links. Certainly these hosts can take packets as fast as the R2 can generate them.

Thanks for all the input… Any other ideas or kernel tunables I might be overlooking?

root@router:~# ethtool lan0
Settings for lan0:
Supported ports: [ TP AUI BNC MII FIBRE ]
Supported link modes:   10baseT/Half 10baseT/Full 
                        100baseT/Half 100baseT/Full 
                        1000baseT/Full 
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Advertised link modes:  10baseT/Half 10baseT/Full 
                        100baseT/Half 100baseT/Full 
                        1000baseT/Full 
Advertised pause frame use: Symmetric Receive-only
Advertised auto-negotiation: Yes
Link partner advertised link modes:  10baseT/Half 10baseT/Full 
                                     100baseT/Half 100baseT/Full 
                                     1000baseT/Half 1000baseT/Full 
Link partner advertised pause frame use: Symmetric Receive-only
Link partner advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: MII
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Link detected: yes

Can someone post the output of ethtool with no arguments on one of the DSA interfaces, and perhaps eth0? Frank-w, with your kernel should the Port be trgmii rather than MII, or is this just how ethtool reports it?

Trgmii is set via dts on cpu-ports

Yes, and that’s certainly what is in the mt7623n-bananapi-bpi-r2.dts file. But should the value for Port reflect that in the ethtool output?

I don’t know where ethtool gets this info and whether it knows trgmii… imho it’s a vendor-specific mode.

Alright, a little more info here for those following.

As an experiment I removed “pause;” from the eth0 stanza in the dts. Based on the output of ethtool, that seems to have disabled flow control on the interface. This improves performance somewhat, but I am still seeing less than 50% of the throughput I would expect from TCP on GigE.

As an experiment to see if the duplicate ack and retransmission rate would be lower at a lower speed, I plugged a Fast Ethernet USB adapter into my laptop and tried that. The result was an almost unusable link connected to lan0. Although dmesg indicates the lan0 link has come up and negotiated full duplex, it spends all of its time doing retransmissions.

I went ahead and compiled the kernel modules to support the USB adapter, plugged that into the R2, and moved the interface configuration to it. Looking at the netcat test in wireshark, it behaves much better: it starts slow, the window increases until a single block of dup acks is hit, and then it backs off some and runs stable. Very clean after that with no loss. That sure seems to indicate something is not right with the MTK ethernet or DSA layer.

I might go back to one of the 4.4 kernels without dsa and see what happens. I also need to verify these problems exist on the eth1/wan side before I do though.

I’d be curious if someone can set a partner device or switch port to 100/FD and see if they don’t encounter major problems though. I feel like I have pretty well isolated this down to either the hardware or the kernel/driver. So I’ll be a little surprised if I am alone here; but I certainly could be missing something.
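To force the partner side down to 100/FD for that test, something like this on the peer (not the R2) should do it, assuming a typical NIC that accepts forced settings:

    # Force the link partner's NIC to 100 Mbit full duplex with
    # autonegotiation off; enp3s0 is a placeholder interface name.
    ethtool -s enp3s0 speed 100 duplex full autoneg off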

I’ve got a similar issue: the BPI’s upload is slow. I get a max of 150 Mbps and it is unstable (jumping from 5 to 100 Mbps) in iperf3, while download is fine; I can get up to 900 Mbps. I’ve tried changing cables and tested iperf on multiple machines. It seems like this only happens on Frank’s 4.14 Ubuntu 18 image. I’m considering switching back to 4.4 since there are no performance losses there.
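For comparing numbers, both directions can be checked from the same client with iperf3’s reverse mode (server on the other machine); the address below is just the example host used earlier in this thread:

    # On the far end:
    iperf3 -s

    # On the BPI: upload test, then the same path in reverse (-R).
    iperf3 -c 192.168.16.1
    iperf3 -c 192.168.16.1 -R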

Could you try a higher kernel (4.14.53+)?

Also, you can try my debian image or the official one with kernel 4.14.

Please check that this patch was applied to your code base. We need to enlarge the GMAC2 driving strength on the R2 board to fix a CRC issue, thanks.

https://github.com/BPI-SINOVOIP/BPI-R2-bsp/commit/89982c49b6266d4c8bd1311f6f14c4ebbd0567bb

I looked at the code in Frank’s repo and compared it with that patch. It appears that patch has been applied.

Now, regarding this value:

    /* Set GE2 driving and slew rate */
    regmap_write(eth->pctl, GPIO_DRV_SEL10, 0x600);

I changed it to 0x600 to match what is in the patch here, and it seems to make things quite a bit worse.

We need to apply all of the settings below. I’ve tested and confirmed that they fix the CRC error issue in a kernel 4.4 environment, thanks.

    /* Set GE2 driving and slew rate */
    regmap_write(eth->pctl, GPIO_DRV_SEL10, 0x600);
    /* Set GE2 TDSEL */
    regmap_write(eth->pctl, GPIO_OD33_CTRL8, 0x5);
    /* Set GE2 TUNE */
    regmap_write(eth->pctl, GPIO_BIAS_CTRL, 0x0);
mtk_eth_soc.c:2010:	/* Set GE2 driving and slew rate */
mtk_eth_soc.c-2011-	regmap_write(eth->pctl, GPIO_DRV_SEL10, 0xa00);
mtk_eth_soc.c-2012-
mtk_eth_soc.c-2013-	/* set GE2 TDSEL */
mtk_eth_soc.c-2014-	regmap_write(eth->pctl, GPIO_OD33_CTRL8, 0x5);
mtk_eth_soc.c-2015-
mtk_eth_soc.c-2016-	/* set GE2 TUNE */
mtk_eth_soc.c-2017-	regmap_write(eth->pctl, GPIO_BIAS_CTRL, 0x0);
mtk_eth_soc.c-2018-

is what is in mtk_eth_soc.c from Frank-W’s 4.14 main branch. The other files in the patch contain the code from the patch, so I believe it was already applied or otherwise merged at some point and then changed from there. I tried changing the value from 0xa00 to the 0x600 from your patch and did a quick test. It seems to have negatively impacted throughput for me.

Can you expand on the “CRC” error? I am not seeing checksum problems in my packet captures, rather missing segments, so maybe we are fighting a different issue? Where I do see checksum issues is if I try to capture directly on the eth0 or eth1 interfaces rather than the lan[0-3] or wan interfaces. I am not sure whether that isn’t expected behavior, though.