BPI-R2 slow ethernet speed

I was running various tests on the machine and hadn’t noticed how slow the gigabit Ethernet interfaces were.

When sending a file from one R2 to another R2 over a 1-foot cable (point to point), the maximum transfer speed I can get is 14 MB/s … to isolate factors, I am reading and writing to RAM disks on both sides. I had previously connected both machines through my gigabit infrastructure, but “just in case”, I connected them directly.
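For reference, a RAM disk on both sides can be set up with tmpfs so that disk I/O cannot skew the network test; the mount point and size below are example values, not from the original post:

```shell
# Create a 512 MB RAM disk (example size) to rule out disk I/O.
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk

# Generate a test file in RAM on the sending side:
dd if=/dev/zero of=/mnt/ramdisk/testfile bs=1M count=256
```

The receiving side writes into its own tmpfs mount, so both ends of the transfer touch only RAM.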

This is Fast Ethernet speed, not Gigabit speed.

As an extra reference, one machine is connected to the WAN port and the other to one of the LAN ports.

The test was done with sftp. I will run more tests later to confirm.

sftp may be limited by CPU… please test without encryption, or also check CPU usage.

OK … let’s see …

netcat … almost no CPU, but half the speed of sftp.

A bare Perl script that reads a fixed 10240-byte string from memory 100,000 times and sends it over the wire to the other side, where the receiver expects exactly what was sent: about 7.5 seconds, around 135 MB/s.
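The same kind of raw-throughput test can be done without Perl, using dd and netcat; the IP address and port below are placeholders:

```shell
# Receiver (one R2): accept TCP connections on port 5000, discard the data.
nc -l -p 5000 > /dev/null

# Sender (other R2): push ~1 GB of zeros straight from memory to the wire.
# 172.16.0.2 and port 5000 are placeholder values.
dd if=/dev/zero bs=10240 count=100000 | nc 172.16.0.2 5000
# 100000 x 10240 bytes = 1,024,000,000 bytes; finishing in ~7.5 s
# corresponds to roughly 136 MB/s, matching the Perl result above.
```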

So … yes, Ethernet is working at gigabit speed. The bottleneck is located somewhere else.

Checking CPU … sftp pushes one side to near 100% much of the time, and both sides sit between 40 and 60% (I suppose while the protocol is synchronizing). So, yes, the throughput problem with SSH is the CPU. The machine can deliver very high network throughput, but only if the processes involved are not also consuming a lot of CPU.

There is something really wrong with the Ethernet speed of the R2. When I copy a large file from my desktop PC over gigabit Ethernet to the NFS share on the R2, it runs at approx. 60 MB/s, which is the limit of the destination 2.5" HDD connected to the R2. So far so good. But when I copy a large file in the opposite direction, from the R2 to the desktop PC, the speed is approx. 10 times lower, some 6-7 MB/s. That’s really awful… :frowning: There is no obvious reason for it. Nor is it the CPU: there is no visible CPU stress while the R2 sends data so slowly.

Does anybody have an idea what could be wrong and how to sort it out? Interestingly, only one direction of Ethernet data flow is affected. That might be a good point to start the investigation…

I’m running the latest Ubuntu image on R2. But it used to be the same with the previous one too.

mostly write is slower than read, so please check your target system

Yeah, you are right, but that’s not the case here. Network speed is slow only when copying from the 2.5" HDD in the R2 to the SSD in the desktop PC. Write speed of the SSD is not the bottleneck for sure. BTW, everything (i.e. desktop PC and network setup) is the same as earlier, when I had my home server built on an Intel J1900 motherboard with the same 2.5" disk. Network transfer rates were then what one would expect from a magnetic disk and a gigabit network, in both directions.

You do not observe the same issue at your R2?

Have not done any speed tests yet… much time needed for kernel work

OK Frank, I know that. Thank you for all your work and involvement in making R2 more useable!

Can’t complain about the ethernet speed. Using a Samba share on the R2 I see a writing speed of about 85 MB/sec having a rather outdated HDD. That’s quite okay for me :slight_smile:

Thank you for sharing your experience. And what about reading speed, ie. when you send data from R2 to other device in your network? My writing speed is OK too, only reading and sending data from R2 is affected.

I did:

mount -t cifs -o username=linke,password=******* //supernane/Data /mnt

rsync /mnt/BPI-Images/2017-09-13-debian-9-stretch-mate-desktop-preview-bpi-r2-sd-emmc.img.zip . --progress

1,072,890,047 100% 84.99MB/s 0:00:12 (xfr#1, to-chk=0/1)

Using scp I get only 18 MB/s. Well, it is an ARM, not an i7. The target system is an Intel Atom powered Zotac Zbox … not THAT fast either :wink:

Hi, I’ve encountered a similar problem to sunarowicz’s: the gigabit Ethernet speed when transferring data to the Pi is fine, 900-ish Mbps, but speeds in the other direction are under 1 Mbps…

I’m using Frank W.'s debian image with 4.14 kernel. Downgrading to 4.9 solved the problem, but 4.9 doesn’t have GPIO support and I’m also unable to create more than one routing table.

Any idea what might be causing this? It isn’t a duplex mismatch problem, nor is it anything other than the network, I used iperf to test the speed. If you want I can provide screenshots of the tests, but they won’t tell you anything more than I have already.

Which kernel do you use (uname -r) exactly?

If I remember correctly, it was 4.14.62, the precompiled version from August 2018. I’ve been running 4.9 for two months now. The strange thing is that if I set the NIC to 100 Mbps (or used a 100 Mbps switch), it worked fine both ways; the problem was only with gigabit.

Have you tried a version of 4.14 prior to 4.14.52?

4.9 is not affected? You use my 4.9? How about 4.19?

with ssh/scp I get ~13 MB/s when downloading a file from the Pi… same in the opposite direction… looks more like 100 Mbit/s…

cat /sys/class/net/lan0/speed
cat /sys/class/net/eth0/speed                                                                                    

maybe it’s internally (wrongly) handled as 100 Mbit/s??
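Besides the sysfs files above, ethtool reports the negotiated link parameters and would show such a mismatch directly (interface names as used above):

```shell
# A healthy gigabit link should report "Speed: 1000Mb/s" and "Duplex: Full".
ethtool eth0 | grep -E 'Speed|Duplex'

# The sysfs value should agree: 1000 on a gigabit link.
cat /sys/class/net/eth0/speed
```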

looking at top shows that the ssh process is running at 100% CPU (1 core used)… so the problem is the encryption. Maybe this can be improved with hardware encryption acceleration, but I don’t know how to set that up

traffic over HTTP is slower (~7 MB/s) and much more CPU-intensive (200%)

currently have no idea why this happens.

gigabit should reach ~115 MByte/s if not limited by CPU (1 Gbit/s is 125 MB/s raw; Ethernet/IP/TCP overhead takes it down to roughly 115 MB/s)

@Ryder.Lee / @moore / @linkerosa / @Jackzeng can you test it with your boards?

same with my test board and kernel 4.19.1 (without gmac#2)… both directions 12-13 MB/s (ssh at 100% CPU), but ftp is 9 MB/s get and 20 MB/s put

OpenSSH uses the OpenSSL library to implement the ssh command, and OpenSSL uses cryptodev to enable the hardware accelerator feature. Maybe you can refer to the thread below and use the openssl command to check performance first, thanks.
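One way to check whether OpenSSL itself is the bottleneck is its built-in benchmark; `aes-128-cbc` is just one cipher choice for illustration:

```shell
# Benchmark AES through the EVP interface; if a cryptodev/hardware engine
# is active, this path benefits from it, a plain software build does not.
openssl speed -evp aes-128-cbc

# Listing the engines shows whether a hardware engine is available:
openssl engine
```

If the reported throughput is well below the ~115 MB/s that gigabit needs, the cipher is the limit regardless of the NIC.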


ssh/scp is fastest…

Without encryption it’s the same problem…

i tested for dropped/corrupt packets after transferring, but there is no relevant error count

[09:48:59]$ scp /media/data_ntfs/backup/system_backup_20181023.tar.gz [email protected]:/tmp
[email protected]'s password: 
system_backup_20181023.tar.gz                                                                 100%  395MB  13.5MB/s   00:29    
[09:49:44]$ scp [email protected]:/tmp/system_backup_20181023.tar.gz /tmp
[email protected]'s password: 
system_backup_20181023.tar.gz                                                                 100%  395MB  12.6MB/s   00:31 

eth_stats.txt (1,3 KB)

Please note that scp copying speed may depend on eMMC/SD/SSD/etc. speed. A good way to test pure network speed is to use iperf: no disk I/O, just network.
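For reference, output like the log below is produced roughly like this (the address is a placeholder, and `-R` is the option relevant to the one-directional slowness discussed above):

```shell
# On the R2 (server side): listen on the default port 5201.
iperf3 -s

# On the other machine (client side): run a 10-second TCP test.
# 192.168.1.10 is a placeholder for the R2's address.
iperf3 -c 192.168.1.10 -t 10

# Add -R to reverse the direction (the server sends), which tests the
# R2-to-client path reported slow earlier in this thread.
iperf3 -c 192.168.1.10 -t 10 -R
```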

Here is my wifi example

Server listening on 5201
Accepted connection from, port 54066
[  5] local port 5201 connected to port 54067
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  20.0 MBytes   168 Mbits/sec
[  5]   1.00-2.00   sec  23.6 MBytes   198 Mbits/sec
[  5]   2.00-3.00   sec  22.7 MBytes   190 Mbits/sec
[  5]   3.00-4.00   sec  23.0 MBytes   193 Mbits/sec
[  5]   4.00-5.00   sec  22.5 MBytes   189 Mbits/sec
[  5]   5.00-6.00   sec  22.1 MBytes   186 Mbits/sec
[  5]   6.00-7.00   sec  22.1 MBytes   186 Mbits/sec
[  5]   7.00-8.00   sec  22.2 MBytes   186 Mbits/sec
[  5]   8.00-9.00   sec  21.0 MBytes   176 Mbits/sec
[  5]   9.00-10.00  sec  20.8 MBytes   175 Mbits/sec
[  5]  10.00-10.04  sec   838 KBytes   164 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.04  sec   221 MBytes   184 Mbits/sec                  receiver
Server listening on 5201

scp on same setup gives only ~5MiB/s which is about 50Mbit/s

P.S. Note that I use wifi