I was running various tests with the machine and hadn't noticed how slow its gigabit Ethernet interfaces were.
When sending a file from one R2 to another R2 over a 1-foot cable (point to point), the maximum transfer speed I can get is 14 MB/s. To isolate factors, I am reading and writing to RAM disks on both sides. I had previously connected both machines through my gigabit infrastructure, but, just in case, I connected them directly instead.
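For anyone reproducing this: the RAM disks can be plain tmpfs mounts. The mount point and size here are just examples, not necessarily what I used:

    mkdir -p /mnt/ram
    mount -t tmpfs -o size=512m tmpfs /mnt/ram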
That is Fast Ethernet speed, not gigabit speed.
As an extra reference, one machine is connected to the WAN port and the other to one of the LAN ports.
A bare Perl script that reads a fixed 10240-byte string from memory 100000 times and sends it over the wire, with the receiver expecting exactly what was sent, finishes in about 7.5 seconds. That is roughly 1 GB (100000 x 10240 bytes) in 7.5 s, around 135 MB/s.
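For reference, a minimal sketch of such a test (not my original script; the address 192.168.1.2 and port 5001 are placeholders):

    # --- receiver.pl: accept one connection and drain it, counting bytes ---
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $srv = IO::Socket::INET->new(
        LocalPort => 5001,
        Listen    => 1,
        ReuseAddr => 1,
    ) or die "listen: $!";
    my $conn = $srv->accept or die "accept: $!";
    my ($buf, $total) = ('', 0);
    while (my $n = sysread($conn, $buf, 65536)) {
        $total += $n;
    }
    print "received $total bytes\n";

    # --- sender.pl: send a fixed 10240-byte string 100000 times and time it ---
    use strict;
    use warnings;
    use IO::Socket::INET;
    use Time::HiRes qw(time);

    my $sock = IO::Socket::INET->new(
        PeerAddr => '192.168.1.2',   # placeholder: address of the receiving R2
        PeerPort => 5001,
        Proto    => 'tcp',
    ) or die "connect: $!";
    my $chunk = 'x' x 10240;         # fixed 10240-byte string kept in memory
    my $t0 = time;
    syswrite($sock, $chunk) for 1 .. 100000;   # blocking socket, so each write completes fully
    close $sock;                     # kernel buffers may still be draining at this point
    printf "~%.0f MB/s\n", 100000 * 10240 / (time - $t0) / 1e6;

This exercises only the NIC and the TCP stack, no disk and very little CPU, which is the point of the test.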
So, yes, the Ethernet is working at gigabit speed. The bottleneck is somewhere else.
Checking the CPU: sftp drives one side to near 100% much of the time, and both sides to between 40 and 60% (I suppose while the protocol is synchronizing). So, yes, the throughput problem with SSH is the CPU. The machine can deliver very high network throughput, but only when the processes involved are not CPU-hungry.
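One way to double-check this (standard OpenSSH options; the file name and host are placeholders) is to repeat the copy with ciphers of different cost and watch whether the throughput moves with them:

    scp -c aes128-ctr bigfile user@r2:/tmp/
    scp -c chacha20-poly1305@openssh.com bigfile user@r2:/tmp/

If the transfer rate changes noticeably with the cipher, the crypto work on the CPU is the limiting factor.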
There is something really wrong with the Ethernet speed of the R2. When I copy a large file from my desktop PC over gigabit Ethernet to the NFS share on the R2, it runs at roughly 60 MB/s, which is the limit of the destination 2.5" HDD connected to the R2. So far so good. But when I copy a large file in the opposite direction, from the R2 to the desktop PC, the speed is about ten times lower, some 6-7 MB/s. That's really awful... There is no obvious reason for it, and it isn't the CPU either: there is no visible CPU stress while the R2 is sending data so slowly.
Does anybody have any idea what could be wrong and how to sort it out? Interestingly, only one direction of the Ethernet data flow is affected. That might be a good point to start the investigation...
I'm running the latest Ubuntu image on the R2, but it was the same with the previous one too.
Yeah, you are right, but that's not the case here. The network speed is slow only when copying from the 2.5" HDD in the R2 to the SSD in the desktop PC, and the write speed of the SSD is definitely not the bottleneck. BTW, everything else (i.e. the desktop PC and the network setup) is the same as earlier, when I had a home server built on an Intel J1900 motherboard with the same 2.5" disk. Back then the transfer rates were what one would expect from a magnetic disk and a gigabit network, in both directions.
Can't complain about the Ethernet speed. Using a Samba share on the R2 I see a write speed of about 85 MB/s with a rather outdated HDD. That's quite okay for me.
Thank you for sharing your experience. And what about the read speed, i.e. when you send data from the R2 to another device in your network? My write speed is OK too; only reading from and sending data out of the R2 is affected.
Hi, I've encountered a similar problem to sunarowicz's: the gigabit Ethernet speed when transferring data to the Pi is fine, 900-ish Mbps, but speeds in the other direction are under 1 Mbps...
I'm using Frank W.'s Debian image with the 4.14 kernel. Downgrading to 4.9 solved the problem, but 4.9 doesn't have GPIO support, and I'm also unable to create more than one routing table on it.
Any idea what might be causing this? It isn't a duplex mismatch, and it isn't anything other than the network itself: I used iperf to test the speed, so disks and file-transfer protocols are out of the picture. If you want I can provide screenshots of the tests, but they won't tell you anything more than I already have.
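For reference, this is the kind of test I mean. I used plain iperf, but iperf3 (shown here; the address is a placeholder) works the same way for this purpose:

    iperf3 -s                    # on the Pi
    iperf3 -c 192.168.1.10       # PC -> Pi, the direction that is fine
    iperf3 -c 192.168.1.10 -R    # Pi -> PC, the direction that collapses

The -R flag reverses the direction, so both cases are measured from the same client.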
If I remember correctly it was 4.14.62, the precompiled version from August 2018. I've been running 4.9 for two months now. The strange thing is that if I set the NIC to 100 Mbps (or used a 100 Mbps switch), it worked fine both ways; the problem only appeared at gigabit.
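If anyone wants to reproduce the 100 Mbps workaround, pinning the NIC with ethtool should do it (eth0 is an assumption; the interface name may differ on your image):

    ethtool -s eth0 speed 100 duplex full autoneg off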
Maybe it's internally (wrongly) handled as 100 Mbit/s?
Looking at top shows that the ssh process is running at 100% CPU (one core used)... so the problem is the encryption. Maybe this can be improved with hardware encryption acceleration, but I don't know how to set that up.
Traffic over HTTP is slower (~7 MB/s) and much more CPU-intensive (200%).
Currently I have no idea why that happens.
Gigabit should reach ~115 MByte/s if not limited by CPU: 1 Gbit/s is 125 MB/s raw, and after Ethernet, IP and TCP overhead roughly 115-118 MB/s of payload remains.
OpenSSH uses the OpenSSL library to implement the ssh command, and OpenSSL can use cryptodev to enable the hardware accelerator feature. Maybe you can refer to the thread below and use the openssl command to check performance first, thanks.
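For example, with generic OpenSSL benchmark commands (nothing board-specific assumed):

    openssl speed aes-128-cbc          # pure software implementation
    openssl speed -evp aes-128-cbc     # EVP path, which uses engines/accelerators when configured

If the -evp numbers are no higher than the plain ones, the hardware accelerator is not actually being used.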