Performance of IP forwarding over both GMACs (eth0 and eth1)

Below is the performance of IP forwarding over the GMACs (eth0 and eth1):

  1. Single-direction flow

  2. Bidirectional flow
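These flows were presumably generated with a traffic tool such as iperf; the commands below are only a sketch of how a single-direction and a bidirectional run could be set up (the target address is a placeholder, not the exact setup used):

# single-direction flow: traffic from the eth0 side towards a host behind eth1
iperf -c 192.168.10.2 -t 60
# bidirectional flow: -d (dual test) sends traffic in both directions at once
iperf -c 192.168.10.2 -t 60 -d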

Statistics on R2 board:
------------------
root@LEDE:/proc/irq/70# ifconfig
eth0      Link encap:Ethernet  HWaddr EE:86:59:C8:EB:3A
          inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::ec86:59ff:fec8:eb3a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:26822854 errors:0 dropped:0 overruns:0 frame:0
          TX packets:37617944 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:35021520367 (32.6 GiB)  TX bytes:49849909918 (46.4 GiB)
          Interrupt:68

eth1      Link encap:Ethernet  HWaddr 8E:3B:FE:FC:EA:BE  
          inet addr:192.168.10.1  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::8c3b:feff:fefc:eabe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:37621811 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26821988 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:49699772484 (46.2 GiB)  TX bytes:35127932237 (32.7 GiB)
          Interrupt:68

A user did some tests for 2.4G & 5.8G, for reference only:

5.8G test:

5.8G wireless test

2.4G test:

2.4G wireless test

Impressive! Does it use HW NAT in the tests?

Just perfect for GPON :wink:

Best regards, Karol

A simple idea to check if HWNAT is used:

cat /proc/interrupts

And check whether the network interrupt counters increase rapidly during the test.
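For example, a rough way to do that is to snapshot the counters before and after a run and compare them (the grep pattern is just an assumption about how the interfaces show up on this board):

grep eth /proc/interrupts   # note the counters before the test
# ... run the throughput test ...
grep eth /proc/interrupts   # counters barely moving would hint at HW offload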

Besides, could you show the full output of ifconfig -a and swconfig dev {SwitchName} show, if possible? Just curious about it.

Please also show us the CPU usage during the tests.

It doesn’t use HW NAT; more logs are attached: a.log (6.4 KB)

When testing the performance, it takes about 20% CPU usage. And if the two Ethernet interrupts are assigned to different CPUs (1 and 2), we get better performance than when both are on CPU 0.
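For reference, pinning the interrupts to specific cores is normally done through smp_affinity; the IRQ numbers below are only illustrative (the ifconfig output above reports interrupt 68 and the prompt shows /proc/irq/70), not necessarily the exact ones used:

# bitmask 2 = CPU1, bitmask 4 = CPU2
echo 2 > /proc/irq/68/smp_affinity
echo 4 > /proc/irq/70/smp_affinity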

Ok, very nice.

I’m curious how the performance (throughput and CPU usage) would be affected by using HW NAT.

Are you able to check it?

Could you also run tests with 1518 B, 512 B, and 64 B packet sizes? Then the performance could be compared to other platforms.
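One common way to approximate those frame sizes is iperf in UDP mode with a fixed payload length (frame size minus 18 bytes of Ethernet header/FCS and 28 bytes of IP/UDP headers); the target address here is just a placeholder:

# ~1518 B frames: 1472 B UDP payload
iperf -c 192.168.10.2 -u -b 1000M -l 1472 -t 60
# ~512 B frames: 466 B UDP payload
iperf -c 192.168.10.2 -u -b 1000M -l 466 -t 60
# ~64 B frames: 18 B UDP payload
iperf -c 192.168.10.2 -u -b 1000M -l 18 -t 60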

Hi Karol, the HW NAT isn’t supported yet; we need a plan for it. When it’s available, I will show you the test results.

Ok, thanks!

Could you run the tests for 1518 B, 512 B, and 64 B packet sizes without HW NAT (as it is now)?

What is the hardware NAT support going to look like, and when will it arrive? Will there be any image that doesn’t come with X?

We are trying to support HNAT based on LEDE and will let you know when it’s available.

Will you include it in the kernel on GitHub if it’s working?

@garywang @Super_De_Algar @Karol_Bizewski

Below is the performance of IP forwarding based on Hardware NAT:

The source code is coming soon, but I haven’t had time to run the benchmarks for 64-byte, 512-byte, and 1024-byte packets recently, and it looks like the test results are not as good as we expected.

After changing the test PCs, the performance is close to line rate (1 Gbps) in the bidirectional test.

See more information at the link below:

What is that .exe thing? DOS or something? That looks like it’s out of another century!

You can refer to the link below to learn what iperf is: https://documentation.meraki.com/zGeneral_Administration/Tools_and_Troubleshooting/Troubleshooting_Client_Speed_using_iPerf

The iperf…exe is the tool iperf, which is an executable file under DOS. I will share it with you if you need it.
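As a quick illustration of how it is used from the console (the address is a placeholder):

iperf.exe -s                      (run on one PC as the server)
iperf.exe -c 192.168.1.2 -t 60    (run on the other PC as the client)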

Well it’s not a DOS program - it’s a Windows console program. Performance looks very promising!