Bonding Network Interface Performance

Post by Zoran Cvetkovic » Tue, 13 Jan 2004 22:11:06



Hello,

I did some testing with the bonding network interface.

I enslaved two gigabit interfaces and connected them to
two trunked ports on a gigabit switch.

I did some speed tests with several client systems
connected to the same switch, using iperf towards a
server that uses the bonded channel connection.
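
For reference, a rough sketch of how such a run could be driven from one
client, assuming iperf 2 and its -c/-P/-t options; the server name and the
stream/duration values below are placeholders, not the exact commands used:

# Hypothetical sketch: several parallel iperf (v2) TCP streams towards the
# server on the bonded link. Host name and values are placeholders.
import subprocess

SERVER = "bond-server"   # placeholder for the server with the bonded channel
STREAMS = 4              # parallel TCP streams (-P)
DURATION = 30            # seconds per run (-t)

result = subprocess.run(
    ["iperf", "-c", SERVER, "-P", str(STREAMS), "-t", str(DURATION)],
    capture_output=True, text=True, check=False)
print(result.stdout)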

Unfortunately I was not able to see any speed increase
compared to a single gigabit connection.

I can see from ifconfig that both enslaved interfaces
are in use and are receiving and sending data.
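
A small sketch of how the per-slave counters can be read from /sys rather
than eyeballed in ifconfig; it assumes a Linux bonding device named bond0,
which is a guess at the setup:

# Read per-slave byte counters to see how evenly traffic is spread.
# Assumes the bonded interface is called "bond0"; adjust as needed.
from pathlib import Path

BOND = "bond0"  # placeholder name for the bonded interface

slaves = Path(f"/sys/class/net/{BOND}/bonding/slaves").read_text().split()
for slave in slaves:
    stats = Path(f"/sys/class/net/{slave}/statistics")
    rx = int((stats / "rx_bytes").read_text())
    tx = int((stats / "tx_bytes").read_text())
    print(f"{slave}: rx={rx / 1e6:.1f} MB  tx={tx / 1e6:.1f} MB")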

So, is anyone else here using the bonding device successfully, with a
clear performance increase?

Best Regards
Zoran Cvetkovic

 
 
 

Bonding Network Interface Performance

Post by Juha Laiho » Wed, 14 Jan 2004 03:57:00



>I did some testing with the bonding network interface.

Something I've been planning to do, but haven't had time, so I'm
interested in seeing any results.

>I enslaved two gigabit interfaces and connected them to two trunked ports
>on a gigabit switch.
...
>Unfortunately I was not able to see any speed increase compared to a
>single gigabit connection.

Ok - but: on which kind of machine did you try this? Especially, to what
kind of bus are these two adapters connected, and what is the path from
that connection point to the CPU? For example, if you have a machine with a
regular PCI bus (32-bit, 33 MHz), the theoretical bus bandwidth is
4*33M = 132 Mbytes/s, whereas 1 Gbit/s is roughly 125 Mbytes/s. So it could be
the peripheral bus of your machine that is the bottleneck. Also,
what was the CPU utilization of the machines during the test?
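
As a quick sanity check of those figures (plain arithmetic, nothing measured):

# Theoretical bandwidth of a plain 32-bit/33 MHz PCI bus versus one and
# two gigabit links.
pci_bw = 4 * 33e6   # 32-bit (4 bytes) * 33 MHz ~= 132 Mbytes/s
gbit = 1e9 / 8      # 1 Gbit/s ~= 125 Mbytes/s

print(f"PCI 32-bit/33 MHz: {pci_bw / 1e6:.0f} Mbytes/s")
print(f"one gigabit link:  {gbit / 1e6:.0f} Mbytes/s")
print(f"two bonded links:  {2 * gbit / 1e6:.0f} Mbytes/s")  # already above the bus limit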

What MTU size are you using -- regular 1.5kB or "jumbo frames"; if regular,
switching to jumbo frames might help (if the bottleneck is neither the bus
nor the CPU power).
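
To make the difference concrete, a rough calculation of packet rate and
header overhead at the two MTU sizes (approximate; preamble and FCS ignored):

# At the same line rate, larger frames mean far fewer packets per second,
# and less of each frame is spent on headers.
LINE_RATE = 1e9 / 8        # 1 Gbit/s expressed in bytes per second
HEADERS = 14 + 20 + 20     # Ethernet + IPv4 + TCP header bytes, no options

for mtu in (1500, 9000):
    frame = mtu + 14                      # Ethernet frame for a full-MTU packet
    pps = LINE_RATE / frame               # frames per second at line rate
    overhead = 100.0 * HEADERS / frame    # share of each frame that is headers
    print(f"MTU {mtu}: ~{pps:,.0f} frames/s, ~{overhead:.1f}% header bytes")
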
--
Wolf  a.k.a.  Juha Laiho     Espoo, Finland

         PS(+) PE Y+ PGP(+) t- 5 !X R !tv b+ !DI D G e+ h---- r+++ y++++
"...cancel my subscription to the resurrection!" (Jim Morrison)

 
 
 

Bonding Network Interface Performance

Post by Zoran Cvetkovic » Thu, 15 Jan 2004 18:47:25



> Ok - but: on which kind of machine did you try this? Especially, to what
> kind of bus are these two adapters connected, and what is the path from
> that connection point to the CPU?

I use PCI-X at 133 MHz, 64-bit.

> For example, if you have a machine with a regular PCI bus (32-bit, 33 MHz),
> the theoretical bus bandwidth is 4*33M = 132 Mbytes/s, whereas 1 Gbit/s is
> roughly 125 Mbytes/s. So it could be the peripheral bus of your machine that
> is the bottleneck. Also, what was the CPU utilization of the machines during
> the test?

All servers are dual Xeon 2.8 GHz; the load was not significant.

> What MTU size are you using -- regular 1.5kB or "jumbo frames"; if regular,
> switching to jumbo frames might help (if the bottleneck is neither the bus
> nor the CPU power).

Standard MTU size was used.

Best regards
Zoran Cvetkovic

 
 
 

Bonding Network Interface Performance

Post by Juha Laiho » Sat, 17 Jan 2004 05:17:00




>> Ok - but: on which kind of machine did you try this? Especially, to what
>> kind of bus are these two adapters connected, and what is the path from
>> that connection point to the CPU?

>I use PCI-X at 133 MHz, 64-bit.
...
>All servers are dual Xeon 2.8 GHz; the load was not significant.

Hmm.. that looks like it should be enough.

>> What MTU size are you using -- regular 1.5kB or "jumbo frames"; if regular,
>> switching to jumbo frames might help (if the bottleneck is neither the bus
>> nor the CPU power).

>Standard MTU size was used.

Is it possible for you to try jumbo frames? Though I believe you should
already be getting better bandwidth with your current setup.
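
If you do try them, a small sketch of how to confirm the new MTU actually
took effect on the bond and on both slaves (assuming the bonding device is
called bond0; adjust to your setup):

# Print the MTU of the bond and of each enslaved interface from /sys.
from pathlib import Path

BOND = "bond0"  # placeholder; use the real bonding device name

names = [BOND] + Path(f"/sys/class/net/{BOND}/bonding/slaves").read_text().split()
for name in names:
    mtu = Path(f"/sys/class/net/{name}/mtu").read_text().strip()
    print(f"{name}: mtu {mtu}")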

Hope someone with more experience on bonding happens to notice this thread.
--
Wolf  a.k.a.  Juha Laiho     Espoo, Finland

         PS(+) PE Y+ PGP(+) t- 5 !X R !tv b+ !DI D G e+ h---- r+++ y++++
"...cancel my subscription to the resurrection!" (Jim Morrison)