Clustering - Network Speed vs Local Bus Speed

Post by Diver_8 » Mon, 29 Jul 2002 08:02:11



Hiya - I've read that most of the people and organizations doing serious
clustering with Linux are using gigabit Ethernet on the backplane. How does
gigabit compare, speed-wise, with the speed of the local bus? Just for the
sake of argument, say we're talking about your average x86 box, maybe with a
133-200 MHz bus speed. I'm sure it's slower than the local bus, but I'm wondering
how much slower, and how much of a bottleneck it creates. Taking it a step
further, how would 100 Mb/s Ethernet compare with the local bus? Would it be
proportionately 10x slower than gigabit, or are there other factors?
 
 
 

Clustering - Network Speed vs Local Bus Speed

Post by Choprbo » Wed, 31 Jul 2002 07:09:31



> Hiya - I've read that most of the people and organizations doing serious
> clustering with Linux are using gigabit Ethernet on the backplane. How does
> gigabit compare, speed-wise, with the speed of the local bus? Just for the
> sake of argument, say we're talking about your average x86 box, maybe with a
> 133-200 MHz bus speed. I'm sure it's slower than the local bus, but I'm wondering
> how much slower, and how much of a bottleneck it creates. Taking it a step
> further, how would 100 Mb/s Ethernet compare with the local bus? Would it be
> proportionately 10x slower than gigabit, or are there other factors?

First of all, which "local bus" are you talking about? The 133-200 MHz speeds you
mention would be the CPU<->chipset<->memory variety. You're talking 12.8 Gb/s on a
64-bit bus! But these are very short buses and are incapable of being linked machine
to machine (due to signal propagation and coordination problems that were never
envisioned for this type of application). Moving out to 32-bit, 33 MHz PCI, you're
still talking about a Gb/s of theoretical bandwidth (which would take custom-made
hardware and drivers to extend between machines).
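
The arithmetic above can be checked with a quick back-of-the-envelope sketch
(Python purely for illustration; these are theoretical peaks, not measurements):

```python
# Theoretical peak bandwidth of a parallel bus: width (bits) x clock (Hz).

def peak_gbps(width_bits, clock_hz):
    """Peak bandwidth in Gbit/s."""
    return width_bits * clock_hz / 1e9

fsb = peak_gbps(64, 200e6)   # 64-bit front-side bus at 200 MHz
pci = peak_gbps(32, 33e6)    # classic 32-bit / 33 MHz PCI
gige = 1.0                   # Gigabit Ethernet line rate, Gbit/s

print(f"FSB:  {fsb:.1f} Gb/s")    # 12.8 Gb/s
print(f"PCI:  {pci:.2f} Gb/s")    # ~1.06 Gb/s
print(f"GigE: {gige:.1f} Gb/s ({fsb / gige:.1f}x slower than the FSB)")
```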

But the bottleneck of clusters is not typically the speed of the Ethernet link, but
the latency of the cards. Sure, you can transmit 100 Mb/s easily over CAT5, and your
average message between processors is less than 1 KB, but the drivers, network
stack, and network card processor are going to add on the order of 50 microseconds
of latency between the intention to transmit and the data actually being on the wire.
That's nothing for surfing the web, but it's an eternity in CPU time. A collision on
the wire delays it even further. Take an application that needs to coordinate
thousands of times per second between processes and you are quickly overrun by
latency. Gigabit Ethernet has somewhat lower latency (due to faster card processors
and more intelligent packet handling), but it too can be a big bottleneck depending
on your cluster application.
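
To see why latency rather than raw bandwidth dominates for small messages, here is
a toy alpha-beta cost model, t = latency + size/bandwidth; the 50 microsecond
latency and 1 KB message size are illustrative assumptions, not measured figures:

```python
LATENCY = 50e-6        # assumed per-message software/NIC latency, in seconds
MSG_BITS = 1024 * 8    # a typical small 1 KB inter-process message

for name, bps in [("100 Mb/s", 100e6), ("1 Gb/s", 1e9)]:
    t = LATENCY + MSG_BITS / bps   # alpha-beta model: t = alpha + n / beta
    print(f"{name}: {t * 1e6:6.1f} us per message")
```

Ten times the bandwidth buys only roughly a 2x improvement per message here,
because the fixed latency term dominates once messages are small.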

If the cluster application doesn't need lots of inter-process communication, Ethernet
is fine. For clusters that need high-speed, very low latency connections, a solution
such as Myrinet or Dolphin is employed. These are extensions of the PCI bus that act
like inter-node buses, but they're expensive.

Adrian


 
 
 

Clustering - Network Speed vs Local Bus Speed

Post by Bernd Huebenet » Thu, 01 Aug 2002 14:41:40



> Hiya - I've read that most of the people and organizations doing serious
> clustering with Linux are using gigabit Ethernet on the backplane. How does
> gigabit compare, speed-wise, with the speed of the local bus? Just for the
> sake of argument, say we're talking about your average x86 box, maybe with a
> 133-200 MHz bus speed. I'm sure it's slower than the local bus, but I'm wondering
> how much slower, and how much of a bottleneck it creates. Taking it a step
> further, how would 100 Mb/s Ethernet compare with the local bus? Would it be
> proportionately 10x slower than gigabit, or are there other factors?

Hi,

in a common PC with a 32-bit, 33 MHz PCI bus, the maximum transfer rate on the
PCI bus will be 133 MB/s. In theory, the maximum transfer rate of a Gigabit
Ethernet adapter is 125 MB/s. In practice you won't reach these values, but
even so you will not be able to push the Gigabit adapter to its limit, because
other adapters (graphics, I/O) share the PCI bus at the same time. To increase
the maximum transfer rate on the PCI bus you need systems that come with 64-bit
PCI slots and/or multiple PCI buses.
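
The shared-bus point can be made concrete with the same kind of arithmetic (the
40 MB/s of competing traffic below is an arbitrary assumption for illustration):

```python
pci_peak = 32 * 33e6 / 8 / 1e6   # 32-bit/33 MHz PCI in MB/s (~132; often quoted as 133)
gige_need = 1e9 / 8 / 1e6        # Gigabit Ethernet line rate -> 125 MB/s

other = 40.0                     # assumed MB/s of disk/graphics traffic on the same bus
headroom = pci_peak - other
print(f"PCI peak: {pci_peak:.0f} MB/s; GigE needs {gige_need:.0f} MB/s")
print(f"after {other:.0f} MB/s of other traffic, only {headroom:.0f} MB/s remains")
```

With any real competing traffic, the remaining PCI headroom falls below the
125 MB/s the Gigabit adapter would need, which is why 64-bit slots or multiple
buses are required to saturate the link.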

Bye,
Bernd