But the bottleneck of clusters is typically not the speed of the ethernet link, but
the latency of the cards. Sure, you can transmit 100 Mbit/s easily over CAT5, and your
average message between processors is less than 1K, but the drivers/network
stack/network card processor will add say 50 msec of latency between the
intention to transmit and the data actually being on the wire. That's nothing for surfing
the web, but it's an eternity in CPU time. A collision on the wire delays it even
further. Take an application that needs to coordinate 1000s of times per second
between processes and you are quickly overrun by latency. Gigabit ethernet has somewhat
lower latency (due to faster card processors and more intelligent packet handling), but it
too can be a big bottleneck depending on your cluster application.
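To see why latency rather than bandwidth dominates, here is a rough back-of-envelope sketch in Python. The 50 msec latency and 1K message size are the estimates quoted above, not measurements:

```python
# Back-of-envelope: per-message cost of a small synchronous message over
# 100 Mbit/s ethernet, using the figures from the post (assumed, not measured).
latency_s = 50e-3            # driver/stack/NIC latency per message (post's estimate)
msg_bytes = 1024             # typical small inter-process message (~1K)
wire_rate = 100e6 / 8        # 100 Mbit/s expressed in bytes per second

transmit_s = msg_bytes / wire_rate   # time to actually put 1K on the wire
total_s = latency_s + transmit_s

print(f"wire time per message: {transmit_s * 1e6:.0f} us")
print(f"latency per message:   {latency_s * 1e6:.0f} us")
print(f"synchronous messages per second: {1 / total_s:.0f}")
```

With these numbers the wire time is around 80 microseconds, while the latency is 50,000 microseconds, so an application that needs thousands of coordination round trips per second gets only a few tens of them.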
If the cluster application doesn't need lots of inter-process communication, ethernet
is fine. For clusters that need high-speed, very-low-latency connections, a solution
such as Myrinet or Dolphin is employed. These are extensions of the PCI bus that act
like inter-node buses, but they're expensive.
In a common PC with a 32-bit, 33 MHz PCI bus, the maximum
transfer rate on the PCI bus is 133 MByte/sec. In theory the
maximum transfer rate of a gigabit ethernet adapter is 125
MByte/sec. In practice you won't reach these values, but nevertheless you
will not be able to bring the gigabit adapter to its limit, because
other adapters (graphics, I/O) share the PCI bus at the same
time. To increase the maximum transfer rate on the PCI bus you need
systems that come with 64-bit PCI slots and/or multiple PCI buses.
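The bandwidth figures above can be checked in a couple of lines. This is just the post's arithmetic; the 33.33 MHz value is the conventional PCI clock that yields the oft-quoted 133 MByte/sec:

```python
# Reproducing the post's peak-bandwidth arithmetic.
pci_width_bytes = 32 // 8              # 32-bit bus moves 4 bytes per cycle
pci_clock_hz = 33.33e6                 # conventional PCI clock (~33 MHz)
pci_peak = pci_width_bytes * pci_clock_hz   # ~133 MByte/sec

gige_peak = 1e9 / 8                    # 1 Gbit/s on the wire = 125 MByte/sec

print(f"PCI peak:  {pci_peak / 1e6:.0f} MByte/sec")
print(f"GigE peak: {gige_peak / 1e6:.0f} MByte/sec")
```

Since a single gigabit adapter's 125 MByte/sec nearly equals the whole bus's 133 MByte/sec, any other device sharing the bus guarantees the adapter can't run at its limit.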
What are the ramifications of getting, say, a 266 MHz processor on an Asus
board with a 100 MHz bus and running it with PC100 memory?
Anyone know where I might find a write up on this?