>You might want to consider using a 1U or specialized chassis for your
>64-node PC cluster. A PC chassis might be okay if you don't have the
>budget. If you do decide to use a 1U or a specialized chassis, you
>might want to take a look at ATXBlade from Rackmount.com. It is a
>blade-like chassis. You will find more information at the following URL:
>> Need some recommendations on building a 64-node cluster for my school's
>> Biometrics lab. The funding for the project is limited (<$100K). I am
>> thinking about utilizing commercially available off-the-shelf motherboards,
>> processors, memory and PC chassis for the project. The cluster will be
>> running under Linux O.S. with Linux-based MPI applications. Any suggestions
>> on processors, motherboards and chassis will be greatly appreciated. Thanks!
Give some thought to the amount of shelving you need, which will
determine floor space. Then think about how you are going to do the
power wiring. You need at least 6 20-amp circuits for this puppy. Put
it in a room with enough A/C to keep it cool.
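Rough power math, if it helps. The 150 W/node draw and the 80%
continuous-load derating below are my assumptions, not measurements;
plug in your own numbers:

/* Back-of-envelope circuit count for the cluster's power wiring.   */
/* Assumed: ~150 W average draw per node, 120 V circuits, 20 A      */
/* breakers loaded to 80% (16 A) for continuous duty.               */
#include <stdio.h>

int main(void)
{
    int nodes = 64;
    double watts_per_node = 150.0;       /* assumed average draw     */
    double volts = 120.0;
    double usable_amps = 20.0 * 0.8;     /* 80% continuous-load rule */

    double total_watts = nodes * watts_per_node;
    double watts_per_circuit = volts * usable_amps;

    printf("total %.0f W, %.0f W per circuit -> %.1f circuits\n",
           total_watts, watts_per_circuit,
           total_watts / watts_per_circuit);
    /* ~5 circuits at 150 W/node, closer to 7 at 200 W/node */
    return 0;
}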
You can get cheap grey steel shelving and space the shelves so that
you can use any mini-tower you can get your hands on, or you can use
standard computer racks and shelves with either 1U/2U rack cases or
micro-ATX desktop cases like this, stood on edge on the rack shelves,
4 per shelf (stacking computer boxes makes it really hard to work on
anything but the top box, IMO).
By my calculation you could get 20 of these cases in one 7ft-high
rack; 19-inch rack + 5 shelves + rack power strips: about $600US.
You'd need 4 of these, and about 24 square feet of floor space.
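Same arithmetic in a few lines, if you want to re-run it with
different cases; the ~6 sq ft per rack footprint is my guess for a
19-inch rack plus working clearance:

/* Rack count and floor space for the shelf-mounted route.        */
#include <stdio.h>

int main(void)
{
    int nodes = 64;
    int per_rack = 20;                  /* 5 shelves x 4 boxes      */
    double sqft_per_rack = 6.0;         /* assumed, incl. clearance */

    int racks = (nodes + per_rack - 1) / per_rack;   /* round up    */
    printf("%d racks, about %.0f sq ft\n",
           racks, racks * sqft_per_rack);
    return 0;
}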
With the micro-ATX route the case/PSU is $50US and there are a bunch
of micro-ATX mobos for about $50; look at Newegg for them.
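A very rough budget sanity check; the $50 case/PSU and ~$50 mobo
figures are from above, everything else (CPU, RAM, disk, NIC, switch,
misc) is a placeholder you should replace with real quotes:

/* Per-node and total cost sketch vs. the <$100K budget.          */
#include <stdio.h>

int main(void)
{
    int nodes = 64;
    double case_psu = 50, mobo = 50;                 /* from above  */
    double cpu = 80, ram = 80, disk = 60, nic = 15;  /* guesses     */
    double per_node = case_psu + mobo + cpu + ram + disk + nic;

    double racks = 4 * 600;      /* 4 racks @ ~$600, from above     */
    double gbe_switch = 1500;    /* assumed 64+ ports of GbE        */
    double misc = 1000;          /* cables, KVM/serial, spares      */

    printf("per node $%.0f, total $%.0f\n",
           per_node, nodes * per_node + racks + gbe_switch + misc);
    return 0;
}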
The cheapest 1U case I've seen lately:
You can get 42 of these in a rack, and you don't need shelves. I'm
told cooling can be a problem if you max out the rack. I'd build one
rack-full (42) and operate it for a while before deciding whether I
needed 2, 3, or 4 racks to hold the whole cluster.
If you don't get the power and mechanical stuff right you'll have an
unreliable cluster that's a pain to work on.
There's an O'Reilly book on Beowulf clusters that I'm sure you already
know about.
I know that switch latency is a big deal with high-performance clusters
and that low-latency switches are expensive.
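Before spending money on a switch it's worth measuring latency
yourself with a small MPI ping-pong between two test boxes; something
like this (assumes a working MPI install, run as
mpirun -np 2 ./pingpong across two nodes):

/* Minimal MPI ping-pong: reports round-trip and one-way latency   */
/* for 1-byte messages between rank 0 and rank 1.                  */
#include <mpi.h>
#include <stdio.h>

#define REPS 10000

int main(int argc, char **argv)
{
    int rank, i;
    char byte = 0;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("round trip %.1f us, one way ~%.1f us\n",
               (t1 - t0) / REPS * 1e6,
               (t1 - t0) / REPS * 1e6 / 2);

    MPI_Finalize();
    return 0;
}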
It's possible that inter-box data rate is the bottleneck for your
cluster, and that some on-mobo GbE interfaces are much faster than
others (or than cheap PCI cards) because they have a direct memory
interface and don't go through the PCI bus; a plain 32-bit/33 MHz PCI
bus tops out around 133 MB/s theoretical, which can't keep a GbE pipe
full. Newegg lists 33 mobos (AMD) that have on-mobo GbE, from $70US to
adykes at p a n i x . c o m