> I have been given the task of upgrading our 6 engineering workstations
> to gigabit ethernet. We currently have a HP4000m chassis with 8
> 8-port 10\100 modules. I would like to replace this with a 4108gl
> (includes 72 10\100 ports) and two of the 6-port 100\1000TX modules.
> I did some searching around and found some information about a 2GB
> blocking setup that the 4108gl has, but the 5300 series doesn't. How
> will this affect me? I'm not an expert when it comes to switching and
> layers and all this stuff so I'm looking for some outside opinions on
> which to go with. Is setup similar between the two? Are there any
> other pros and cons I should be aware of? It looks like the 5300 only
> has 4-port 100\1000 modules, but that shouldn't be a big deal.
> We have 3 new servers all with dual 100\1000 NICs so they should be
> able to push the data out fast enough. The engineering workstations
> are big HP workstations with fast SCSI disks, etc. The engineering
> dept mainly uses Solidworks with 1000's of files in their assemblies.
This is based on my own experience in a similar scenario.
CAD is one of the few applications that can really and truly push a layer 2
switch to its maximum. I supported a CAD network with CAD files of 150MB or
more moving around constantly, not just once in a while. It strained the OC3
NIC on the server so hard that it routinely hit an SNMP-reported 100%
utilisation (there was no gigabit solution then). The edge 100BASE-TX hubs
were useless as the collisions went through the roof; replacing them with
3Com switches solved that problem. It didn't fix the server issue, although
we did sort of "team" 2 OC3 cards successfully before a Novell service pack
killed all support for LANE. Our core switch and the edge switches were
non-blocking, so we were left with just the crappy server connection; I
could at least be happy that the network architecture itself was not the
bottleneck.
Now, with servers on GigE connections, large CAD files coming off a server
will need a switch backplane that does not create a bottleneck.
So 6 w/s at 1Gbps each need at least a 6Gbps backplane, assuming large CAD
files (or many smaller ones) being loaded simultaneously. If the
workstations are multithreading and could be sending and receiving files at
almost the same time, you could double this to 12Gbps to be safe. Then add
at least 1Gbps for the server connections, plus the other non-engineering
workstations, and you get an idea of the worst-case scenario.
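The worst-case arithmetic above can be sketched as a quick back-of-the-envelope calculation. The port counts and link speeds are the ones assumed in this thread; plug in your own numbers:

```python
# Back-of-the-envelope worst-case backplane sizing (figures from this thread).
ENG_WORKSTATIONS = 6      # engineering workstations moving to gigabit
LINK_GBPS = 1.0           # 1000BASE-T link speed per port
SERVER_LINKS = 1          # at least 1Gbps reserved for the server connections
FULL_DUPLEX_FACTOR = 2    # double if w/s send and receive simultaneously

one_way = ENG_WORKSTATIONS * LINK_GBPS          # all w/s pulling files at once
duplex = one_way * FULL_DUPLEX_FACTOR           # w/s sending and receiving
worst_case = duplex + SERVER_LINKS * LINK_GBPS  # plus the server side

print(f"one-way: {one_way} Gbps")
print(f"duplex:  {duplex} Gbps")
print(f"worst case incl. servers: {worst_case} Gbps")
```

This ignores the non-engineering workstations mentioned above, so treat the result as a floor, not a ceiling.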
To test this, study the existing traffic pattern, looking in depth at
bandwidth utilisation for the engineering workstations and servers, and get
a feel for the normal, non-graphics-heavy traffic. Do some sniffing and find
the maximum and average packet/frame size for each, and the distribution of
the same. Do the maths, and remember that a really fast PC given a GigE link
will try to work at wirespeed; you should then get a feel for whether the
switch needs to be non-blocking or not.
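Once you have frame sizes out of your sniffer, the maths is simple. A minimal sketch, assuming you have exported the bytes-on-wire per frame into a list (the sample values below are made up for illustration):

```python
from collections import Counter
from statistics import mean

# Bytes-on-wire per captured frame, exported from whatever sniffer you use.
# These sample values are invented purely to show the calculation.
frame_sizes = [64, 1518, 1518, 590, 1518, 64, 1280, 1518, 128, 1518]

print("max frame:", max(frame_sizes), "bytes")
print("avg frame:", round(mean(frame_sizes), 1), "bytes")

# Rough distribution: bucket the frames into 256-byte bins.
bins = Counter((size // 256) * 256 for size in frame_sizes)
for low in sorted(bins):
    print(f"{low:5d}-{low + 255:5d} bytes: {bins[low]} frames")
```

A capture dominated by near-maximum (1518-byte) frames is the bulk-file-transfer pattern CAD produces, and it is exactly the traffic that punishes a blocking backplane.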
Seriously consider putting the high-end engineering w/s and their main
server on their own cosy little switch if possible (you may find you don't
have to swap your chassis-based switch at all, as all your problems go away!)
Oh, and plan for the future: you want headroom in the bandwidth department,
as you never know what Solidworks/engineers may do in a year's time.
Now, as opposed to 1998, there should really be no reason why a non-blocking
solution is not both affordable and viable. If a chassis-based switch cannot
cut the mustard, use fixed/stackables instead, with a good redundant PSU
system to provide the same level of resilience (it is usually as easy to
swap a stackable switch as a blade in a chassis, and a damned sight easier
and cheaper to keep a hot spare).