OK, since you asked for similar experiences, I will share some as
well as some comments below. In summary, I will say that we
mix and match full duplex FE with half duplex 10 Mbps all over
the place and have had no problems. We also mix half and
full duplex FE, again with no problems. The biggest problems
we have had though are duplexity mismatches between the switch
and the end node, usually a server. In the scenario below, you
seem to assume that your switch is correctly auto negotiating
the correct duplexity. In fact, the symptoms you give, high
collisions and CRC errors, indicate a duplexity mismatch. You really
should have little to no CRC errors on any connection.
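To make that failure mode concrete, here is a toy sketch (not a real simulator; the 30% "far end busy" figure is an assumed number, not a measurement) of why a full/half mismatch produces exactly those counters: collisions on the half duplex side and CRC errors on the full duplex side.

```python
import random

def simulate_mismatch(frames=1000, far_end_busy=0.3, seed=42):
    """Toy model of a full/half duplex mismatch on one link.

    The full-duplex side transmits whenever it likes, even while a
    frame is arriving.  The half-duplex side runs CSMA/CD, so when
    the far end talks over its transmission it logs a collision and
    aborts the frame.  The aborted frame arrives truncated with a
    bad FCS, which the full-duplex side counts as a CRC error --
    and since *it* saw no collision, it never retransmits.
    """
    random.seed(seed)
    collisions = crc_errors = clean = 0
    for _ in range(frames):
        if random.random() < far_end_busy:  # far end talked over us
            collisions += 1                 # counted on the half duplex side
            crc_errors += 1                 # mangled frame, full duplex side
        else:
            clean += 1
    return {"collisions": collisions, "crc_errors": crc_errors,
            "clean": clean}
```

On a correctly matched link the same accounting would show essentially zero CRC errors, which is why nonzero CRC counters are such a reliable tell.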
Charlie Fenwick wrote:
> I just got finished troubleshooting a very similar situation. We had one
> single vlan that was supporting a certain application (Roughly 50 PCs).
> Let's call the machines we were troubleshooting A, B and C. When I
> originally installed the two switches to support this application, I left
> the switch in auto speed/duplex negotiation (I wasn't sure how the
> application vendor would install the PCs or what kind of NICs they had).
> I got a call that certain parts of the application were very slow and that
> key hosts running in this application were reporting errors; let's call them
> hosts A, B and C.
> A would transmit a large amount of data to B and C. When I first
> investigated they had A set to half and B and C set to full. Many a bad
> CRC, single and multiple collisions were seen for A. The application would
> eventually balk and say that A couldn't write to network drive space on B
> and C, that A had suffered disk buffer overruns, and that data may have
> been lost (this was an image archiving application where pictures are
> snapped at A, temporarily stored in a buffer, and then shipped to B and C
> for different types of processing).
This tells me that the switch is not matched correctly with A or with
B and C. You should not be seeing any CRC errors.
> The vendor's engineer flip-flopped the duplex configuration... B and C now
> half, and A was full. The application would eventually balk again saying
> that B and C were not responding. When I plugged a sniffer in, I saw a
> large number of retransmissions to, and late ACKs coming from, B and C.
> ACKs would be like .1, .3 or .5 ms on the average and then would skyrocket
> to 197ms or higher.
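Those numbers are worth a side note: a jump from sub-millisecond ACKs to roughly 200 ms is the classic signature of a TCP timer firing once frames start getting dropped, since common stacks of this era ran their retransmission and delayed-ACK timers on ~200 ms ticks. A minimal sketch (assumed 200 ms initial RTO, classic doubling backoff) of how the gaps grow:

```python
def rto_schedule(initial_rto=0.2, retries=5, cap=60.0):
    """Classic exponential backoff of a TCP retransmission timer.

    Assumed numbers: 200 ms initial RTO, doubling on each retry,
    capped at 60 s.  Returns the successive waits in seconds, which
    is why observed gaps jump from sub-millisecond straight to
    ~200 ms and beyond once packets are being lost.
    """
    rto, waits = initial_rto, []
    for _ in range(retries):
        waits.append(rto)
        rto = min(rto * 2, cap)
    return waits
```

The point is that a 197 ms "late ACK" is almost never a slow receiver; it is a lost packet being recovered on a timer.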
> The problem would not be so bad when all three devices were in the same
> My theory was this (and we have taken as the explanation)...
> Full duplex takes CS and CD out of Ethernet, because there are logically
> only 2 devices on a bridged segment. Meaning in the first situation, B and
> C were sending ACKs and data back to A (and the switch was switching) so
> fast, that A could not really deliver the data the application needed to
> send effectively. (B and C could transmit when they were receiving as
I suppose you could make a case that the data was streaming to B and C so
fast that the acks could not make it back to A because A was sending data
to B or C and hogging the wire in that direction. Not likely, but maybe.
> where A could not). When reversed (B and C half and A full), the late ACKs
> and retransmissions were due to A sending data as fast as it could (not
> CS-ing and not looking for collisions) and B and C behaving like
> traditional Ethernet. Not transmitting when receiving and observing
> collisions. Therefore, the ACKs never made it back to A (or arrived late)
> and A in turn figured there was a problem.
The problem with this theory is that it appears that B and C were each involved
in a single session with A. That means the only reverse traffic they needed
to send was small ACKs, so it would essentially be a half duplex conversation
at the TCP level anyway. Again, it is much more likely
that the packets were dropped due to the mismatch.
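To put rough numbers on that: with assumed figures (1460-byte MSS, 40-byte pure ACKs, a delayed ACK every second segment; none of these come from the thread itself), a one-way bulk transfer carries only a trickle of traffic on the reverse path.

```python
def bulk_transfer_bytes(data_bytes, mss=1460, hdr=40, ack_every=2):
    """Rough byte accounting for a one-way bulk TCP transfer.

    Assumed numbers: 1460-byte MSS, 40-byte TCP/IP headers, one
    pure ACK per two data segments (classic delayed ACK).  Returns
    bytes on the forward (data) path and reverse (ACK) path.
    """
    segments = -(-data_bytes // mss)     # ceil division
    acks = -(-segments // ack_every)
    forward = segments * (mss + hdr)     # data segments incl. headers
    reverse = acks * hdr                 # pure ACKs
    return forward, reverse
```

For a 10 MB transfer the reverse path works out to a little over 1% of the forward path, which is why such a session is effectively half duplex no matter what the link is set to.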
> My suggestion to the vendor and customer, who was having the problem: Set
> all PCs to half or set all PCs to full. Because the PCs had different NIC
> manufacturers and the link between the switches was 100 Mbps (the same speed
> as all the PCs plugged into them), I leaned towards all being half.
> I would suggest only using full duplex between ports when you have...
> A) A dedicated one to one relationship that you can identify, for example:
> A to B transmitting a lot of data back and forth or in the above example A,
> B and C. (All devices plugged into a switch).
> B) All workstations running full duplex plugged into switches.
> C) Router to switch.
> D) Switch to switch.
> I would try not to have a situation where I had a mix and match of full and
> half on workstations/servers in a single vlan. Only exception is maybe if
> you have a many to one relationship (like 100 workstations /half/ to 1
> server /full/ all the same speed). Or other high utilization hosts such as
> a router interface mentioned above.
> Having 100 Mbps/full to 10 Mbps/half in the same vlan really opens up a
> situation where the 10 Mbps workstations can get blasted and not operate well.
This is a very common configuration, and it should work just fine.
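A toy sketch of why it works (round assumed numbers: 1500-byte frames, an ideal store-and-forward switch with buffer room for the whole burst): a burst arriving from a fast server simply queues in the switch and drains out the slow client port, so nothing gets "blasted."

```python
def burst_timing(frames, frame_bits=12_000, in_mbps=100, out_mbps=10):
    """Toy store-and-forward model of a speed mismatch through a switch.

    Assumed numbers: 1500-byte (12,000-bit) frames, a 100 Mbps input
    port, a 10 Mbps output port.  Returns the burst arrival time, the
    time to drain it out the slow port, and the peak number of frames
    sitting in the switch buffer while the burst arrives.
    """
    arrival = frames * frame_bits / (in_mbps * 1e6)
    drain = frames * frame_bits / (out_mbps * 1e6)
    # During arrival the slow port has forwarded only out/in of the
    # burst; the remainder is held in the buffer.
    peak_queue = frames - int(frames * (out_mbps / in_mbps))
    return arrival, drain, peak_queue
```

Whether TCP flow control keeps a sender from eventually overrunning a finite buffer is a separate question, but absorbing rate mismatches like this is exactly the job a switch is built for.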
> We set all the PCs to half and the application performed very well. I
> expect if everything in the Vlan was set to full, it would have also...
> however I had concerns about oversubscribing the cross connect, between the
> switches and whether or not the PCs would really benefit anyway, due to the
> nature of the application.
This tells me that the switch was auto negotiating to half. The real
test would be to set A, B and C to full and see what happens. If my
theory is correct, the performance should tank.
Also, if you carry your other theories over to this situation, you should
still have problems. You have still eliminated CSMA/CD from the
equation here, in that you have multiple switched connections talking
to the one. You still have B and C sending ACKs at the same time
data is being sent from A.
> The bottom line is (IMHO) full duplex is not necessarily better than half,
> you really have to analyze what it is you are trying to accomplish and
> where your traffic flows are.
I am in agreement here. Full only helps when you have symmetric traffic
patterns. This would not appear to be the case in your application.
But, it should not hurt either, unless something is malfunctioning or misconfigured.
> I would welcome any other experiences and/or comments with mixing and
> matching full/half duplex, for my customer will want a detailed explanation
> of what happened to their application, so they can take it back to the vendor.
> Anand, in your particular situation I would try running the server at
> 100 Mbps/half. I would imagine you will see the 10 Mbps clients' problems
> lessen/disappear and the backup across the network remain pretty quick.
> Charlie Fenwick
> RPM Consulting
> On Tuesday, November 24, 1998 1:34 PM, Anand Modak [SMTP:amo...@mcps.k12.md.us] wrote:
> > I have a Cisco Cat5000 running 3.2.1b. Plugged into this switch is a
> > server that has a Netflex 10/100 Autosensing NIC and the latest NIC driver.
> > The server runs Oracle on NT. When the server and switch are set to
> > Full Duplex a client connection to the server is very slow. Switch port
> > stats show a large number of runts. Sniffer analyzer trace shows lots of
> > retransmissions.
> > Now I set the connection on both sides to 10 Mbits Half Duplex and the client
> > zooms along - no runts, few retransmissions. But although the client is
> > smoother, strangely enough, an NT backup from this server across the network
> > increases in time from 1 1/2 hours to 4 1/2 hours.
> > My initial guess is that the physical connection is not adequate for Fast
> > Ethernet. But it's 10 feet from the switch to the server. Is it possible
> > that a straight-through patch cable is not adequate because it doesn't
> > use 2 pairs for pins 1, 2, 3 and 6? Any other ideas? I have tried letting one
> > or the other side autosense, and I've tried forcing both sides to agree
> > on speed and duplex.
> > tia
> > anand
> > Anand Modak WAN Engineer Montgomery County Public Schools
> > 850 Hungerford Dr, Rockville MD 20850 (301)517-8233 amo...@mcps.k12.