I am using traffic control (HTB) with a 2.6.26 kernel. During my testing, I
am intentionally sending in more best-effort traffic than my hardware
can handle, and I can see that traffic control is dropping it, as
it should. Occasionally, though, I see a dip in the traffic in the queues I
have defined for higher-priority traffic, even though that traffic is running
below its minimum rate. When this happens, I see the following qdisc
statistics (sampled every 3 seconds):
qdisc htb 9997: root r2q 10 default 9999 direct_packets_stat 0
Sent 299188596 bytes 197913 pkt (dropped 162746, overlimits 377808 requeues 0)
rate 0bit 0pps backlog 0b 999p requeues 0
qdisc htb 9997: root r2q 10 default 9999 direct_packets_stat 0
Sent 300330198 bytes 198669 pkt (dropped 163401, overlimits 379300 requeues 0)
rate 0bit 0pps backlog 0b 1000p requeues 0
qdisc htb 9997: root r2q 10 default 9999 direct_packets_stat 0
Sent 300818616 bytes 198993 pkt (dropped 163684, overlimits 380146 requeues 0)
rate 0bit 0pps backlog 0b 571p requeues 0
qdisc htb 9997: root r2q 10 default 9999 direct_packets_stat 0
Sent 301102914 bytes 199182 pkt (dropped 163684, overlimits 380626 requeues 0)
rate 0bit 0pps backlog 0b 81p requeues 0
qdisc htb 9997: root r2q 10 default 9999 direct_packets_stat 0
Sent 303284730 bytes 200625 pkt (dropped 163684, overlimits 382126 requeues 0)
rate 0bit 0pps backlog 0b 753p requeues 0
qdisc htb 9997: root r2q 10 default 9999 direct_packets_stat 0
Sent 304795218 bytes 201624 pkt (dropped 164095, overlimits 383623 requeues 0)
rate 0bit 0pps backlog 0b 1000p requeues 0
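In case it helps anyone check my bookkeeping, here is a minimal sketch of a parser for these stanzas. The regex simply follows the output format shown above; it is not part of my tc setup.

```python
import re

# Matches one "tc -s qdisc" stanza as pasted above and pulls out the
# counters we care about: sent bytes/packets, drops, overlimits, backlog.
STANZA_RE = re.compile(
    r"Sent (?P<sent_bytes>\d+) bytes (?P<sent_pkts>\d+) pkt "
    r"\(dropped (?P<dropped>\d+), overlimits (?P<overlimits>\d+) "
    r"requeues \d+\).*?"
    r"backlog (?P<backlog_bytes>\d+)b (?P<backlog_pkts>\d+)p",
    re.DOTALL,
)

def parse_stanza(text):
    """Return the counters from one qdisc stats stanza as ints."""
    m = STANZA_RE.search(text)
    if m is None:
        raise ValueError("not a qdisc stats stanza")
    return {k: int(v) for k, v in m.groupdict().items()}

# First sample from the output above.
sample = """\
qdisc htb 9997: root r2q 10 default 9999 direct_packets_stat 0
Sent 299188596 bytes 197913 pkt (dropped 162746, overlimits 377808 requeues 0)
rate 0bit 0pps backlog 0b 999p requeues 0"""

stats = parse_stanza(sample)
# stats["backlog_pkts"] is 999 and stats["dropped"] is 162746 here.
```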
As you can see, the backlog goes from 999p to 1000p to 571p to 81p to
753p and back to 1000p. Is this dip in the backlog to be expected? Does
traffic control occasionally flush the backlog if it is sitting at around
1000p? I have verified that the incoming data, generated by iperf, is
constant and does not dip like this. Note that while the backlog is
well under 1000p (samples three through five), the dropped counter stays
frozen at 163684. Also, how can the backlog be 1000p but 0b? (I assume
that means 1000 packets and 0 bytes.)
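To make the arithmetic concrete, here are the per-interval deltas for the six samples (values copied from the output above):

```python
# Counters copied from the six samples above, in order.
sent_pkts = [197913, 198669, 198993, 199182, 200625, 201624]
dropped   = [162746, 163401, 163684, 163684, 163684, 164095]
backlog_p = [999, 1000, 571, 81, 753, 1000]

for i in range(1, len(sent_pkts)):
    d_sent = sent_pkts[i] - sent_pkts[i - 1]
    d_drop = dropped[i] - dropped[i - 1]
    d_back = backlog_p[i] - backlog_p[i - 1]
    # If the three counters were a consistent snapshot, the packets
    # enqueued during the interval would be d_sent + d_drop + d_back.
    print(f"interval {i}: sent +{d_sent}, dropped +{d_drop}, backlog {d_back:+d}")
```

Between samples three and four the backlog falls by 490 packets while the dropped counter advances by 0, so whatever drains the queue in that interval is not being counted as drops.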