RH6.2


Post by Matt & Ang



Hi all

What are the changes from 6.1 to 6.2?

Are they significant?

Thanks for any responses
Matt Stibbard


RH6.2

Post by Prasanth Kumar



> Hi all

> What are the changes from 6.1 to 6.2?

> Are they significant?

> Thanks for any responses
> Matt Stibbard

Go to Red Hat's website and read the first chapter of each of the
manuals for a detailed description. The main additions seem to be the
inclusion of security and encryption software such as Kerberos and
GnuPG.

--
Prasanth Kumar



1. Comparing NT4 & RH6.x TCP/IP4 stack latencies; performance probs w/ RH6.x

A research group at Cal Poly (SLO) has been investigating the network
performance of various client PC's.  We have developed instrumentation to
measure the latency time for a data payload to be transmitted with socket
calls via the TCP/IP4 stack to a 3Com 3c905C-TX-M FastEthernet NIC.  The
instrumentation consists of a logic analyzer interfaced to the parallel port
and the PCI bus of the PC under test.  We have performance test results for
both the Windows NT 4.0 (SP3) and Redhat Linux 6.x operating systems.  The
details of the instrumentation and performance testing technique can be
found in the following PDF file:

    http://www.ee.calpoly.edu/~jharris/senior%20projects/Jim_Fischer.pdf

We have observed some interesting behavior in the RH6.x TCP/IP4 stack
results. Specifically, the payload start and stop latency results -- i.e.,
the time it takes for the first and last bytes of the data payload to reach
the PCI bus -- yield a performance graph that presents a pulse train
characteristic. (This pulse train does not appear when the NT4 results are
graphed.) This behavior is also repeatable (i.e., we get virtually the same
results each time we run the tests). Some examples of the test results we've
obtained can be found at:

    http://www.ee.calpoly.edu/~jfischer/

As can be seen in these results, the RH6.x stop latency time frequently
peaks out at ~0.166 seconds for particular payload sizes. For other payload
sizes, the RH6.x and NT4 TCP/IP4 stacks exhibit similar performance
characteristics.

We suspect that the pulse-shaped RH6.x response is being caused by some
scheduling mechanism in the kernel, or by an inadequately configured TCP/IP
buffer cache, etc., but are not exactly sure.  We would appreciate any
comments on these results, as well as any suggestions for tuning the RH6.x
kernels / TCP/IP stacks to improve their performance. (FWIW, we are
currently working with RH6.2, kernel version 2.2.14-5.0.)

Jim Fischer
MSEE Grad Student
Cal Poly (SLO)

2. Screensaver capture

3. Upgrade RH6.1 to RH6.2, locks on module load

4. R5 install problems on Motorola PowerStack

5. moving password files from one rh6.2 machine to another rh6.2 machine

6. Linux META-FAQ (part 1/1)

7. Fail to install RH6.0 via Winnt-FTP reading from RH6.0 CDROM?

8. mailx & setgid problem

9. Bonding broke with RH6.0 to RH6.1?!

10. Workaround for RH6.0 - RH6.1 Upgrade Problem

11. RH6.1 better than RH6.0 ?

12. Procmail broke upgrading from RH6.0 -> RH6.1

13. ppp0 default route after upgrading RH6.0->RH6.1