Stupid (?) NFS performance question.

Post by david parsons » Mon, 19 Jul 1999 04:00:00



I'm trying to convert some fileservers from Linux to FreeBSD (because
FreeBSD supports the Netgear ga620 and Linux, at least in a stable
kernel, doesn't) and I've encountered a distressing behavior:

When I do some rudimentary NFS speed benchmarks on a Linux client using
an NFS share from the FreeBSD box, the performance is not particularly
good.   When I use a Linux server, I get roughly 1 MB/sec reads and writes
across a 100Mbit fabric, but the FreeBSD server gives me roughly 2 MB/sec reads
and, umm, 65 KB/sec writes across a 100Mbit<->1Gbit fabric.

The machines are moderately high-end, or would have been about a year
ago:
        Linux:  394mb core, FIC PA-2013 mb, AMD K6-2-400,
                2x IBM DTTA-351350s (1 per channel)
        FreeBSD 3.2R: 256mb core, FIC PA-2013 mb, AMD K6-3-450,
                3x IBM DTTA-351680 (1 on primary, 2 on secondary),
                all with ffs+softupdates+quota, and the above-
                mentioned ga620

Trying to mount the Linux client with large blocksizes
(rsize=wsize=8192) doesn't seem to make any difference to the fairly
pathetic write performance.   I've done a cursory search of the handbook
and the FAQ, but have come up empty, so...
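
(For reference, the mount I'm testing with looks roughly like the
following on the Linux side, with the server name and export path made
up for the example:

        mount -t nfs -o rsize=8192,wsize=8192 bsdserver:/export/scratch /mnt/nfstest

so the block sizes really are being requested by the client.)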

Am I missing something obvious here?

                  ____
    david parsons \bi/ Please don't send email, but respond on the group.
                   \/

 
 
 

Stupid (?) NFS performance question.

Post by Blaz Zup » Mon, 19 Jul 1999 04:00:00



>When I do some rudimentary NFS speed benchmarks on a Linux client using
>an NFS share from the FreeBSD box, the performance is not particularly
>good.   When I use a Linux server, I get roughly 1 MB/sec reads and writes
>across a 100Mbit fabric, but the FreeBSD server gives me roughly 2 MB/sec reads
>and, umm, 65 KB/sec writes across a 100Mbit<->1Gbit fabric.

Check the duplex setting on the card and on the switch. They have to agree:
when one end thinks it is half-duplex and the other thinks it is full-duplex,
exactly the kind of behaviour you describe will happen. If you have
autonegotiation set, try hardcoding the settings on both the switch and the
FreeBSD box. You can set a 100Mb interface to full-duplex under FreeBSD with:

ifconfig xxx media 100baseTX mediaopt full-duplex
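
For the gigabit side (the ti interface on the ga620) the analogous
command would presumably be something like

        ifconfig ti0 media 1000baseSX mediaopt full-duplex

and 'ifconfig -m ti0' should list which media types and options the
driver will accept, so you can see what there is to hardcode.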

--

Medinet d.o.o., Linhartova 21, 2000 Maribor, Slovenia

 
 
 

Stupid (?) NFS performance question.

Post by david parsons » Mon, 19 Jul 1999 04:00:00





>>When I do some rudimentary NFS speed benchmarks on a Linux client using
>>an NFS share from the FreeBSD box, the performance is not particularly
>>good.   When I use a Linux server, I get roughly 1 MB/sec reads and writes
>>across a 100Mbit fabric, but the FreeBSD server gives me roughly 2 MB/sec reads
>>and, umm, 65 KB/sec writes across a 100Mbit<->1Gbit fabric.

>Check the duplex setting on the card and on the switch. They have to agree:
>when one end thinks it is half-duplex and the other thinks it is full-duplex,
>exactly the kind of behaviour you describe will happen. If you have
>autonegotiation set, try hardcoding the settings on both the switch and the
>FreeBSD box. You can set a 100Mb interface to full-duplex under FreeBSD with:

    They're both running full-dup, according to the blinking lights and
    ifconfig, but I'll certainly try hand-setting the devices later
    today.   The ifconfig for ti0 is:

    ti0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            inet 192.156.98.54 netmask 0xffffff00 broadcast 192.156.98.255
            ether 00:a0:cc:73:38:d5
            media: autoselect (1000baseSX <full-duplex>) status: active

    (I've also tried doing bonnie testing from a largish SGI host:
    500 KB/sec read, 500 KB/sec write with rsize=wsize=8192.  Ugh.)
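
    (The bonnie runs themselves are nothing exotic; roughly

        bonnie -d /mnt/nfstest -s 1024

    i.e. a 1GB scratch file on the NFS-mounted directory, with the
    mount point name made up for the example.)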

                  ____
    david parsons \bi/         With rsize=wsize=8192, I get 6 MB/sec read speeds,
                   \/  which is MUCH nicer.  Now if I can only get non-laughable
                                                                   write speeds.

 
 
 

Stupid (?) NFS performance question.

Post by Uwe Laveren » Wed, 21 Jul 1999 04:00:00


Hi David,

I had similar problems when I used Linux clients (up to 2.0.36) and a
Digital Unix NFS server. AFAIK it's a common problem with Linux NFS, and
even with some help from DEC's technical support I was not able to get
reasonable write performance.

This was one of the reasons why I switched all of our Linux machines to
FreeBSD, which performs very well as an NFS client. The last thing I heard
is that the Linux developers are working heavily on their NFS
implementation, so you might check one of the recent Linux kernels.
You could also think of switching your clients to FreeBSD. Although I
like Linux, FreeBSD was the right solution for me, and not only because
of this NFS problem.

Uwe

Quote:> I'm trying to convert some fileservers from Linux to FreeBSD (because
> FreeBSD supports the Netgear ga620 and Linux, at least in a stable
> kernel, doesn't) and I've encountered a distressing behavior:

> When I do some rudimentary NFS speed benchmarks on a Linux client using
> an NFS share from the FreeBSD box, the performance is not particularly
> good.   When I use a Linux server, I get roughly 1 MB/sec reads and writes
> across a 100Mbit fabric, but the FreeBSD server gives me roughly 2 MB/sec reads
> and, umm, 65 KB/sec writes across a 100Mbit<->1Gbit fabric.

 
 
 

Stupid (?) NFS performance question.

Post by david parsons » Thu, 22 Jul 1999 04:00:00




Quote:>Hi David,

>I had similar problems when I used Linux clients (up to 2.0.36) and a
>Digital Unix NFS server. AFAIK it's a common problem with Linux NFS, and
>even with some help from DEC's technical support I was not able to get
>reasonable write performance.

>This was one of the reasons why I switched all of our Linux machines to
>FreeBSD, which performs very well as an NFS client.

   This, alas, is not feasible in my case, for business reasons (the
   company sells Linux versions of their software) and political ones
   (I'm a long-term Linux advocate).

   And I've also measured similarly pathetic performance from our big
   Irix machines  (a 1GB bonnie run on a big dual-processor Challenge
   gave, after a few hours, 512 KB/sec reads and 512 KB/sec writes on the
   same 100Mbit fabric), which makes me think that it's something more
   subtle than a Linux-FreeBSD interaction.

                 ____
   david parsons \bi/ Maybe if I blow away softupdates and hack nfsd...
                  \/

 
 
 

Stupid (?) NFS performance question.

Post by Matthias Schuendehuet » Tue, 03 Aug 1999 04:00:00




Quote:

> I'm trying to convert some fileservers from Linux to FreeBSD (because
> FreeBSD supports the Netgear ga620 and Linux, at least in a stable
> kernel, doesn't) and I've encountered a distressing behavior:

Good decision.... :-)

Quote:> When I do some rudimentary NFS speed benchmarks on a Linux client using
> an NFS share from the FreeBSD box, the performance is not particularly
> good.   When I use a Linux server, I get roughly 1 MB/sec reads and writes
> across a 100Mbit fabric, but the FreeBSD server gives me roughly 2 MB/sec reads
> and, umm, 65 KB/sec writes across a 100Mbit<->1Gbit fabric.

On FreeBSD, all NFS exports are synchronous by default (under Linux it's
the opposite). So NFS writes are not buffered in any way, and the NFS client
has to wait for the write commitment until the data is really (!) on the disk.
That's the only behaviour the NFS specs permit.

Anyway, if you're willing to live dangerously with your data (and it's not
FreeBSD that's the problem here, but the disks and the interface/cabling; do
yourself a favour and use SCSI ...), you can switch to asynchronous writes,
which speed up NFS writes significantly.

That's done with the 'sysctl' command. Enter

        sysctl -w vfs.nfs.async=1

(and BTW read 'man sysctl')
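
To illustrate the other side of the comparison: the Linux server treats
its exports as asynchronous unless you explicitly configure them otherwise,
so an ordinary /etc/exports line like the following (hostname and path
made up here) already gives you the fast-but-risky write behaviour:

        /export         linuxclient(rw)

which is a large part of why the Linux box looks so much faster on writes.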

I compared a FreeBSD box with a K6-200 and 64MB RAM against an IBM RS/6000
43P/332MHz with 256MB RAM running AIX 4.3.2 and got comparable performance
when using asynchronous writes with soft updates. AIX also has async writes
by default (as does IRIX; on HP-UX it's optional). All with SCSI drives, of course :-).

Quote:

> The machines are moderately high-end, or would have been about a year
> ago:
>    Linux:  394mb core, FIC PA-2013 mb, AMD K6-2-400,
>            2x IBM DTTA-351350s (1 per channel)
>    FreeBSD 3.2R: 256mb core, FIC PA-2013 mb, AMD K6-3-450,
>            3x IBM DTTA-351680 (1 on primary, 2 on secondary),
>            all with ffs+softupdates+quota, and the above-
>            mentioned ga620

IDE disks with busmaster DMA switched on? See /usr/src/sys/i386/conf/LINT.
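
A quick way to check is to look at the wd probe lines in the boot
messages, for instance with

        dmesg | grep '^wd'

which should show whether the drives attached in a DMA/UDMA mode or are
stuck in PIO; if it's the latter, LINT describes the wdc 'flags' bits you
need in the kernel config to turn DMA on.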

Quote:

> Trying to mount the Linux client with large blocksizes
> (rsize=wsize=8192) doesn't seem to make any difference to the fairly
> pathetic write performance.   I've done a cursory search of the handbook
> and the FAQ, but have come up empty, so...

> Am I missing something obvious here?

>                   ____
>     david parsons \bi/ Please don't send email, but respond on the group.
>                    \/

Hope this helps - Matthias
 
 
 
