ufs log size - performance

Post by Martin Paul » Thu, 11 Jul 2002 21:43:10

Here's what I noticed when doing some ufs/disk performance comparisons.
The test was done on a Sun Fire V880, ST296605FC disk (36 GB), Solaris 9.

I measured the "real" time reported by ptime while untarring an approximately
64 MB tar archive ("tar xf /tmp/108528-15.tar"), with three runs per test
environment. I won't call this a "benchmark"; it's just an access pattern
that might occur regularly (for me), and it's more representative of e.g.
home directory usage than dd/mkfile.

I wanted to measure the difference between ufs without and with logging,
on differently sized partitions (1 GB, 8 GB, 16 GB, 32 GB). Results:

                     Run 1:   Run 2:   Run 3:
   1 GB, no logging: 28.391 / 30.334 / 28.548
   8 GB, no logging: 33.438 / 31.391 / 29.693
  16 GB, no logging: 30.715 / 32.718 / 29.805
  32 GB, no logging: 32.224 / 34.789 / 32.085

   1 GB,    logging: 43.203 / 43.364 / 44.685
   8 GB,    logging: 19.143 / 18.741 / 17.521
  16 GB,    logging: 11.211 /  8.414 /  8.081
  32 GB,    logging: 17.193 /  7.697 /  7.202

What I read from this:

If logging is not used, the times are roughly equal across runs and change
little with the partition size. That's the expected result.

If logging is enabled, the size of the partition makes a huge difference.
I assume what really matters is the amount of disk space ufs uses internally
for the log. According to the man page for mount_ufs it's 1/1024 of the
partition size, with a maximum of 64 MB (i.e. between 1 MB and 32 MB for the
partitions above). For very small partitions, ufs with logging seems to be
slower than ufs without logging. I don't think it's possible to set the ufs
log size manually, but I think it would be a good idea if either this were
possible or a larger log size were used by default - even on a 1 GB partition
one can afford to lose 32 or 64 MB of disk space for the log. It surely
depends on the access pattern (number/size of files written), but a larger
log should never make things slower, and should make them faster in some
cases. Am I wrong?
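The sizing rule from the man page (1 MB of log per 1 GB of file system,
capped at 64 MB) works out as follows; this is just the arithmetic written
out, not anything UFS itself exposes:

```shell
#!/bin/sh
# Log size per the mount_ufs rule: ~1/1024 of the file system, max 64 MB.
for size_gb in 1 8 16 32 64 128; do
    log_mb=$size_gb                      # 1 MB of log per GB
    [ "$log_mb" -gt 64 ] && log_mb=64    # capped at 64 MB
    echo "${size_gb} GB file system -> ${log_mb} MB log"
done
```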

BTW, one can see that the first write after mounting the file system
with logging is always slower than the subsequent ones. I assume that's
because the space for the log has to be allocated first. Or is/should
this be done right at mount time?

mp.
--
                         Martin Paul | Systems Administrator

       University of Vienna, Austria | http://www.par.univie.ac.at/


UFS logging vs. Solstice DiskSuite's trans metadevice "UFS logging"

Does anyone know the difference between mounting a UFS file system with
the "logging" option and using Solstice DiskSuite's trans metadevice
to do UFS logging?

Is it the same, or is DiskSuite more reliable? I've been using DiskSuite
(UFS logging) for a few years now and I've never had a problem with any
file system after a hard crash or power-off. I noticed the other day,
glancing over the mount_ufs man page, that the "logging" option describes
the same behavior as a trans metadevice, but I was curious whether it's
proven to be equally reliable.
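For comparison, the two setups look roughly like this (device names and
mount points are placeholders, not from the thread):

```shell
# Plain UFS logging -- the log lives inside the file system itself:
mount -F ufs -o logging /dev/dsk/c1t1d0s6 /export/home

# DiskSuite trans metadevice -- separate master and log devices:
metainit d10 -t d11 d12    # d11 = master (data), d12 = log device
mount -F ufs /dev/md/dsk/d10 /export/home
```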

Any info is appreciated.

-- Shawn
