Universe: ANALYZE.FILE question

Post by Chuck Hamilton » Sun, 19 Mar 1995 02:11:43

Where does Universe get the "Number of data bytes" value from when you
run ANALYZE.FILE? I've tried totalling up the length of each key, plus
the length of each record, plus 2 (one for the end-of-record mark and
one for the end-of-field mark between the key and the record) but come
up with a substantially lower number.
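For concreteness, here's a minimal UniVerse BASIC sketch of the tally described above (MYFILE is just a placeholder file name):

    * Naive byte count: key + record + 2 separator bytes per record.
    OPEN 'MYFILE' TO F.VAR ELSE STOP
    TOTAL = 0
    SELECT F.VAR
    LOOP
       READNEXT ID ELSE EXIT
       READ REC FROM F.VAR, ID THEN
          TOTAL = TOTAL + LEN(ID) + LEN(REC) + 2
       END
    REPEAT
    PRINT 'Naive data byte total: ':TOTAL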

        ><> Chuck Hamilton <><

Universe: ANALYZE.FILE question

Post by Jeff Janner » Sun, 19 Mar 1995 06:10:27

>Where does Universe get the "Number of data bytes" value from when you
>run ANALYZE.FILE? I've tried totalling up the length of each key, plus
>the length of each record, plus 2 (one for the end-of-record mark and
>one for the end-of-field mark between the key and the record) but come
>up with a substantially lower number.

>    ><> Chuck Hamilton <><

You also need to add in length bytes, offset bytes, header blocks, and
other internal bytes that help the engine scan the file and its groups.
Oh, and I almost forgot the biggy: dead bytes, i.e. copies of records that
have been deleted, or modified such that they could not be rewritten in the
same space as before, plus leftover bytes when a new version of a record is
smaller than the old one.  In the first two cases the record is just flagged
as deleted and removed later at a more opportune time.  (I'm not sure whether
the bytes get reused if subsequent smaller records are added to the group.)
That more opportune time is usually a resize, though depending on the version
it might compress groups on the fly, so to speak.  So go ahead and trust the
value that ANALYZE.FILE is giving you.  It's 99% likely to be correct
(100% if no bugs, right? :).
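A back-of-the-envelope sketch of why the naive tally undercounts (every number below, including the per-record header size, is purely hypothetical; the real internal overhead varies by release):

    * All figures hypothetical, for illustration only.
    NRECS      = 1000
    AVG.KEY    = 8
    AVG.REC    = 150
    REC.HDR    = 12      ;* assumed length/offset bytes per record
    DEAD.BYTES = 5000    ;* space still held by deleted/moved record copies
    GROUP.HDRS = 2000    ;* group and file header blocks
    NAIVE  = NRECS * (AVG.KEY + AVG.REC + 2)
    ACTUAL = NAIVE + NRECS * REC.HDR + DEAD.BYTES + GROUP.HDRS
    PRINT 'Naive tally     : ':NAIVE     ;* 160000
    PRINT 'With overhead   : ':ACTUAL    ;* 179000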
--
Jeffrey W. Janner     |  "There's no good idea that's so good you can't ruin |

Universe: ANALYZE.FILE question

Post by David Ho » Thu, 23 Mar 1995 12:41:59

>Where does Universe get the "Number of data bytes" value from when you
>run ANALYZE.FILE? I've tried totalling up the length of each key, plus
>the length of each record, plus 2 (one for the end-of-record mark and
>one for the end-of-field mark between the key and the record) but come
>up with a substantially lower number.

>    ><> Chuck Hamilton <><

ANALYZE.FILE analyzes uniVerse dynamic linear-hashed files (type 30).

These "dynamic" files grow and shrink the file's modulus (number of groups)
based on the SPLIT.LOAD and MERGE.LOAD factors (it's actually a little more
complex than that). As their names imply, these factors govern how full an
individual group may get before it is split, and conversely how empty it may
get before it is merged with another group in the file. If I recall correctly,
the defaults are a SPLIT.LOAD of 80% and a MERGE.LOAD of 50%, which implies
that by default a group splits once it is about 80% full, so each group
normally carries a fair amount of deliberately empty space. Hence your
discrepancy, simply put; your file may differ from this generalisation.
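As a rough worked example of those thresholds (the group size and percentages below are assumptions for illustration, not read from any particular file):

    * Hypothetical figures, for illustration only.
    GROUP.SIZE = 2048                  ;* dynamic file group of one 2K buffer
    SPLIT.LOAD = 80                    ;* split threshold, percent
    MERGE.LOAD = 50                    ;* merge threshold, percent
    SPLIT.AT = GROUP.SIZE * SPLIT.LOAD / 100
    MERGE.AT = GROUP.SIZE * MERGE.LOAD / 100
    PRINT 'Group splits past  ':SPLIT.AT:' bytes'   ;* 1638.4
    PRINT 'Groups merge below ':MERGE.AT:' bytes'   ;* 1024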

David.
