Largest chunk size < largest extent size?!? (fwd)

Post by Will Hartung - Master Rallyei » Tue, 13 Apr 1993 05:33:06

> Subject: Largest chunk size < largest extent size?!?
> Date: Fri, 9 Apr 93 13:40:32 GMT

> . . . .

> Now following the advice of the online admin manual, which says "make your
> chunks large" (since you can have only a small number of them), I set up
> the raid array to be one logical disk as 4Gb (it's 5 x 1.2Gb disks, with
> 1/5 of it used for parity information).

> (lots -o- questions deleted)

Hey folks, can we possibly expand this into a more general "Gosh, I've
got 4 Gig of data to manage, where do I start!".

We have a system that we're designing that will be hitting at LEAST 4
Gig in a few months. Things start taking a LONG time when the system
gets that big, plus we're pushing the limits of archive media.

I also remember hearing horror stories about reindexing million-row tables.

So, just some general ideas about managing LARGE volumes of data would
be appreciated.

This system will probably use OnLine 5.0 (preferably 5.01, if it's out by
the time we're ready); otherwise we'll just stick with 4.1.



Largest chunk size < largest extent size?!?

->Subject: Largest chunk size < largest extent size?!?
->Date: Fri, 9 Apr 93 13:40:32 GMT

->Organization: University of Denver, Dept. of Math & Comp. Sci.
... omitted ...
->This is brand new, so I'm just initializing the db on there for the first time.
->I go into tbmonitor, do param/init, enter the size of the raid array
->(which is 4620858K) and it says (when I try to leave the offset at 0),
->that this is larger than the max size for a chunk.
->Tech support says chunks can only be 2^20 pages (1048576), and since our
->page size is 2k, that's 2Gb for a max size chunk.  That is, online uses a
->20 bit number for the page# in a chunk.
->Umm, guys, page 2-102 of the online 5.00 admin guide seems to imply otherwise,
->saying that the max size of an extent is 16M pages (or 32Gb), and since
->an extent must be contiguous disk space, it must reside on one chunk, so
->one chunk, by implication, must be at least this large.  [1]
... omitted ...
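The arithmetic quoted above can be checked directly. A minimal sketch (the constants are taken from the figures in the post: a 20-bit page number per chunk, 2 KB pages, and the admin guide's 16M-page extent ceiling):

```python
# Figures quoted in the thread (Informix OnLine 5.0, 2 KB page size).
PAGE_SIZE = 2 * 1024                 # bytes per page

# Per tech support: the page number within a chunk is a 20-bit field.
MAX_CHUNK_PAGES = 2 ** 20            # 1,048,576 pages
max_chunk_bytes = MAX_CHUNK_PAGES * PAGE_SIZE    # 2 GB

# Per the admin guide (p. 2-102): an extent may be up to 16M pages.
MAX_EXTENT_PAGES = 16 * 2 ** 20
max_extent_bytes = MAX_EXTENT_PAGES * PAGE_SIZE  # 32 GB

# The raid array size the poster entered in tbmonitor, in KB.
raid_kb = 4620858
raid_bytes = raid_kb * 1024

print(max_chunk_bytes // 2 ** 30)        # 2  (GB max chunk)
print(max_extent_bytes // 2 ** 30)       # 32 (GB documented extent max)
print(raid_bytes > max_chunk_bytes)      # True: the array won't fit in one chunk
```

So the ~4.4 GB array is indeed more than twice the 2 GB chunk limit, even though the documented extent maximum is far larger, which is exactly the mismatch the poster ran into.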


Hello, Andrew,

Sorry I can't help you with your Online problem; but I do take issue with
the inference you drew at the point I marked [1].

Sometimes when you are working with related maxima in real, as opposed to
theoretical, systems, you run into cases where the logic you have followed
won't work.  It is possible that the extent size max is a looser constraint
than the chunk size max, even tho' an extent must be contained in a chunk.
This is more of an implementation gotcha than an error.  The extent size
max may be designed for far into the future, while the chunk max (which is
closer to the hardware) is constrained by current technology or by design.
I do agree with you that the documentation should be better.  A statement
like the following would go a long way toward avoiding this confusion:

"Extent size is limited to <value> and may be further limited by physical
implementation constraints.  Refer to <some manual> for the physical
limitations of your system."


| Martin Marietta, LSC         | ( Please note: My opinions do not   ) |
| P.O. Box 179, M/S 5422       | ( represent official Martin policy. ) |
| Denver, Colorado 80201-0179  | Voice: 303-977-9998                   |
