page density 1% !

Post by Chris Adam » Fri, 04 Oct 2002 18:50:25



Hi all,
Our production server is currently running SQL Server 7.0
SP3. We have a table that is constantly inserted into and
deleted from; data is selected from it as well. There is a
nonclustered index on the two primary key fields. We have
a few stored procedures that insert a batch of records
into this table, select those records and insert them into
another table, then delete them once the insert into the
second table has completed successfully. The whole process
is enclosed in a transaction block, roughly as sketched
below.
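
For illustration, a rough sketch of that pattern (the
procedure, table and column names here are made up, not
our actual schema):

CREATE PROCEDURE dbo.MoveBatch
    @BatchId int
AS
BEGIN TRANSACTION

    -- the batch of records has already been inserted into
    -- WorkQueue earlier in the process; copy it to the
    -- second table...
    INSERT INTO Archive (BatchId, Col1, Col2)
    SELECT BatchId, Col1, Col2
    FROM WorkQueue
    WHERE BatchId = @BatchId

    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN
    END

    -- ...and delete it from the first table only after the
    -- copy has succeeded
    DELETE FROM WorkQueue
    WHERE BatchId = @BatchId

    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN
    END

COMMIT TRANSACTION
GO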

What I've been seeing is that the size of the table grows
continuously over time, to the point where I actually have
to drop and recreate the table to keep it manageable. I've
run DBCC SHOWCONTIG and found that the scan density is
usually up around 98%, but the average page density is
always under 2%! It's almost as though there are only a
few records per data page. The record size is only 217
bytes (not including overhead), so I don't think that is
the reason. However, in trying to find more information
about internal fragmentation, which is what page density
represents, I've only found material indicating that it is
caused by a large record size.
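
For reference, this is roughly how the numbers are being
gathered (SQL 7.0 syntax; the table name is a placeholder):

-- SQL Server 7.0 wants the object id rather than the name
DECLARE @id int
SET @id = OBJECT_ID('WorkQueue')
DBCC SHOWCONTIG (@id)
-- Scan Density reflects extent-level (external)
-- fragmentation; Avg. Page Density shows how full each
-- page is (internal fragmentation)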

Because there is no clustered index on this table, records
are inserted into a heap. Putting a clustered index on the
table seems to clear up the continuous growth, but I am
concerned that many concurrent processes trying to insert
records will create data page 'hot spots', affecting the
performance of the stored procedures involved.
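
For example, a clustered index on the two key columns
could be created along these lines (index and column names
are placeholders):

CREATE CLUSTERED INDEX CIX_WorkQueue
    ON WorkQueue (KeyCol1, KeyCol2)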

I would appreciate any insight into this behavior. Records
are constantly being added to and then deleted from the
table. According to the DBCC SHOWCONTIG output, it appears
that it's the data pages that are fragmented. Is this true?
TIA


page density 1% !

Post by Andrew J. Kelly » Fri, 04 Oct 2002 19:30:21


Chris,

It is doubtful you will see hot-spot issues with the CI, but you need to
ensure the column(s) you place the CI on won't cause a lot of page splits.
If the CI is on a range-type or ever-increasing value (such as an identity
column), you will avoid the page splits. Ideally this would also be the
column you use to delete the rows.
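
For example (purely illustrative names, and assuming an identity column can
be added to the table):

-- New rows always land at the end of the clustered index, so inserts do not
-- cause page splits, and deleting by the same ever-increasing key empties
-- whole pages from the front of the table.
ALTER TABLE WorkQueue ADD RowId int IDENTITY(1,1) NOT NULL
CREATE CLUSTERED INDEX CIX_WorkQueue_RowId ON WorkQueue (RowId)

-- the delete in the stored procedure can then be driven by the same key
-- (@LastRowCopied is a hypothetical variable holding the highest RowId that
-- was successfully copied to the second table)
DELETE FROM WorkQueue WHERE RowId <= @LastRowCopied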

--
Andrew J. Kelly   SQL MVP
Targitinteractive, Inc.

Check out the PASS Community Summit - Seattle, the largest and only user
event entirely dedicated to SQL Server, November 19-22.
http://www.sqlpass.org/events/seattle/index.cfm


Quote:> Hi all,
> Our production server is currently running SQL Server 7.0
> sp3. We have a table that is inserted to, and deleted from
> all the time. Data is selected from it as well. There is a
> non clustered index on the primary key fields (2). We have
> a few stored procedures that insert a bunch of records
> into this table, select and insert those records into
> another table, then delete these records when the insert
> into the second table has completed successfully. This
> process is enclosed in a transaction block.

> What I've been seeing is the size of the table grows
> continuously over a period of time, when I actually have
> to drop / recreate the table to keep it manageable. I've
> run a dbcc showcontig and find that the scan density is
> usually up around 98%, but the average page density is
> always under 2%! It's almost as though there is only a few
> records per data page. The record size is only 217 bytes
> (not including overhead), so I don't think this is the
> reason. However, in trying to find more information about
> internal fragmentation, which is what the page density
> represents, I've only found information that indicates
> that this is caused by a large record size.

> Because there is no clustered index on this table, records
> are inserted to a heap. Putting a clustered index on the
> table seems to clear up the continuous table growth, but I
> am concerned that many concurrent processes trying to
> insert records will create data page 'hot spots',
> affecting the performance of the stored procedures that
> are involved.

> I would appreciate any insight into this behavior. Records
> are constantly being added and then deleted from the
> table. According to the dbcc showcontig output, it appears
> that it's the data pages that are fragmented. Is this true?
> TIA


 
 
 

FILLFACTOR (SCAN DENSITY VS. AVERAGE PAGE DENSITY)

Scenario:
 Created a clustered index and nonclustered indexes on a
table with 5,953,675 rows, using a fillfactor of 95 for
the clustered index and 98 for the nonclustered indexes.
  After 82,119 inserts (about 1% of the table), the
nonclustered indexes had a scan density of 83, logical
fragmentation of 4, and an avg. page density of 96. The
clustered index had a scan density of 96, logical
fragmentation of 1, and an avg. page density of 88.
  Methinks I should have used a fillfactor of 96 for the
nonclustered indexes and a fillfactor of 96 for the
clustered index. The table and indexes span multiple
files. I'm confused about the trade-off between scan
density and avg. page density with nightly inserts. I am
in an OLAP (data warehouse) environment and have other
tables with similar scenarios (roughly one percent of the
table being inserted nightly).
  Am open to anything. Thank you
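
A rough sketch of how the fillfactor could be changed and
re-checked (the table and index names are placeholders;
SQL 7.0 syntax):

-- rebuild an existing nonclustered index with a new fillfactor
DBCC DBREINDEX ('FactSales', 'IX_FactSales_DateKey', 96)

-- then compare scan density and avg. page density again
-- after the next nightly load
DECLARE @id int
SET @id = OBJECT_ID('FactSales')
DBCC SHOWCONTIG (@id)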
