Problem with table size

Post by Ian Butt » Fri, 23 May 1997 04:00:00



Dear Oracle Users,

PROBLEM
~~~~~~~
        I have a problem with a table that continues to increase in size. Rows
are deleted and new rows are inserted, keeping the total number of rows
about the same, but the space occupied by the table continues to
increase. I think the problem relates to my use of a LONG datatype in
the table. If you think you can help, please read the details below.

DETAILS
~~~~~~~
        I am using Oracle7 Server Release 7.3.2.3.0 - Production Release on an
IBM RS/6000 running AIX 4.1.

        A number of tables have this problem and all include a LONG datatype.
An example of the structure of one of these tables follows:

        SQL> desc text_tbl;
        Name                            Null?    Type
        ------------------------------- -------- ----
        MESSAGE_ID                      NOT NULL CHAR(20)
        MESSAGE_TYPE                    NOT NULL NUMBER(4)
        ORIGINATOR                               CHAR(40)
        STORED_DT                       NOT NULL DATE
        MESSAGE_TEXT                    NOT NULL LONG
        VERSION                         NOT NULL NUMBER(3)

        The table is modified by a C application using embedded SQL.

        The number of rows per MESSAGE_ID is controlled by the VERSION column;
the maximum value allowed for VERSION is held in a separate table. When
VERSION reaches the maximum it wraps back to 1: the application deletes
the current row for that MESSAGE_ID with VERSION equal to 1 and then
inserts the new row with the MESSAGE_TEXT associated with it. VERSION
numbers continue to be incremented from that value.

        The application updating this table does not seem to have any problems
nor do the applications that read the message text fields.

        An analysis of the table shows that it has:

        Rows:                     29014
        Minimum MESSAGE_TEXT:         1 byte
        Maximum MESSAGE_TEXT:     14660 bytes
        Total MESSAGE_TEXT:     2885853 bytes

        For this application the LONG is expected to handle only up to
30k bytes. The table is updated frequently (about 27500 times in 24
hours) and, because the maximum number of versions has been reached for
most MESSAGE_IDs, most updates involve the delete/insert mentioned
above.

        Despite the fact that the actual text contained in the table is 2885853
bytes the DBA reports the following statistics for the table:

Owner   Type    Name       Bytes       Blocks   Extents   Max Extents   Initial Extent   Next Extent   Pct
------  ------  --------   ----------  -------  --------  ------------  ---------------  ------------  ----
PROD    TABLE   TEXT_TBL   468828160   114460   216       249           26357760         20807680      5

Number of rows in table "TEXT_TBL" is: 29014

        The table actually occupies 468828160 bytes and continues to grow!
Eventually it reaches the maximum number of extents allowed and
something has to be done. At the moment the DBA is just increasing
MAXEXTENTS.

QUESTIONS
~~~~~~~~~

        Is this normal for an Oracle table containing a long datatype?

        If this is not normal, what is causing it?

        How can I fix it or avoid it?

Thank you for any responses.

Ian Butt
Bureau of Meteorology, Australia

 
 
 

Problem with table size

Post by Dick Allie » Fri, 23 May 1997 04:00:00



<snip>


You don't indicate the block size or the PCTFREE and PCTUSED parameters
on the table.  If rows are deleted from a block but the space still
occupied does not fall below PCTUSED, the block will not be made
available for insertion of new rows.  From the MAXEXTENTS parameter of
249 I assume the block size is 4096 bytes (4K).  Try analyzing the table
and then look at DBA_TABLES to see what the chain count is and the
average free space.  You may have to export the table, truncate it, then
import it back in to use the space efficiently.  It probably wouldn't be
a bad idea to have a tablespace just for this table, to make it easier
to monitor and reorganize periodically.
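For reference, a sketch of those checks (table and owner names taken
from the original post):

    ANALYZE TABLE prod.text_tbl COMPUTE STATISTICS;

    SELECT chain_cnt, avg_space, blocks, empty_blocks, avg_row_len
    FROM   dba_tables
    WHERE  owner = 'PROD'
    AND    table_name = 'TEXT_TBL';

The ANALYZE refreshes the statistics so the DBA_TABLES figures reflect
the current state of the table rather than the last analysis.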

Hope this helps.

Dick Allie


 
 
 

Problem with table size

Post by Ken Eato » Fri, 23 May 1997 04:00:00


A couple of questions...  Is the long text field filled in when the
record is added, or is it often updated to its final value afterwards?
(Something must be placed in it because of the NOT NULL constraint).  
What is your PCTFREE set to?  How much updating occurs to the record
after it is added?  Why not use varchar2 on the other character fields?
(This is not likely your problem, but could save you some space.) Is an
export/import possible to do within your availability window?  What is
your blocksize?  I suspect that a lot of chaining is occurring, but it
is hard to tell with the information provided. What is the life of a row
before the numbers wrap and it gets deleted?  I would also create the
table with the long field as the last column...
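Something like this (a hypothetical layout, not tested here): VARCHAR2
in place of the CHAR columns, and the LONG moved to the end:

    CREATE TABLE text_tbl_new (
        message_id    VARCHAR2(20)  NOT NULL,
        message_type  NUMBER(4)     NOT NULL,
        originator    VARCHAR2(40),
        stored_dt     DATE          NOT NULL,
        version       NUMBER(3)     NOT NULL,
        message_text  LONG          NOT NULL
    );

VARCHAR2 stores only the bytes actually used, where CHAR blank-pads to
its declared length.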

Thanks,

 
 
 

Problem with table size

Post by Ian Butt » Sat, 24 May 1997 04:00:00



> A couple of questions...  Is the long text field filled in when the
> record is added, or is it often updated to its final value afterwards?
> (Something must be placed in it because of the NOT NULL constraint).

The MESSAGE_TEXT field contains the text of weather forecasts and is
filled in when the row is added or updated. There is no such thing as a
final value, as the forecasts are perishable and can also vary greatly
in length from day to day. The value in VERSION allows a brief history
to be retained: generally 5 versions, which for most forecasts will last
a couple of days. Other text fields contain weather observations and are
updated at 10-minute intervals.

> What is your PCTFREE set to?

Our DBA has provided the following information from dba_tables:

 OWNER                  PROD
 TABLE_NAME             TEXT_TBL
 TABLESPACE_NAME        PRODUCTS
 CLUSTER_NAME
 PCT_FREE               10
 PCT_USED               40
 INI_TRANS              1
 MAX_TRANS              255
 INITIAL_EXTENT         26357760
 NEXT_EXTENT            20807680
 MIN_EXTENTS            1
 MAX_EXTENTS            249
 PCT_INCREASE           5
 FREELISTS              1
 FREELIST_GROUPS        1
 BACKED_UP              N
 NUM_ROWS               29030
 BLOCKS                 113384
 EMPTY_BLOCKS           1075
 AVG_SPACE              3936
 CHAIN_CNT              135
 AVG_ROW_LEN            183
 DEGREE                 1
 INSTANCES              1
 CACHE                  N
 TABLE_LOCK             ENABLED

According to the documentation I have available, AVG_SPACE is the
average number of bytes free in each data block. At 3936 bytes free in a
4K block this value is obviously too high, and it appears that only a
single row is being stored in each block. I can't see how this can be
caused by a PCT_FREE of 10 (the default).
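As a rough cross-check of that reading, using the DBA_TABLES figures
above:

    -- 113384 blocks * 3936 bytes average free = ~446 MB free
    -- of the ~469 MB allocated, consistent with roughly one row per block
    SELECT blocks, avg_space, blocks * avg_space AS approx_free_bytes
    FROM   dba_tables
    WHERE  owner = 'PROD'
    AND    table_name = 'TEXT_TBL';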

> How much updating occurs to the record
> after it is added?  

As I mentioned above these rows are being continually updated at
intervals varying from 10 minutes to months or even years if the text
relates to a warning that is rarely issued.

> Why not use varchar2 on the other character fields?
> (This is not likely your problem, but could save you some space.)

Certainly varchar2 could be used.

> Is an
> export/import possible to do within your availability window?

Yes.

> What is
> your blocksize?

4k.

> I suspect that a lot of chaining is occurring, but it
> is hard to tell with the information provided.

From the above figures, a chain count of 135 does not seem high in the
context of 113384 blocks in use.

> What is the life of a row
> before the numbers wrap and it gets deleted?  

As mentioned above this can vary greatly but most rows would be deleted
and inserted a couple of times a day.

> I would also create the
> table with the long field as the last column...

O.K.

Any other comments about the above information would be appreciated.
Thanks.

Ian Butt.
Bureau of Meteorology, Australia

 
 
 

Problem with table size

Post by Allen Kirb » Sat, 24 May 1997 04:00:00


<snip>

> Our DBA has provided the following information from dba_tables:

>  OWNER                  PROD
>  TABLE_NAME             TEXT_TBL
>  TABLESPACE_NAME        PRODUCTS
>  CLUSTER_NAME
>  PCT_FREE               10
>  PCT_USED               40
>  INI_TRANS              1
>  MAX_TRANS              255
>  INITIAL_EXTENT         26357760
>  NEXT_EXTENT            20807680
>  MIN_EXTENTS            1
>  MAX_EXTENTS            249
>  PCT_INCREASE           5
>  FREELISTS              1
>  FREELIST_GROUPS        1
>  BACKED_UP              N
>  NUM_ROWS               29030
>  BLOCKS                 113384
>  EMPTY_BLOCKS           1075
>  AVG_SPACE              3936
>  CHAIN_CNT              135
>  AVG_ROW_LEN            183
>  DEGREE                 1
>  INSTANCES              1
>  CACHE                  N
>  TABLE_LOCK             ENABLED

<snip>

Something to remember about PCTFREE, PCTUSED and free lists; this may
not be the problem you are having, but the symptoms are the same.

Oracle will attempt to insert a row into a block that is on the free
list.  If, in inserting the row, the free space in the block would fall
below the pctfree parameter, the row isn't inserted.  Here's the
kicker:  It now checks to see if the block is at least PCTUSED full.
If it is, it takes the block off the free list.  If it isn't, it leaves
the block on the free list for more insert attempts.  Normally this
works fine, but the following scenario will cause a table to continually
allocate new extents and not reuse old blocks, even if they are empty:

Suppose PCTFREE is 10, PCTUSED is 85, and a 4K block is 80% full. A new
row is 800 bytes (20% of the block). The row cannot be inserted because
that would leave less than the 10% PCTFREE, but the block isn't 85% used
yet, so it stays on the free list. If every new row were 800 bytes you
would never remove those blocks from the free list, and Oracle only
searches the free list for so long before allocating a new extent (for
performance reasons). I believe this happened to us on a table similar
to yours: it continually grabbed new blocks and extents and would never
reuse old blocks, even after they had been totally emptied. This
typically happens when PCTUSED and PCTFREE add up to close to 100, but
with your large variable row size it MIGHT be applicable here as well.

You might also consider inserting the row with a fixed, known size so
you can control the number of rows inserted into each block, then
update the long column to what it's supposed to be.
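A sketch of that approach, in the same embedded-SQL style as the rest of
this thread (the 2000-byte placeholder size is purely illustrative):

    -- Insert with a fixed-size placeholder so rows pack predictably,
    -- then update the LONG column to its real value
    INSERT INTO text_tbl (message_id, message_type, stored_dt,
                          message_text, version)
    VALUES (:h_message_id, :h_message_type, SYSDATE,
            RPAD('X', 2000), :h_version);

    UPDATE text_tbl
    SET    message_text = :h_message_text
    WHERE  message_id   = :h_message_id
    AND    version      = :h_version;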
--
---
Allen Kirby                     AT&T ITS Production Services

 
 
 

Problem with table size

Post by Ken Eato » Sat, 24 May 1997 04:00:00


I think you have received some good input from Allen and Dick.  Your
PCTFREE doesn't appear to be much of a problem.  I would try upping your
PCTUSED.  If I am reading your note correctly, you have a lot of fairly
small rows (avg len = 183). PCTUSED determines when a block is placed
back on the freelist; a higher value means it will be available sooner.
If your inserts and deletes are likely to be uniformly spread across
your blocks, you will have many blocks that never get back on the
freelist even though there is room for rows.  You can run a report to
see how many blocks you are currently using at two points in time to
find the rate of increase in your table.  Once you increase the number,
it will apply to all blocks in the table, but not until an operation
occurs on that block (like a row deletion). Gauge it for a similar time
period and see if the blocks increase more slowly.  Dick suggested
placing this table in its own tablespace, which is a good idea.  You may
be doing that already.  I would recommend reading Cary Millsap's paper
on 'Oracle7 Server Space Management', as there are a number of pointers
in there that may help.  He has formulas for setting storage parameters
and recommends setting Initial=Next and PCTIncrease=0.  He also has a
discussion on Exporting and Importing.  You should try Exp/Imp, as Dick
suggested, to get yourself back in line periodically.  If you do the
Export/Import, make sure your storage parameters are where you want them
after the import, i.e. Cary Millsap recommends against the Compress=Y
option, which resets storage parameters from your tuned settings.
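A sketch of those changes (the PCTUSED and NEXT values are illustrative
only, not tuned recommendations):

    -- Raise PCTUSED so partly emptied blocks return to the free list sooner
    ALTER TABLE prod.text_tbl PCTUSED 60;

    -- Millsap-style storage: a fixed NEXT size and no percentage growth
    ALTER TABLE prod.text_tbl STORAGE (NEXT 20971520 PCTINCREASE 0);

Note that a changed PCTUSED only takes effect on a block the next time
an operation touches it, and extents already allocated keep their sizes.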

 
 
 

Problem with table size

Post by Will Kooima » Fri, 30 May 1997 04:00:00


Your table continues to increase in size because you are doing
delete/insert logic.  If you change your code to update/insert logic
your problem should go away.

Ex:

delete from text_tbl where message_id = :h_message_id;

insert into text_tbl
  (message_id, ..., message_text, version)
values
  (:h_message_id, ..., :h_message_text, :h_version);

Should be

update text_tbl
set message_type = :h_message_type,
    originator   = :h_originator,
    .
    .
    message_text = :h_message_text,
    version      = :h_version
where message_id = :h_message_id;

if ( sqlca.sqlerrd[2] == 0 )
{
   insert into text_tbl
      (message_id, ..., message_text, version)
   values
      (:h_message_id, ..., :h_message_text, :h_version);
}

The problem has to do with Oracle never lowering the high water mark
when rows are deleted.  I've seen this problem a million times, and it
has always been corrected by using update/insert logic.

The LONG column is probably responsible for exaggerating the effects of
using delete/insert logic.

You should also compress the table after making the code change.

Will.
--
======================================================================
Will Kooiman                        Computer Systems Authority
Systems Consultant                  6380 LBJ Freeway, Suite 181
(972) 960-0180 x236                 Dallas, TX 75240

 
 
 

1. urgent problem re table size

Hi all,

I am running SQL Server 7 on an NT box.

Last night I ran a job which added an integer column to a table. The
table contains approx 620000 rows, so with the storage size of an int
being 4 bytes I assumed that the table would grow by approx 2.5MB.

The job was scripted as follows and ran successfully.
==========
begin transaction
 set quoted_identifier on
 set transaction isolation level serializable
commit

begin transaction
 create table dbo.Tmp_MemRef
 (
    note_id      int                          not null identity (1, 1)
   ,member     varchar  (0255)      null
   ,refdate      datetime                 null
   ,ref             varchar  (0255)      null
   ,comment   text                        null
)  on [primary]
 textimage_on [primary]

set identity_insert dbo.Tmp_MemRef off

if exists
 (select *
  from   dbo.MemRef)
   EXEC('INSERT INTO dbo.Tmp_MemRef (member, refdate, ref, comment)
   SELECT member, refdate, ref, comment FROM dbo.MemRef TABLOCKX')
drop table dbo.MemRef
execute sp_rename N'dbo.Tmp_MemRef', N'MemRef', 'OBJECT'
create clustered index IX_MemRef on dbo.MemRef
 (
   member
 ) with fillfactor = 96 on [primary]
commit
==========

However the table has nearly doubled in size, from approx 500 MB to
940 MB (this info was gathered using sp_spaceused).  As a result of this
unexpected growth I now have disk space issues, because my log,
differential and full backup devices are also growing.  In fact my full
backup failed last night; I am assuming this is due to running out of
disk space.  The error in the log is as follows:

/*
2001-11-15 02:51:51.48 kernel   BackupMedium::ReportIoError: write failure
on backup device 'd:\mssql7\backup\membernotes_full_backup_device.bak'.
Operating system error 112(error not found).
2001-11-15 02:51:51.53 kernel   BackupSoftFile::WriteMediaFileMark:  failure
on backup device 'd:\mssql7\backup\membernotes_full_backup_device.bak'.
Operating system error 112(error not found).
*/

The questions I have are:

a) Why did the table grow so much, and is there anything I can do to
reduce it?
b) Why did my backup fail - is it due to running out of disk space?

regards and thanks

Andrew
