Large deletes/inserts against large tables....

Large deletes/inserts against large tables....

Post by Mike Rile » Tue, 15 Sep 1998 04:00:00



We've got an 18 million row table into which we insert anywhere from 25k
to 100k rows per night, and from which, on a nightly basis, we delete
(and archive) approximately 500k to 700k rows.

At one time we could insert or delete approx. 100k rows per hour, but now
performance has degraded to approx. 10k rows per hour.

The table has a clustered index and 4 other indexes that have existed
on the table since it was created.  The base table itself occupies
approx. 5 GB of disk.

Does anyone have any ideas, or know of any documentation that we could
look at to understand what is happening, how to fix it, and how to
systematically monitor the table for such problems?

thanks in advance, mike

 
 
 

Large deletes/inserts against large tables....

Post by Mary Owens » Tue, 15 Sep 1998 04:00:00


Sql Power Tools
12 Barnfield Ct.
Upper Saddle River, N.J. 07458

Contact Person: Mary Owens             September 8, 1998

FOR IMMEDIATE RELEASE:

Cyrano Competitive Upgrade Offer from Sql Power Tools
-----------------------------------------------------

Cyrano competitive upgrade offer

  - 40% discount on the Sql Power SniFFFer enterprise version!!!

1. Want 7x24 non-intrusive SQL statement capture and
   performance analysis without having to route end-users
   thru open server middleware that slows user connections
   and the SQL transaction flow to the database server?

2. Want to perform 7x24 non-intrusive database server
   SQL statement or show plan performance analysis?

3. Want to capture, monitor and analyze SQL transaction
   performance for high volume production database servers?

4. Want these benefits with NO middleware (Open Server)
   installation issues, NO changes to user workstations,
   NO changes to the database servers being monitored,
   NO performance-degrading agents or probes on the database
   server, and NO intrusive database server auditing or
   SQL traces?

5. Want NO resource consumption on the database server,
   network or client workstations?  The product's CPU time,
   memory, disk I/O and network traffic on the database
   server, network and client workstations can be zero with
   the Sql Power SniFFFer!

6. Do NOT want to update every client workstation's 'sql.ini'
   or 'interfaces' file for connectivity to an open server
  (not an easy task for 50, 200, or 1000+ end-users).

7. Do NOT want to require every end-user to connect to an
   open server (middleware) and then pass every SQL
   transaction through an open server.

   ** This delays end-user connect time, delays the
      SQL transaction, and system throughput may be
      constrained by the architecture and efficiency of
      the middleware.

   ** Worse, if the middleware fails - all end-users are
      disconnected from the database server.

8. Want an economical solution that does not cost tens of
   thousands of dollars for a single copy?

9. Want to be able to deploy an Enterprise solution to
   SQL transaction capture, monitoring and analysis for
   all development, test and production database servers,
   Data Marts and WWW servers?

Then you NEED the SQL Power SniFFFer !!!

The SQL Power SniFFFer is a powerful, non-intrusive
SQL performance monitoring product for 7x24 production
databases, Data Marts, WWW and Year 2000 SQL performance
analysis.  Available for Windows 95, Windows NT and UNIX.

The Sql Power SniFFFer product reverse engineers the
SQL network packet flow to the database server.  The
product does not have to run on the database server,
nor on any of the client workstations connected to the
database server.  Installation takes a few minutes.

The Sql Power SniFFFer requires no database vendor
DB-Library, CT-Library or ODBC communications software.
Likewise, it does not need any intrusive middleware
in the form of application servers or Open Server
software.

The Sql Power SniFFFer captures, monitors and analyzes
the performance of all database SQL transactions - 7x24.

Contact Sql Power Tools IMMEDIATELY for the
competitive upgrade offer details and qualification
requirements for upgrade to the Sql Power SniFFFer
enterprise edition.

Mary Owens

Sql Power Tools
12 Barnfield Ct.
Upper Saddle River, NJ  07458 USA

www.sqlpower.com
201.825.9511

Any copyrights or trademarks used are those of the respective company.

 
 
 

Large deletes/inserts against large tables....

Post by Gerald Whyt » Tue, 15 Sep 1998 04:00:00


If your system is slow in finding the records to insert/delete, then Update
Statistics may improve this.  If the system is slow doing the updates, then
rebuilding the clustered and other indexes with a fill factor of 50% should
allow for the high change rate of your table.  How often are you committing
your transactions?
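
Roughly, those two steps look like this from isql (just a sketch; the table,
index and column names below are made-up placeholders, not Mike's actual
schema):

    -- refresh distribution statistics so the optimizer finds rows efficiently
    update statistics big_table
    go
    sp_recompile big_table    -- make cached plans pick up the new statistics
    go
    -- rebuild the clustered index half empty so nightly inserts have room
    drop index big_table.big_table_cix
    go
    create clustered index big_table_cix
        on big_table (key_col)
        with fillfactor = 50
    go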

> We've got an 18 million row table into which we insert anywhere from 25k
> to 100k rows per night, and from which, on a nightly basis, we delete
> (and archive) approximately 500k to 700k rows.

> At one time we could insert or delete approx. 100k rows per hour, but now
> performance has degraded to approx. 10k rows per hour.

> The table has a clustered index and 4 other indexes that have existed
> on the table since it was created.  The base table itself occupies
> approx. 5 GB of disk.

> Does anyone have any ideas, or know of any documentation that we could
> look at to understand what is happening, how to fix it, and how to
> systematically monitor the table for such problems?

> thanks in advance, mike

 
 
 

Large deletes/inserts against large tables....

Post by Roger G. Abbot » Tue, 15 Sep 1998 04:00:00


Can you run sp_sysmon during each phase of the run and post the output?
Also what do your OS stats say?
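
For example, something like this during the insert window and again during
the delete window (the 10-minute interval and the "diskio" section name are
only illustrations):

    -- full report for a 10-minute sample
    sp_sysmon "00:10:00"
    go
    -- or limit the report to one section, e.g. disk I/O
    sp_sysmon "00:10:00", "diskio"
    go

On UNIX, vmstat/iostat samples taken over the same windows make it easier
to tell whether the box is CPU or I/O bound.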

> We've got an 18 million row table into which we insert anywhere from 25k
> to 100k rows per night, and from which, on a nightly basis, we delete
> (and archive) approximately 500k to 700k rows.

> At one time we could insert or delete approx. 100k rows per hour, but now
> performance has degraded to approx. 10k rows per hour.

> The table has a clustered index and 4 other indexes that have existed
> on the table since it was created.  The base table itself occupies
> approx. 5 GB of disk.

> Does anyone have any ideas, or know of any documentation that we could
> look at to understand what is happening, how to fix it, and how to
> systematically monitor the table for such problems?

> thanks in advance, mike

 
 
 

Large deletes/inserts against large tables....

Post by Herve » Wed, 16 Sep 1998 04:00:00



> We've got an 18 million row table into which we insert anywhere from 25k
> to 100k rows per night, and from which, on a nightly basis, we delete
> (and archive) approximately 500k to 700k rows.

> At one time we could insert or delete approx. 100k rows per hour, but now
> performance has degraded to approx. 10k rows per hour.

> The table has a clustered index and 4 other indexes that have existed
> on the table since it was created.  The base table itself occupies
> approx. 5 GB of disk.

> Does anyone have any ideas, or know of any documentation that we could
> look at to understand what is happening, how to fix it, and how to
> systematically monitor the table for such problems?

> thanks in advance, mike

Hi,
A clustered index fill factor of around 50% is right for heavy insert
activity.
To track down these problems, investigate as follows:
    - check your checkpoint parameter.
    - change your commit strategy (commit every 1,000 rows inserted or
      deleted; see the sketch after this list).
    - increase the size of tempdb and of your transaction log.
    - drop the 4 other indexes before the insert/delete and recreate them
      afterwards (if possible).
    - don't use a clustered index for the PK; use a unique non-clustered
      index instead and put it on a separate disk (use a fast SCSI disk
      rather than a RAID disk).
    - split the large table into two parts, one for the inserted data and
      the other for the deleted data (if possible).
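
A rough sketch of the batched-commit delete (the table name, date column
and 90-day cutoff are made up for illustration, and the archive step is
assumed to have been done already):

    declare @rows int
    select @rows = 1
    set rowcount 1000             -- each delete touches at most 1,000 rows
    while @rows > 0
    begin
        begin tran
        delete from big_table
            where load_date < dateadd(dd, -90, getdate())
        select @rows = @@rowcount  -- capture the count before it is reset
        commit tran                -- keeps the log truncatable, releases locks
    end
    set rowcount 0                -- restore the default
    go
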
I hope that helps.

Cheers

Herve Meftah
IT consultant

 
 
 

Large deletes/inserts against large tables....

Post by news.deshaw.co » Wed, 16 Sep 1998 04:00:00


Try table partitioning and the parallel bcp option(s).  When creating the
clustered index, use the "consumers = N" clause of the create clustered
index statement to parallelize the sort.
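
Roughly, assuming ASE 11.x syntax (the names and the partition/consumer
counts are placeholders; check the docs for your exact version):

    -- slice the table so parallel bcp can load each slice separately
    alter table big_table partition 4
    go
    -- each bcp session then targets one partition, e.g. from the shell:
    --   bcp mydb..big_table:1 in part1.dat -c -Uuser -P*** &
    -- parallel sort when (re)building the clustered index
    create clustered index big_table_cix
        on big_table (key_col)
        with consumers = 4
    go
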
-chowdari



> We've got an 18 million row table into which we insert anywhere from 25k
> to 100k rows per night, and from which, on a nightly basis, we delete
> (and archive) approximately 500k to 700k rows.

> At one time we could insert or delete approx. 100k rows per hour, but now
> performance has degraded to approx. 10k rows per hour.

> The table has a clustered index and 4 other indexes that have existed
> on the table since it was created.  The base table itself occupies
> approx. 5 GB of disk.

> Does anyone have any ideas, or know of any documentation that we could
> look at to understand what is happening, how to fix it, and how to
> systematically monitor the table for such problems?

> thanks in advance, mike