Deleting Large number of Rows

Deleting Large number of Rows

Post by Peter Stryjewski » Fri, 19 Feb 1999 04:00:00



How can I quickly delete large numbers of rows?

Currently, I have a list of "index" values (that are a retrieval key).

I "pipe" these commands into a sqlplus session.

delete from <table> where <index> = value;

after 5000 deletes, I put in a commit;

These aren't really grouped together, so each one is a separate delete
statement.
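
The generated script ends up looking roughly like this (a sketch of the pattern just described; the table, column, and key values are placeholders, not the real names):

    delete from my_table where my_key = 1001;
    delete from my_table where my_key = 1057;
    -- ... one delete statement per key value in the list ...
    delete from my_table where my_key = 9984;
    commit;  -- issued after every 5000 deletes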

Pete Stryjewski

 
 
 

Deleting Large number of Rows

Post by Neil Ma » Sat, 20 Feb 1999 04:00:00


It would be much better if you could process multiple rows in a single SQL
statement, then commit. Maybe you could write a PL/SQL block that opens
a cursor on the table holding the list of rows to delete, then commits
every 500 or so. Take advantage of array processing.
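
A minimal PL/SQL sketch of that idea, assuming a driving table called rows_to_delete with an id column and a target table my_table with key column my_key (all of these names are assumptions, not from the thread), using the bulk-bind features of newer PL/SQL releases:

    DECLARE
        CURSOR c IS SELECT id FROM rows_to_delete;
        TYPE t_ids IS TABLE OF rows_to_delete.id%TYPE INDEX BY BINARY_INTEGER;
        v_ids t_ids;
    BEGIN
        OPEN c;
        LOOP
            -- Fetch a batch of keys, delete them with one array DELETE,
            -- then commit once per batch instead of once per row.
            FETCH c BULK COLLECT INTO v_ids LIMIT 500;
            EXIT WHEN v_ids.COUNT = 0;
            FORALL i IN 1 .. v_ids.COUNT
                DELETE FROM my_table WHERE my_key = v_ids(i);
            COMMIT;
        END LOOP;
        CLOSE c;
    END;
    /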

If you want to get fancier, you can divide the deletes up and run several
processes deleting rows concurrently.
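
One simple way to split the work (a sketch only; the four-way split by MOD of the key is just one possible scheme and is not from the original post) is to give each session the subset of keys whose MOD value matches its slot:

    -- Session 1 of 4; sessions 2-4 run the same statement with
    -- MOD(id, 4) = 1, 2 and 3 respectively.
    DELETE FROM my_table
     WHERE my_key IN (SELECT id FROM rows_to_delete WHERE MOD(id, 4) = 0);
    COMMIT;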

If you are deleting a large fraction of the table, you could drop the
indexes, do the delete (driving off the table of rows to delete), then
rebuild the indexes. You will find that maintaining the index entries
takes much longer than deleting the rows themselves.
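
In SQL terms the sequence would look something like this (index, column, and table names are placeholders):

    -- 1. Drop the secondary indexes so the delete does not maintain them.
    DROP INDEX my_table_ix1;

    -- 2. Do the delete, driving off the table of keys to remove.
    DELETE FROM my_table
     WHERE my_key IN (SELECT id FROM rows_to_delete);
    COMMIT;

    -- 3. Rebuild the indexes afterwards.
    CREATE INDEX my_table_ix1 ON my_table (some_column);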

Another alternative is to copy the rows you are NOT deleting to a new
table, then drop the original table and rename the temp table.
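
A sketch of the copy-and-rename approach (table names are placeholders; note that indexes, constraints, and grants have to be recreated on the new table):

    CREATE TABLE my_table_keep AS
        SELECT * FROM my_table
         WHERE my_key NOT IN (SELECT id FROM rows_to_delete);

    DROP TABLE my_table;
    RENAME my_table_keep TO my_table;
    -- Recreate indexes, constraints, and grants here.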

Andrew.


> How can I quickly delete large numbers of rows?

> Currently, I have a list of "index" values (that are a retrieval key).

> I "pipe" these commands into a sqlplus session.

> delete from <table> where <index> = value;

> after 5000 deletes, I put in a commit;

> These aren't really grouped together, so each one is a separate delete
> statement.

> Pete Stryjewski



 
 
 

Deleting Large number of Rows

Post by MStout24 » Sun, 21 Feb 1999 04:00:00


Hi,

    If you are trying to get rid of all the rows, use truncate.

    Another way to do batches is:

    delete from <table> where rownum <= 5000;
    commit;

    Use that in place of each group.
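
Wrapped in a loop, that batching pattern looks roughly like this (a PL/SQL sketch; the subquery against rows_to_delete stands in for whatever condition identifies the rows to remove):

    BEGIN
        LOOP
            DELETE FROM my_table
             WHERE my_key IN (SELECT id FROM rows_to_delete)
               AND ROWNUM <= 5000;
            EXIT WHEN SQL%ROWCOUNT = 0;
            COMMIT;
        END LOOP;
        COMMIT;
    END;
    /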

Mike Stout

 
 
 

Deleting large numbers of rows

Our application records events as they occur in "real time".  We record
approximately 200 thousand transactions per day (or more) and keep 9 months
of storage online for retrieval.  When these transactions age beyond the 9
months, I copy them to a flat text file and then delete the rows from the
database.

My problem is this: if I run a script that does the delete in one large
statement, the server complains that it is out of locks. To overcome this
complaint, I modified the script to do:

WHILE ( SELECT COUNT(*) FROM History ) > 0
    DELETE FROM History WHERE History.id IN (
        SELECT TOP 100 T1.id FROM History T1
    )

The id column in the history table is the primary index and an IDENTITY
column. This certainly gets me past the lock problem, but now it takes days
to delete a month's worth of transactions.
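
One variant that tends to run much faster (a hedged T-SQL sketch, not from the original post: the @cutoff variable and the event_time column are hypothetical stand-ins for whatever identifies the aged rows) is to use a much larger batch size and let ROWCOUNT bound each transaction:

    -- Delete in batches of 5000; each DELETE runs as its own
    -- transaction, so the lock count per transaction stays bounded.
    DECLARE @cutoff datetime
    SELECT @cutoff = DATEADD(month, -9, GETDATE())

    SET ROWCOUNT 5000
    WHILE 1 = 1
    BEGIN
        DELETE FROM History WHERE event_time < @cutoff
        IF @@ROWCOUNT = 0 BREAK
    END
    SET ROWCOUNT 0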

Does anyone have a method of deleting a large number of rows quickly and
without lock errors?

Many thanks in advance,
Gary Johnson
