Tempdb can't handle large bulk insert operations

Tempdb can't handle large bulk insert operations

Post by David J. Berma » Wed, 25 Jun 2003 22:36:25



Hello, I'm having trouble with my tempdb, perhaps someone can help?

I have a database with 30 M records on SQL server ORG.  I want to insert
those values into a table on linked SQL server DEST.  I can't even get
through 10,000 records - the query fails saying my tempdb is full.  The
thing is, there is 10 GB free on the drive with the tempdb on it and 70 GB
free on the drive where the database files are.  What's worse, is that even
though I specified that the tempdb should be able to grow dynamically, it
doesn't seem to be growing at all.

Is there a way that I can complete this import?  Each row is actually pretty
small, less than 255 bytes.  Can I bypass transaction log or something?  If
its not the tempdb then I run out of room in the transaction log.  It takes
45 min to 2 hours each time my insert fails.

I'm trying as an alternative approach using a curser to move one record at a
time, and also a while loop using select top 1 to grab one record at a time
and move it (insert into DEST then delete from ORG).  However, these still
seem to wrap the operation into a transaction, even with set
IMPLICIT_TRANSATION OFF.

Thanks!

Frustrated!

david_berman(at)bose.com

 
 
 

Tempdb can't handle large bulk insert operations

Post by Dave » Thu, 26 Jun 2003 03:18:30


Hi David

See my response to your other post regarding the transaction log.  As
for tempdb, what are you doing that would even use it?  How are you
inserting your records?  Can you post your code?

David Simard


Quote:> Hello, I'm having trouble with my tempdb, perhaps someone can help?

> I have a database with 30 M records on SQL server ORG.  I want to insert
> those values into a table on linked SQL server DEST.  I can't even get
> through 10,000 records - the query fails saying my tempdb is full.  The
> thing is, there is 10 GB free on the drive with the tempdb on it and 70 GB
> free on the drive where the database files are.  What's worse, is that even
> though I specified that the tempdb should be able to grow dynamically, it
> doesn't seem to be growing at all.

> Is there a way that I can complete this import?  Each row is actually pretty
> small, less than 255 bytes.  Can I bypass transaction log or something?  If
> its not the tempdb then I run out of room in the transaction log.  It takes
> 45 min to 2 hours each time my insert fails.

> I'm trying as an alternative approach using a curser to move one record at a
> time, and also a while loop using select top 1 to grab one record at a time
> and move it (insert into DEST then delete from ORG).  However, these still
> seem to wrap the operation into a transaction, even with set
> IMPLICIT_TRANSATION OFF.

> Thanks!

> Frustrated!

> david_berman(at)bose.com


 
 
 

Tempdb can't handle large bulk insert operations

Post by David J. Berma » Thu, 26 Jun 2003 04:30:13


Hi Dave,
    Thanks so much for your help.  I'm using linked server, so from the
destination database server DEST my query looks like this:
(forgive me if my SQL coding stinks but I still have so much to learn!)

Here is what I was trying to do that made such a mess of my tempdb (don't
know why!!!):
(This isn't my exact exact query because I can't access that other machine
from where I am right now, but
  this is almost exactly it.  The query would run but only could succeed if
there was MASSIVE space avail for temp storage,
  otherwise I get a lack of space in device for tempdb error when theres 60+
gb free on the drive).
    insert into email_subscribers (EmailAddress, OptInDate, Origin)
            select email, (select GetDate() as OptInDate), 'WEB SITE
SUBSCRIBE - NOT SPAM' as Origin
        from ORG.import.dbo.emailaddrs
                where not exists email in (select EmailAddress from
email_subscribers where EmailAddress <> email)

----------  This solution kinda works but as you can guess is a little slow
as I have 30,000,000 records




-- Use join to filter out duplicates

                                 inner join email_subscribers as S on
S.EmailAddress <> email)

BEGIN
    insert into email_subscribers (EmailAddress, OptInDate, Origin)

    -- Delete doesn't seem to work against the linked server for some reason
so I actually comment that out
    -- but my plan was to actually MOVE one record at a time


                                 inner join email_subscribers as S on
S.EmailAddress <> email)
END


> Hi David

> See my response to your other post regarding the transaction log.  As
> for tempdb, what are you doing that would even use it?  How are you
> inserting your records?  Can you post your code?

> David Simard




Quote:> > Hello, I'm having trouble with my tempdb, perhaps someone can help?

> > I have a database with 30 M records on SQL server ORG.  I want to insert
> > those values into a table on linked SQL server DEST.  I can't even get
> > through 10,000 records - the query fails saying my tempdb is full.  The
> > thing is, there is 10 GB free on the drive with the tempdb on it and 70
GB
> > free on the drive where the database files are.  What's worse, is that
even
> > though I specified that the tempdb should be able to grow dynamically,
it
> > doesn't seem to be growing at all.

> > Is there a way that I can complete this import?  Each row is actually
pretty
> > small, less than 255 bytes.  Can I bypass transaction log or something?
If
> > its not the tempdb then I run out of room in the transaction log.  It
takes
> > 45 min to 2 hours each time my insert fails.

> > I'm trying as an alternative approach using a curser to move one record
at a
> > time, and also a while loop using select top 1 to grab one record at a
time
> > and move it (insert into DEST then delete from ORG).  However, these
still
> > seem to wrap the operation into a transaction, even with set
> > IMPLICIT_TRANSATION OFF.

> > Thanks!

> > Frustrated!

> > david_berman(at)bose.com

 
 
 

1. Error when bulk insert follows another large bulk insert

I am running a job out of MS SQL Enterprise Manager where the first step
turns off transaction logging, the 2nd step is a large bulk insert step,
and the 3rd step is another large bulk insert step.  I receive the
following error when executing the 3rd step.  Is this error occurring
because the database is still comitting data from the previous large bulk
insert at the same time as it is performing the next bulk insert?  If so,
is there a workaround?  Thank you for any help you can provide.

Error message from Step 3 (2nd bulk insert):
Backup, CHECKALLOC, bulk copy, SELECT INTO, and file manipulation
(such as CREATE FILE) operations on a database must be serialized.
Reissue the statement after the current backup, CHECKALLOC, or
file manipulation operation is completed. [SQLSTATE 42000] (Error 3023)  
The statement has been terminated. [SQLSTATE 01000] (Error 3621).  
The step failed.

--
Posted via CNET Help.com
http://www.help.com/

2. INFORMIX/C/UNIX CONTRACT ASSIGNMENT ON SITE WITH CLIENT 6 MONTHS

3. can't handle large inserts into table with Access ODBC

4. MSDE or JET

5. having a prob with bulk insert operation

6. Need a search engine plug-in for VB5

7. Is there a need to rebuild indexes after bulk insert operation

8. sp_sysmon

9. Roll back multiple BULK INSERT operations

10. Performance of 'bulk' inserts

11. Bulk Insert OLE DB 'STREAM' error

12. Error handling with bulk insert.

13. Bulk Insert Generate lots of '??????'