As an aside, for such a large bcp in job, is there any reason why you
didn't partition the table and do the bcp in multiple parallel
sessions?
-bret
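For the record, a rough sketch of the parallel load Bret is suggesting, assuming the table has been partitioned into four slices, the input file has already been split into four pieces, and an ASE version (11.5 or later) where the table:slice form of bcp is available; mydb, big_table, MYSERVER, loader and the file names are all placeholders:

    # one bcp session per partition slice, all running in parallel
    bcp mydb..big_table:1 in data.part1 -Uloader -Ppwd -SMYSERVER -c -b 1000 &
    bcp mydb..big_table:2 in data.part2 -Uloader -Ppwd -SMYSERVER -c -b 1000 &
    bcp mydb..big_table:3 in data.part3 -Uloader -Ppwd -SMYSERVER -c -b 1000 &
    bcp mydb..big_table:4 in data.part4 -Uloader -Ppwd -SMYSERVER -c -b 1000 &
    wait

Each session needs its own input file, and 'select into/bulkcopy/pllsort' still has to be enabled in the database for the load to run as fast bcp.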
First - check your exact version of ASE and bcp. Check if you need
an EBF.
Second - do you have triggers and are they disabled?
Third - what about your devices?
Fourth - monitor with sp_sysmon and check the system's performance
too.
Fifth - what's your batch size? And packet size (since you're using
text)?
Sixth - have you done any server config tuning for bcp?
-am, 2002
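The first and fourth of these checks are easy to script; a minimal isql batch (run with whatever server and login you normally use), using the standard @@version global for the exact ASE release and sp_sysmon to sample the server for ten minutes while the bcp is running:

    select @@version        -- exact ASE version / EBF level (first point)
    go
    sp_sysmon "00:10:00"    -- 10-minute sample: check the disk I/O and network sections (fourth point)
    go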
1) Make certain that the 'select into/bulkcopy/pllsort' option is set
   - use sp_dboption <db>, 'select into/bulkcopy/pllsort', true and then
     checkpoint the database.
2) If the batch size is large, set the 'truncate log on chkpt' option.
   - rule of thumb: batch size = 960 * <rows per page> * <# of pages in
     the pre-allocated extents you have>
3) Use the -A option of bcp to set the packet size.
   - you have to play around with 'additional network memory' at
     <# parallel bcps> * <packet size> * 3, and with the
     'max network packet size' configuration as well (default is 512).
4) Configure 'number of pre-allocated extents'.
   (A command-level sketch of 1-4 follows below.)
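A sketch of what 1) through 4) look like as commands, with illustrative numbers only (four parallel sessions at a 4096-byte packet size); run the sp_dboption calls from master, and note that 'max network packet size' is a static parameter in many versions, so it may need a server restart:

    -- 1) and 2): 2) is optional ('trunc log on chkpt' is the sp_dboption name)
    sp_dboption mydb, 'select into/bulkcopy/pllsort', true
    go
    sp_dboption mydb, 'trunc log on chkpt', true
    go
    use mydb
    go
    checkpoint
    go
    -- 3) network settings: e.g. 4 bcps * 4096 bytes * 3 = 49152
    sp_configure 'max network packet size', 4096
    go
    sp_configure 'additional network memory', 49152
    go
    -- 4) extents pre-allocated during bulk copy
    sp_configure 'number of pre-allocated extents', 16
    go

On the client side the matching invocation would then be along the lines of bcp mydb..big_table in data.bcp -c -b 5000 -A 4096, plus the usual -U/-P/-S options.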
Some of these you may want only for the duration of the bcp, as some have
trade-offs between the performance of the bcp and the performance of indices.
Take a look at the data: is there more text toward the end of the table?
4 million rows on a good server should take no more than 30-40 minutes
with the data you are describing here. Check the log and the options
that are set. Is the bcp running from the local machine? Don't bother
with update statistics until after the load. Also, this is often
overlooked: make sure the database has at least twice as much free
space as the amount of data you are loading in. This will help.
Hope this helps.
Eric
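To check Eric's free-space point before kicking off the load, the standard space procedures will report how much room the database actually has (mydb is a placeholder):

    use mydb
    go
    sp_spaceused       -- database totals: size, reserved, data, index, unused
    go
    sp_helpdb mydb     -- overall size and device fragments, in case you need to alter the database
    go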
What I found intriguing about the problem is that it runs slower and
slower.

Quote:
> There are a few things that can cause the bcp to slow. Of course the
> obvious is the indices and the triggers, but you said that both of
> those are not present.
> Take a look at other things like,
> << stuff snipped >>
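If you want to verify the "no indexes, no triggers" assumption on the target table itself, this is quick to check; big_table and mydb are placeholders, and the query assumes the instrig/updtrig/deltrig columns of ASE's sysobjects, which hold the object IDs of a table's triggers:

    use mydb
    go
    sp_helpindex big_table
    go
    select instrig, updtrig, deltrig
      from sysobjects
     where name = 'big_table' and type = 'U'
    go

All three columns should be 0 if no triggers are attached.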
Unique Index gets slower and slower and slower for inserts
Hi,
If I have a table with a unique index/constraint on a GUID column (which
includes using it as a primary key), then inserts to the table get
progressively slower as more records are added.
However, if I specify that the index is non-unique, then this slowing effect
is either not noticeable or not present at all.
Is this slowing normal, and is there anything I can do about it, other
than dropping the unique constraint from the index?
Cheers,
Steve.
Remove the 'NOSPAM.' to send me e-Mail
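No answer is archived with Steve's question, but the setup he describes is simple to write down; a minimal sketch (table, column and index names are invented, and the GUID is stored as varchar(36) here for portability, where SQL Server's native uniqueidentifier type would be the usual choice):

    -- variant Steve reports as getting progressively slower
    create table t_unique (guid_col varchar(36) not null, data varchar(100) null)
    go
    create unique index ux_guid on t_unique (guid_col)
    go

    -- variant he reports as showing no noticeable slowdown
    create table t_plain (guid_col varchar(36) not null, data varchar(100) null)
    go
    create index ix_guid on t_plain (guid_col)
    go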