Processing Some but Not All records

Processing Some but Not All records

Post by Greg Larse » Sat, 27 Apr 2002 02:21:44



How would one go about writing a DTS package that would
take a flat file and load a database table, but selectively
load records based on the data?

Currently I have a CSV flat file that has, say,
10,000 records.  Daily I get a new flat file with a new
set of 10,000 records.  This flat file occasionally
contains some bad data on a handful of records, say 10-
20.  Is there a way to process the 10,000 records and
skip the 10-20 records that are bad?  The bad records are
easy to identify, since all the fields are blank.

 
 
 

Processing Some but Not All records

Post by Claus Busk Anderse » Sat, 27 Apr 2002 04:36:59


You could return DTSTransformStat_SkipRow if you are using the data pump for
this task.
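
Roughly, the check inside an ActiveX transformation could look like the
sketch below.  The source column names (Col001-Col003) and the destination
columns are only placeholders for whatever your flat file and table actually
contain; the DTSSource/DTSDestination collections and the DTSTransformStat_*
constants are the standard data pump script objects.

' ActiveX transformation sketch: skip rows where every field is blank.
Function Main()
    Dim blank
    blank = True

    ' A "bad" row has nothing in any of its source fields.
    If Trim(DTSSource("Col001") & "") <> "" Then blank = False
    If Trim(DTSSource("Col002") & "") <> "" Then blank = False
    If Trim(DTSSource("Col003") & "") <> "" Then blank = False

    If blank Then
        ' Tell the data pump to ignore this row entirely.
        Main = DTSTransformStat_SkipRow
    Else
        ' Otherwise copy the columns across as usual.
        DTSDestination("CustomerID") = DTSSource("Col001")
        DTSDestination("OrderDate") = DTSSource("Col002")
        DTSDestination("Amount") = DTSSource("Col003")
        Main = DTSTransformStat_OK
    End If
End Function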

Regards
Claus


Quote:> How would one go about writing a DTS package that would
> take a flat file and load a database table, but selectively
> load records based on the data?

> Currently I have a CSV flat file that has, say,
> 10,000 records.  Daily I get a new flat file with a new
> set of 10,000 records.  This flat file occasionally
> contains some bad data on a handful of records, say 10-
> 20.  Is there a way to process the 10,000 records and
> skip the 10-20 records that are bad?  The bad records are
> easy to identify, since all the fields are blank.


 
 
 

Processing Some but Not All records

Post by Allan Mitchell » Sat, 27 Apr 2002 05:53:57


You could load the text file into a scratch table in SQL Server and then query that scratch
table into your proper destination (sketched below)

OR

In your transformations you could check for a field being "bad" and have DTS return
DTSTransformStat_SkipRow
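
For the first option, once the whole file has been bulk loaded into a scratch
(staging) table, a filtered INSERT...SELECT moves only the good rows on.  A
minimal sketch, using placeholder table, column and connection names that you
would replace with your own:

' Sketch: copy everything except the blank rows from the scratch table
' into the real destination table.
Dim cn, sql
Set cn = CreateObject("ADODB.Connection")
cn.Open "Provider=SQLOLEDB;Data Source=(local);" & _
        "Initial Catalog=MyDB;Integrated Security=SSPI;"

sql = "INSERT INTO dbo.OrderImport (CustomerID, OrderDate, Amount) " & _
      "SELECT CustomerID, OrderDate, Amount " & _
      "FROM dbo.ScratchDaily " & _
      "WHERE LTRIM(RTRIM(ISNULL(CustomerID, ''))) <> ''"   ' skip the blank rows

cn.Execute sql
cn.Close

In a DTS package the same statement would normally just go in an Execute SQL
task that runs after the load into the scratch table.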

--Allan


>How would one go about writing a DTS package that would
>take a flat file and load a database table, but selectively
>load records based on the data?

>Currently I have a CSV flat file that has, say,
>10,000 records.  Daily I get a new flat file with a new
>set of 10,000 records.  This flat file occasionally
>contains some bad data on a handful of records, say 10-
>20.  Is there a way to process the 10,000 records and
>skip the 10-20 records that are bad?  The bad records are
>easy to identify, since all the fields are blank.

Allan Mitchell
-----------------------
www.allisonmitchell.com
Visit the site for DMO and DTS code and articles
 
 
 

Processing Some but Not All records

Post by David Sheaffer » Fri, 03 May 2002 04:11:50


Greg,
  You could also enable the exception file and max error count
properties of the transformation task.  The import will continue until the
max error count is reached, and rows that fail to be imported will be
written to the exception file.
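
If you want to set those from code rather than in the DTS Designer, a rough
sketch against the DTS object model follows.  The package file path and the
task name are placeholders for your own package:

' Sketch: turn on the exception file and raise the error limit on the
' data pump task of an existing package.  Paths and task name are placeholders.
Dim oPkg, oPump
Set oPkg = CreateObject("DTS.Package")
oPkg.LoadFromStorageFile "C:\DTS\LoadDailyFile.dts", ""

Set oPump = oPkg.Tasks("DTSTask_DTSDataPumpTask_1").CustomTask

oPump.MaximumErrorCount = 50                       ' keep going until 50 bad rows
oPump.ExceptionFileName = "C:\DTS\LoadDailyFile.err"

oPkg.SaveToStorageFile "C:\DTS\LoadDailyFile.dts"
oPkg.UnInitialize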


This posting is provided 'AS IS' with no warranties, and confers no rights.

 
 
 

1. Full Process does not process all the records in the Fact Table

Hello:

I have a cube that was populated from a SQL Server fact
table on Friday. This table had 816,000 rows.

This morning, I added some 170,000+ rows to this fact
table.

I chose Process/Full, hoping that it would clear the cube
and import all 980,000+ rows of data.

But somehow it remembers the 816,000 rows, and that is all
it imports into the cube.

If I choose Process/Reprocess, I get the same result.

If I choose Process/Incremental, I get the 816,000+ rows
duplicated.

Am I doing something wrong?

This is what I want to do:

Clear the cube and rebuild it with the 980,000+ rows of
data I have.

Thanks.

Venkat

2. mmc give runtime error in database.htm

3. Bulk Insert does not process all records

4. Get Fast Cash !!!

5. Error: Process ID %d is not an active process ID

6. Simple DBList question

7. Msg 6106,Sev 16: Process ID 67 is not an active process ID.

8. ODBC

9. Msg 6106,Process ID %d is not an active process ID.

10. Help: Processing partition does not process private dimension

11. DBGrid - before-record, after-record processing?

12. Record-by-record processing

13. SQL Question record by record processing