Migrating Databases w/ 2GB Filesize Limit

Post by Doug Hay » Sun, 31 Dec 1899 09:00:00



When we try to unload large tables to a UNIX flat file, the process
aborts with AIO error -27 (i.e., O/S file size exceeded).  AIX 4.3.2
uses 64-bit addressing, so there should be no 2GB file size limit.  We
realize Informix has a 2GB file limit when archiving to tape, but not
to a file system.  High-Performance Loader also fails when unloading
to a single file in the device array; however, when unloading in
parallel to several files (each remaining under 2GB), it works fine.
The database supports a PeopleSoft implementation, so we cannot
remodel the data design.  Our ultimate goal is to archive the
production database in order to refresh the development, migration,
and test databases.  It consists of roughly 10,000 tables, several of
which are over 2GB.
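As a sketch of the kind of workaround we are after (not something we
have running): unload through a named pipe and let standard UNIX tools
cut the stream into sub-2GB pieces.  The database, table, and path
names below are hypothetical, as is the dbaccess invocation.

```shell
# Sketch only: dodge the per-file 2GB limit by unloading through a
# named pipe and splitting the stream into sub-2GB chunks.
# Database, table, and path names are hypothetical.
mkfifo /tmp/unload.pipe

# Reader side: cut the stream into 1900MB pieces (big_table.aa, .ab, ...)
split -b 1900m - /archive/big_table. < /tmp/unload.pipe &

# Writer side: Informix unloads into the pipe instead of a flat file
echo 'UNLOAD TO "/tmp/unload.pipe" SELECT * FROM big_table;' \
    | dbaccess proddb -

wait
rm -f /tmp/unload.pipe

# To reload, stream the chunks back through a pipe:
#   cat /archive/big_table.* > /tmp/load.pipe &
#   then LOAD FROM "/tmp/load.pipe" in dbaccess
```

Since no single file ever exceeds the limit, this sidesteps the AIO
error without touching the data model.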

Basically, High-Performance Loader is out of the question, since
mapping flat files to the table schema for a load job would be too
labor-intensive for a database this size.  However, if anyone has
ideas on how to automate the mapping by scripting against the onpload
metadata database directly, we would appreciate the advice.  We have
also looked at HDR (High-Availability Data Replication), but that is a
server-level utility; we need to work at the database level in order
to change table ownerships in the unload files (required by PeopleSoft
for various purposes).
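On the scripting question, one hedged sketch (not driving onpload
itself) is to generate a per-table UNLOAD script straight from the
systables catalog; tabid > 99 excludes the system catalog tables.
"proddb" and the /archive path are hypothetical, and the awk filter
assumes dbaccess's default one-value-per-line output.

```shell
# Sketch only: build one UNLOAD statement per user table from the
# Informix systables catalog instead of hand-mapping 10,000 tables.
# "proddb" and /archive are hypothetical names.
echo 'SELECT tabname FROM systables WHERE tabid > 99;' \
    | dbaccess proddb - 2>/dev/null \
    | awk 'NF == 1 && $1 != "tabname" {
        printf "UNLOAD TO \"/archive/%s.unl\" SELECT * FROM %s;\n", $1, $1
      }' > unload_all.sql

dbaccess proddb unload_all.sql
```

The same pattern could emit matching LOAD statements, with the owner
qualifier rewritten on the way, which is where our PeopleSoft
ownership requirement comes in.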

We have also had no success with onunload, onbar and ontape.

Thanks in advance for any help rendered!

Doug Hayes
Database Administrator, Senco Fasteners


1. 2GB filesize, large disks and splitting tablespaces

A post I just saw of Niall's reminded me of something.  I'm on
Solaris 7 / Oracle 8.1.6 (SPARC) and remember the dire warnings in
days of yore about UNIX 'largefiles', i.e. files over 2GB in size.

P.S. Volume Manager 3.0.4, VxFS 3.3.2

I know Solaris 7 has support for them, and I (think I) know Oracle is
fine with it, but in this combination? Yay? Nay? Eh? Horror stories?
Successes?
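One cheap sanity check before trusting a mount with a large datafile
(a sketch, nothing Oracle-specific; the path is hypothetical): seek
past the 2GB mark and write a single byte.  If the filesystem was not
mounted with largefile support, the write fails.

```shell
# Sketch: probe whether a filesystem will actually take a file over
# 2GB.  Writes one byte at offset 2^31, so the probe file is sparse
# and costs almost no disk.  The path is hypothetical.
f=/vxfs01/largefile.probe
if dd if=/dev/zero of="$f" bs=1 count=1 seek=2147483648 2>/dev/null; then
    echo "largefiles OK ($(wc -c < "$f") bytes)"
else
    echo "stuck at the 2GB limit"
fi
rm -f "$f"
```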

In fact, what is the current thinking these days regarding large
disks? The reason I ask is that at the moment the array is populated
with 4GB disks (!), but the new array coming is 18GB disks.

As an *example*: suppose I have an 18GB disk, and at the moment I have
3x 2GB files making a 6GB tablespace.  Assuming, *just for the sake of
argument*, that I had to put these on the one disk, would you still
place them in 3 files on that disk, or in one file?

Any advantages one way or the other, d'ya think? Is there such a thing
as 'datafile header contention'? Are the db_writers more 'intelligent'
with multiple files?

Discuss....

Andrew :o)
