overhead of "small" large objects

Post by Denis Perchine » Tue, 12 Dec 2000 23:31:17

Quote:> > Is there significant overhead involved in using large objects that
> > aren't very large?

> Yes, since each large object is a separate table in 7.0.* and before.
> The allocation unit for table space is 8K, so your 10K objects chew up
> 16K of table space.  What's worse, each LO table has a btree index, and
> the minimum size of a btree index is 16K --- so your objects take 32K
> apiece.

> That accounts for a factor of 3.  I'm not sure where the other 8K went.
> Each LO table will require entries in pg_class, pg_attribute, pg_type,
> and pg_index, plus the indexes on those tables, but that doesn't seem
> like it'd amount to anything close to 8K per LO.

> 7.1 avoids this problem by keeping all LOs in one big table.

Or you can use my patch for the same functionality in 7.0.x.
You can get it at: http://www.perchine.com/dyp/pg/
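The per-object figures quoted above can be checked with a quick back-of-envelope calculation. This is just a sketch of the arithmetic described in the quoted reply (8K allocation unit per table page, 16K minimum for a btree index), not a measurement of any actual installation:

```python
# Per-object disk usage for a ~10K large object in PostgreSQL 7.0.x,
# where each LO is its own table plus its own btree index.
PAGE = 8 * 1024          # table space allocation unit: 8K pages
BTREE_MIN = 16 * 1024    # minimum size of a btree index: 16K

payload = 10 * 1024      # one ~10K seismic chunk

# the payload rounds up to whole 8K pages: 10K -> 2 pages -> 16K
table_pages = -(-payload // PAGE)   # ceiling division
table_bytes = table_pages * PAGE

# add the index minimum to get the total footprint per object
total = table_bytes + BTREE_MIN
print(total // 1024)  # -> 32 (KB per object, a factor ~3 over the 10K payload)
```

That reproduces the factor of 3 from the reply; as noted there, the catalog entries (pg_class, pg_attribute, pg_type, pg_index and their indexes) add more, though not obviously the remaining 8K.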

Sincerely Yours,
Denis Perchine


HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5


1. overhead of "small" large objects


I'm putting lots of small (~10 KB) chunks of binary seismic data into large
objects in postgres 7.0.2. Basically just arrays of 2500 or so ints that
represent about a minute's worth of data. I put in the data at the rate of
about 1.5 MB per hour, but the disk usage of the database is growing at
about 6 MB per hour! A factor of 4 seems a bit excessive.

Is there significant overhead involved in using large objects that aren't
very large?

What might I be doing wrong?

Is there a better way to store these chunks?
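One alternative at this size is to avoid large objects entirely: pack each minute's samples into a single binary string and store that in an ordinary column. A minimal sketch of the packing step, assuming 2500 four-byte ints per chunk as described above (the column type and insert mechanics are left out, since they depend on the server version):

```python
import struct

# ~2500 four-byte ints ~= 10 KB, about a minute of data per the post
samples = list(range(2500))

# pack into one binary blob (little-endian 32-bit ints)
blob = struct.pack(f"<{len(samples)}i", *samples)
assert len(blob) == 10000  # 2500 ints * 4 bytes

# unpack on the way back out
restored = list(struct.unpack(f"<{len(blob) // 4}i", blob))
assert restored == samples
```

Stored this way, each chunk costs only its own bytes plus normal row overhead, rather than a dedicated table and index per chunk.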

