What optimisation should I use?

Post by Steve Holdow » Fri, 02 Aug 1996 04:00:00



Dear All,

We are developing a telecomms application which requires very fast
access to a single table. This table contains about 3.5 million rows
of about 750 bytes each.

There is a definite profile to the usage of these records... some can
be accessed hundreds of times more often than others. In order to get
the best performance from this table, we have allocated an SGA of
about 3/4 Gig in which to keep those used most often.

What I'm not sure about is how Oracle decides which rows to keep in
memory, as obviously we can't fit all of them in! Should I enforce
cost-based optimisation, or does this only decide on the best way to
join tables together? In other words, what can this optimiser do with
the statistics it saves?
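
By "statistics" I mean what ANALYZE collects; a minimal sketch of
gathering them (Oracle 7 syntax), assuming our table is called
call_records:

    -- Exact statistics; expensive on 3.5 million rows:
    ANALYZE TABLE call_records COMPUTE STATISTICS;

    -- Or estimate from a sample, which is much cheaper:
    ANALYZE TABLE call_records ESTIMATE STATISTICS SAMPLE 10 PERCENT;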

TIA

Steve
EX Ingres DBA!

What optimisation should I use?

Post by Danny Roosen » Sat, 03 Aug 1996 04:00:00


Just an idea, since speed seems to come before storage.

Perhaps it would be worthwhile to count, for each record, how many times it is selected.
On a regular basis, a small(er) table could then be filled with the most requested records.

Your application could then check this small table first (very fast) and,
if the search fails, fall back to the bigger table.

In a way this is a simulation of what happens internally, but it could yield good
results.
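
A minimal sketch of that periodic refresh, assuming the big table is
called big_table, carries a hit_count column maintained by the
application, and the hot set lives in hot_records (all names and the
threshold are illustrative):

    -- One-time setup: an empty copy of the big table's structure.
    CREATE TABLE hot_records AS
      SELECT * FROM big_table WHERE 1 = 0;

    -- Periodic refresh: keep only the most requested records.
    TRUNCATE TABLE hot_records;
    INSERT INTO hot_records
      SELECT * FROM big_table
      WHERE hit_count > 1000;   -- threshold picked for illustration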

If you choose to consider this, please let me know.

What optimisation should I use?

Post by Louis Cohen » Sat, 03 Aug 1996 04:00:00


Oracle uses a least-recently-used (LRU) algorithm for the buffer cache,
i.e., when it needs more space in the cache, it evicts the blocks that
haven't been used for a while.

So, your frequently accessed rows should end up staying in the cache.

Depending on how these rows are accessed, you might consider using a
hash cluster (hash key lookup) rather than a conventional index.
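
A minimal sketch of such a hash cluster (Oracle 7 syntax), assuming
exact-match lookups on a numeric key; every name and size below is
illustrative, not from the original post:

    -- SIZE reserves roughly one 750-byte row per hash key;
    -- HASHKEYS allows about one hash key per row in the table.
    CREATE CLUSTER call_cluster (call_id NUMBER)
      SIZE 800
      HASHKEYS 3500000;

    CREATE TABLE calls (
      call_id   NUMBER,
      call_data VARCHAR2(700)
    ) CLUSTER call_cluster (call_id);

An equality predicate on call_id then hashes straight to the data
block, avoiding the index I/O of a B-tree lookup.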


>We are developing a telecomms application which requires very fast
>access to a single table. This table contains about 3.5 million rows
>of about 750 bytes.
>[snip]
>What I'm not sure about is how Oracle decides which rows to keep in
>memory, as obviously we can't fit all of them in!

Regards

Louis Cohen
San Leandro, CA

(ORACLE 7.1.3) Running out of temporary tablespace using cost-based optimization

        I am a DBA/SWE at AdValue Media Technologies. I am having a
problem performing a big query (three at the same time): I run out of
temporary tablespace. I have 140M of temporary tablespace, and each
extent is 5M. The database is about 1.2G.

        This does not happen with rule-based optimization.
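
A minimal sketch of one common fix, assuming the temporary tablespace
is named TEMP; the file path and all sizes are illustrative:

    -- Give big sorts more room to spill into:
    ALTER TABLESPACE temp
      ADD DATAFILE '/u02/oradata/temp02.dbf' SIZE 200M;

    -- Larger, uniform default extents cut allocation overhead
    -- during the sorts that cost-based plans introduce:
    ALTER TABLESPACE temp
      DEFAULT STORAGE (INITIAL 5M NEXT 5M PCTINCREASE 0);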

        My address is:

        uunet!advalue!manoj

        Thanks

        Manoj Saxena
