Re: Full scans of table bigger than buffer cache
On Tue, 24 Apr 2001 21:59:53 +0100, "terry norman" <scott_at_tiger9.freeserve.co.uk> wrote:
>Dear all,
>
>This question arose a long time ago in this ng when I was a very young DBA
>and someone kindly answered it but I can't remember what the answer was! If
>you do a full table scan on a table that's say three times as big as the
>buffer cache, does Oracle limit the number of blocks that get used for that
>scan, to leave some for others, or does it just take up all the buffer space
>it can, thereby depressing everybody else's hit rate? What would be the
>effect of setting the table as "cache"? Is there any point in using enough
>buffer cache to get the whole table in, o.s. paging permitting? In this
>particular case the table is about 15% of the data volume. I'm trying to
>minimise full scans, but they still occur, sometimes a few together.
>
>TIA for any advice / explanation.
>
>Regards,
>Terry.
>
When a block is not in the cache, it is not in the cache, and it will
have to be read from disk. Your full table scan is definitely going to
force everyone else to suffer. Also, chances are your own blocks have
already been aged out of the cache because some other process read a
block in, so Oracle needs to retrieve them from disk again.
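Since you say you are trying to minimise full scans, a rough way to see
how often long-table scans actually happen is something along these
lines (just a sketch; the statistic names are the standard V$SYSSTAT
ones, nothing here is specific to your system):

-- Counts of short vs. long table scans since instance startup;
-- "long" here means tables above Oracle's small-table threshold.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('table scans (short tables)',
                'table scans (long tables)');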
The effect of setting the table to CACHE would be minimal, as its blocks
are treated in the same fashion as in a full table scan: they are placed
at the least-recently-used end of the LRU list and so are aged out
almost immediately.
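For reference, the attribute itself is just a table option; the names
below (SCOTT.BIG_TABLE) are made up for illustration:

-- Mark the table for caching, check the setting, and undo it again.
ALTER TABLE scott.big_table CACHE;

SELECT table_name, cache
  FROM dba_tables
 WHERE owner = 'SCOTT'
   AND table_name = 'BIG_TABLE';

ALTER TABLE scott.big_table NOCACHE;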
I don't think throwing more memory at it is going to help either, as
there is no guarantee at all that the data will stay in the buffer
cache permanently.
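If you want to see for yourself how little of the table actually stays
cached, you can count its buffers in V$BH before and after one of those
scans (again, SCOTT.BIG_TABLE is an invented name, and the count is only
approximate because free or stale buffers may still carry the object id):

-- Approximate number of this table's blocks currently in the buffer cache.
SELECT COUNT(*) AS cached_blocks
  FROM v$bh b, dba_objects o
 WHERE b.objd = o.data_object_id
   AND o.owner = 'SCOTT'
   AND o.object_name = 'BIG_TABLE';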
Regards,
Sybrand Bakker, Oracle DBA

Received on Tue Apr 24 2001 - 17:11:45 CDT