Re: FLUSHING SELECTED data FROM MEMORY
The method I usually use is to create a dummy table larger
than the db_block_buffers, with an index, then run a select
statement to read in every block of the table through the index.
Set pctfree = 99 so that (in most cases) you only need one row per block. For example (using a hypothetical table name, dummy):

    create table dummy pctfree 99
    as select obj#, rpad(name,255) name from sys.obj$;

    create index dummy_i on dummy(obj#);

    select /*+ index(dummy) */ max(name) from dummy where obj# > 0;
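As a sanity check (a sketch, not from the original post: it assumes the dummy table is called dummy, owned by the current user, and has been analyzed so that the blocks column is populated), you can confirm the table really is larger than the buffer cache:

```sql
-- Compare the dummy table's block count with db_block_buffers.
-- t.blocks should exceed p.value for the flush trick to work.
select t.blocks          dummy_blocks,
       p.value           db_block_buffers
from   user_tables t,
       v$parameter p
where  t.table_name = 'DUMMY'
and    p.name       = 'db_block_buffers';
```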
To check that you are flushed, look at x$bh (if you have SYS rights): dbafil = file number of the critical table (in v7 - a different column is used in v8), dbablk = block ids of the critical table.
If you put the dummy table in its own file, then the easiest check is that a query for dbafil, count(*) shows a few blocks from file 1 and the rest from the dummy table's file.
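That check can be sketched as a query (run as SYS, since x$bh is visible only to SYS; column name dbafil as in the v7 layout described above - adjust for v8):

```sql
-- Count cached buffers per datafile.  After the flush you would
-- expect a few blocks from file 1 (the SYSTEM file) and the rest
-- from the file holding the dummy table.
select dbafil, count(*)
from   x$bh
group  by dbafil
order  by dbafil;
```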
Jonathan Lewis
Yet another Oracle-related web site: www.jlcomp.demon.co.uk
chandrasekar_at_my-dejanews.com wrote in message
<79jg4m$d28$1_at_nnrp1.dejanews.com>...
>Hi all, I am performing an Oracle benchmark on the time taken to retrieve
>data from the database. To do that, I need to run a lot of select
>statements on a particular table which has 200 columns and 4000 rows,
>with a record size of 20k.
>
> Since the tests need to be conducted many times, I need to flush the
>selected values from the database buffers to avoid cache hits. Is there a
>way to flush all the selected values from the database buffers without
>shutting down the database? Can checkpointing the database be an ideal
>solution to avoid cache hits? Please help...
>
Received on Sun Feb 07 1999 - 03:29:31 CST