c.d.o.server - Re: Strange behaviour
A few notes:

--
Vladimir Zakharychev (bob@dpsp-yes.com) http://www.dpsp-yes.com
Dynamic PSP(tm) - the first true RAD toolkit for Oracle-based internet applications.
All opinions are mine and do not necessarily go in line with those of my employer.

"Carlos Alberto" <calberto2312_at_hotmail.com> wrote in message news:72954535.0206200526.506280d7_at_posting.google.com...

Received on Fri Jun 21 2002 - 09:46:44 CDT
> Hi Vladimir,
>
> At first, thanks very much for your help. The batch SQL statements
> that I executed after this "small" table was generated ran much
> worse, about 2 or 3 times slower than before (with the "big"
> table). I don't think caching alone would do this. On the other
> hand, by the time I created the indexes I had already run "select
> count(1)", so the data blocks were already cleaned out. According
> to Oracle Support (I called them), the index blocks in this
> situation don't need to be cleaned out.
> Yesterday I made a test in SQL*Plus with "set autotrace on" and
> "set timing on", using one SQL statement that accesses an index,
> and compared the big table with the small one. I saw many more
> recursive calls and much more redo on the small table. How can
> that be possible? Here are the results:
>
> SMALL TABLE
>
> 1026 rows selected.
>
> real: 45650
>
> Execution Plan
> ----------------------------------------------------------
> 0      SELECT STATEMENT Optimizer=RULE
> 1    0   TABLE ACCESS (BY INDEX ROWID) OF 'PI_FACTURE_BKP1'
> 2    1     INDEX (RANGE SCAN) OF 'PI_FACTURE4_BKP1' (NON-UNIQUE)
>
>
> Statistics
> ----------------------------------------------------------
> 1011 recursive calls
> 0 db block gets
> 1389 consistent gets
> 1048 physical reads
> 60 redo size
> 19623 bytes sent via SQL*Net to client
> 4832 bytes received via SQL*Net from client
> 70 SQL*Net roundtrips to/from client
> 5 sorts (memory)
> 0 sorts (disk)
> 1026 rows processed
>
>
> BIG TABLE
>
> 1712 rows selected.
>
> real: 36910
>
> Execution Plan
> ----------------------------------------------------------
> 0      SELECT STATEMENT Optimizer=RULE
> 1    0   TABLE ACCESS (BY INDEX ROWID) OF 'PI_FACTURE'
> 2    1     INDEX (RANGE SCAN) OF 'PI_FACTURE4' (NON-UNIQUE)
>
>
>
> Statistics
> ----------------------------------------------------------
> 52 recursive calls
> 0 db block gets
> 1887 consistent gets
> 1438 physical reads
> 0 redo size
> 33596 bytes sent via SQL*Net to client
> 7838 bytes received via SQL*Net from client
> 116 SQL*Net roundtrips to/from client
> 2 sorts (memory)
> 0 sorts (disk)
> 1712 rows processed
>
> Thanks in advance,
> Carlos
>
>
>
>
> "Vladimir M. Zakharychev" <bob_at_dpsp-yes.com> wrote in message
> news:<aeqge1$rsk$1_at_babylon.agtel.net>...
> > This may or may not have to do with caching. The first time you
> > execute a query, the accessed blocks from both the index and the
> > table may have to be read from disk. Subsequent queries find them
> > in the cache, avoiding physical I/O. Also, when you created your
> > indexes, the index blocks may have needed to be cleaned out in
> > the same fashion (and indexes on big tables are also big). And
> > since you analyzed the table on a 20% sample, the majority of the
> > blocks were not touched and thus were not cleaned out by the
> > analyze. If your data is more or less evenly distributed,
> > estimation over a 20% sample may be adequate, but I would
> > recommend full compute statistics anyway - you spend more time
> > computing stats, but then you spend less time querying
> > (or maybe not :).
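> >
> > For example (just a sketch of the two options, Oracle 8 ANALYZE
> > syntax; small_table stands in for your table name):
> >
> >     -- 20% sample: fast, but most blocks are never read
> >     ANALYZE TABLE small_table ESTIMATE STATISTICS SAMPLE 20 PERCENT;
> >     -- full compute: slower, but visits (and cleans out) every block
> >     ANALYZE TABLE small_table COMPUTE STATISTICS;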
> >
> > --
> > Vladimir Zakharychev (bob@dpsp-yes.com) http://www.dpsp-yes.com
> > Dynamic PSP(tm) - the first true RAD toolkit for Oracle-based internet applications.
> > All opinions are mine and do not necessarily go in line with those of my employer.
> >
> >
> > "Carlos Alberto" <calberto2312_at_hotmail.com> wrote in message
> > news:72954535.0206190904.29f20231_at_posting.google.com...
> > > Hi,
> > >
> > > Just to post some more information about my problem: after the
> > > INSERT ... SELECT ... was committed, the "select count(1)" was
> > > executed, then the indexes were created, and then this new table
> > > was analyzed with a sample of 20 percent. The "delayed block
> > > cleanout" mechanism explains why the first "select count(1) from
> > > SMALL_TABLE" was slow. But why were some other SQL statements,
> > > which access some of the indexes of the table, also slow when
> > > they were first executed? By that time, the data block headers
> > > were already cleaned out.
> > >
> > > Thanks in advance,
> > > Carlos
> > >
> > > "Vladimir M. Zakharychev" <bob_at_dpsp-yes.com> wrote in message
> > news:<aepgsk$ifj$1_at_babylon.agtel.net>...
> > > > This is due to "delayed block cleanout". If your transaction
> > > > affected more than 10% of the buffer cache, you are guaranteed
> > > > to hit it. This is a trick Oracle uses to make commit as short
> > > > as possible. If the transaction was large enough, Oracle
> > > > simply does not clean out every block it affected upon commit.
> > > > Instead, this is done by the first query touching these
> > > > blocks. Your select count(*) touched every block of the
> > > > affected table, causing full block cleanout to happen, and
> > > > this involves generating a lot of redo and flushing blocks to
> > > > disk. If you had analyzed the table after inserting into it
> > > > and before running any queries against it, the analyze would
> > > > have performed block cleanout for you and at the same time
> > > > gathered the statistics the CBO needs to query your table most
> > > > effectively. So, as a rule of thumb (if there can be any
> > > > rules), analyze your tables right after you perform massive
> > > > updates or inserts into them. This both avoids the phenomenon
> > > > you witnessed and keeps statistics current.
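> > > >
> > > > As a sketch (table names from your post; the WHERE clause is
> > > > whatever condition you used for the load):
> > > >
> > > >     INSERT INTO small_table SELECT * FROM big_table WHERE ...;
> > > >     COMMIT;
> > > >     -- analyze right away: cleans out the blocks and gathers
> > > >     -- the statistics in one pass over the table
> > > >     ANALYZE TABLE small_table COMPUTE STATISTICS;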
> > > >
> > > > --
> > > > Vladimir Zakharychev (bob@dpsp-yes.com) http://www.dpsp-yes.com
> > > > Dynamic PSP(tm) - the first true RAD toolkit for Oracle-based internet applications.
> > > > All opinions are mine and do not necessarily go in line with those of my employer.
> > > >
> > > >
> > > > "Carlos Alberto" <calberto2312_at_hotmail.com> wrote in message
> > > > news:72954535.0206181248.c72c06e_at_posting.google.com...
> > > > > Hi all,
> > > > >
> > > > > I'm experiencing strange behaviour with Oracle 8.0.6.0.0: I
> > > > > have a table with 100 million records, and based on a
> > > > > condition I loaded 47 million of these records into another
> > > > > table with an INSERT ... SELECT ... command. The big table
> > > > > is 21GB, the new one is 12GB. Their structures are the same;
> > > > > the only difference is the tablespace where each one
> > > > > resides. The strange part is when I run the following
> > > > > script:
> > > > >
> > > > > set timing on;
> > > > >
> > > > > select count(1) from BIG_TABLE;
> > > > >
> > > > > --> 20 minutes
> > > > >
> > > > > select count(1) from SMALL_TABLE;
> > > > >
> > > > > --> 1 hour and 10 minutes!!!
> > > > >
> > > > > How can this happen? But when I run it again:
> > > > >
> > > > > select count(1) from SMALL_TABLE;
> > > > >
> > > > > --> 6 minutes!!!
> > > > >
> > > > > Now the result is normal. This behaviour also happens with
> > > > > some other SQL statements that access an index of the table.
> > > > > The hardware is a Sun E10K with EMC storage. According to
> > > > > EMC, the disks, controllers and cache are fine. Does anyone
> > > > > have a hint?
> > > > >
> > > > > Thanks in advance,
> > > > > Carlos