
Re: RE: Time to read 6000 (block size 2k) blocks

From: <J.Velikanovs_at_alise.lv>
Date: Fri, 6 Aug 2004 18:02:02 +0300
Message-ID: <OFDD35C9BD.143875F6-ONC2256EE8.0052605F-C2256EE8.005357E4@alise.lv>


jaysingh1_at_optonline.net,
Can you try:

SELECT --+ ORDERED USE_NL(P, E)
                 DISTINCT P.PROFILEDUSERID PROFILEDUSERID,
                 SEARCH_LAST_NAME,
                 SEARCH_FIRST_NAME
FROM             EXTENDEDATTRIBUTES E,
                 PROFILEDUSER P
WHERE            P.PROFILEDUSERID = E.PROFILEDUSERID
AND              P.SEARCH_COMPANY_NAME LIKE 'ACME%' ESCAPE '/'
AND              E.CUSTOMERID = 'ABCDEFGH'
AND              HSBC_USER_CATEGORY IN ('VAL1','VAL2')
AND              ROWNUM < 150
ORDER BY         SEARCH_LAST_NAME, SEARCH_FIRST_NAME

Jurijs
+371 9268222 (+2 GMT)

Thank you for teaching me.
http://otn.oracle.com/ocm/jvelikanovs.html
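
A quick way to confirm that the hint really changes the join order before
re-running the query (a sketch only: it assumes a PLAN_TABLE created with
?/rdbms/admin/utlxplan.sql, and the statement id 'ord_nl' is arbitrary):

EXPLAIN PLAN SET STATEMENT_ID = 'ord_nl' FOR
SELECT --+ ORDERED USE_NL(P, E)
                 DISTINCT P.PROFILEDUSERID PROFILEDUSERID,
                 SEARCH_LAST_NAME,
                 SEARCH_FIRST_NAME
FROM             EXTENDEDATTRIBUTES E,
                 PROFILEDUSER P
WHERE            P.PROFILEDUSERID = E.PROFILEDUSERID
AND              P.SEARCH_COMPANY_NAME LIKE 'ACME%' ESCAPE '/'
AND              E.CUSTOMERID = 'ABCDEFGH'
AND              HSBC_USER_CATEGORY IN ('VAL1','VAL2')
AND              ROWNUM < 150
ORDER BY         SEARCH_LAST_NAME, SEARCH_FIRST_NAME;

-- then format the plan (or query PLAN_TABLE directly):
-- @?/rdbms/admin/utlxpls.sql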

jaysingh1_at_optonline.net
Sent by: oracle-l-bounce_at_freelists.org
06.08.2004 17:52
Please respond to oracle-l  

        To:     oracle-l_at_freelists.org
        cc: 
        Subject:        Re: RE: Time to read 6000 (block size 2k) blocks


Cary,

Thanks for your response. I have enclosed all the details here. I generated an extended trace at level 12.

SELECT           DISTINCT P.PROFILEDUSERID PROFILEDUSERID,
                 SEARCH_LAST_NAME,
                 SEARCH_FIRST_NAME
FROM             PROFILEDUSER P ,
                 EXTENDEDATTRIBUTES E
WHERE            P.PROFILEDUSERID = E.PROFILEDUSERID
AND              P.SEARCH_COMPANY_NAME LIKE 'ACME%' ESCAPE '/'
AND              E.CUSTOMERID = 'ABCDEFGH'
AND              HSBC_USER_CATEGORY IN ('VAL1','VAL2')
AND              ROWNUM < 150
ORDER BY         SEARCH_LAST_NAME, SEARCH_FIRST_NAME

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.01       0.01          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2     11.39      41.98       6126      18805          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4     11.40      41.99       6126      18805          0          1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 180

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT UNIQUE
      1   COUNT STOPKEY
      1    NESTED LOOPS
   4766     TABLE ACCESS BY INDEX ROWID PROFILEDUSER       (it has 450,000 rows)
   4767      INDEX RANGE SCAN (PROFILEDUSER_IX03)
      1     TABLE ACCESS BY INDEX ROWID EXTENDEDATTRIBUTES (it has 15,000 rows)
   9530      INDEX UNIQUE SCAN (ATTRIBUTES_PK)
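
(To double-check which filter is the more selective one, and therefore which
table should drive the nested loop as in the hinted rewrite suggested above,
a rough count of each filtered set may help. This is only a sketch; it assumes
HSBC_USER_CATEGORY is a column of EXTENDEDATTRIBUTES -- move that predicate if
it actually lives on PROFILEDUSER.)

SELECT COUNT(*)
FROM   PROFILEDUSER P
WHERE  P.SEARCH_COMPANY_NAME LIKE 'ACME%' ESCAPE '/';
-- the row source plan above suggests roughly 4,766 rows pass this filter

SELECT COUNT(*)
FROM   EXTENDEDATTRIBUTES E
WHERE  E.CUSTOMERID = 'ABCDEFGH'
AND    E.HSBC_USER_CATEGORY IN ('VAL1','VAL2');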
 

SQL> select column_name from dba_ind_columns where index_name='PROFILEDUSER_IX03';

COLUMN_NAME
------------------------------
SEARCH_COMPANY_NAME

SQL> select column_name from dba_ind_columns where index_name='ATTRIBUTES_PK';

COLUMN_NAME
------------------------------
PROFILEDUSERID
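
(A side note that lines up with Cary's "missing index" remark quoted below:
the only index shown here on EXTENDEDATTRIBUTES, ATTRIBUTES_PK, covers only
PROFILEDUSERID, so at least from this output there is no visible access path
on EXTENDEDATTRIBUTES.CUSTOMERID. If that 15,000-row table is to drive the
join, an index along these lines might help -- the name and column list are
purely illustrative, not a tested recommendation:)

CREATE INDEX EXTENDEDATTRIBUTES_IX01
    ON EXTENDEDATTRIBUTES (CUSTOMERID, PROFILEDUSERID);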

> Yes, this is actually 0.003568s/read, which is pretty good.
>
> A better question, though, is, "Does the application really need to make
> 18,805 visits to the database buffer cache to return just one row?"
>
> Unless your query uses some kind of aggregation function to return the
> single row (count, sum, etc.), then you should be able to make this SQL do
> its job with 10-20 LIOs instead of 18,805. If you can do that, you should
> be able to reduce response time from 41.99s to about 0.04s.
>
> I can't see your SQL here, but because there were 2 fetch calls, I'll bet
> that you're not aggregating the result, and that you should be able to get
> to the 0.04s response time target. It might be as simple as a missing
> index, or SQL that debilitates the use of an index.
>
>
> Cary Millsap
> Hotsos Enterprises, Ltd.
> http://www.hotsos.com
> * Nullius in verba *
>
> Upcoming events:
> - Performance Diagnosis 101: 8/10 Boston, 9/14 San Francisco, 10/5 Charlotte
> - SQL Optimization 101: 7/26 Washington DC, 8/16 Minneapolis, 9/20 Hartford
> - Hotsos Symposium 2005: March 6-10 Dallas
> - Visit www.hotsos.com for schedule details...
>
>
> -----Original Message-----
> From: oracle-l-bounce_at_freelists.org [oracle-l-bounce_at_freelists.org]
> On Behalf Of Khedr, Waleed
> Sent: Friday, August 06, 2004 9:19 AM
> To: oracle-l_at_freelists.org
> Subject: RE: Time to read 6000 (block size 2k) blocks
>
> Five milliseconds is not bad for a single-block sequential read.
>
> Waleed
>
> -----Original Message-----
> From: jaysingh1_at_optonline.net [jaysingh1_at_optonline.net]
> Sent: Friday, August 06, 2004 9:49 AM
> To: oracle-l_at_freelists.org
> Subject: Time to read 6000 (block size 2k) blocks
>
>
> Hi All,
>
> The question may be a naive one.
> We have a 14-CPU Sun box running 8i, 2-node OPS. It is not under heavy load.
>
> In our case it is taking 21.86 sec to read 6126 blocks (from disk):
>
> db file sequential read                     6126        0.29       21.86
>
> Approximately how long should it take to read 6000 blocks?
>
>
> call     count       cpu    elapsed       disk      query    current       rows
> ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> Parse        1      0.01       0.01          0          0          0          0
> Execute      1      0.00       0.00          0          0          0          0
> Fetch        2     11.39      41.98       6126      18805          0          1
> ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> total        4     11.40      41.99       6126      18805          0          1
>
>
>
> db file sequential read                     6126        0.29       21.86
>
> Thanks
> Sami
>
>




----------------------------------------------------------------
Please see the official ORACLE-L FAQ: http://www.orafaq.com
----------------------------------------------------------------
To unsubscribe send email to:  oracle-l-request_at_freelists.org
put 'unsubscribe' in the subject line.
--
Archives are at http://www.freelists.org/archives/oracle-l/
FAQ is at http://www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------
Received on Fri Aug 06 2004 - 10:06:11 CDT
