query with high cpu usage [message #380878] Tue, 13 January 2009 14:21
varunvir
Messages: 330
Registered: November 2007
Senior Member
Hi Experts,
Is there any way to tune the following query, which is using a lot of CPU?
select description,time_stamp,user_id from bhi_tracking where description like 'Multilateral:%'
The explain plan for this query is:
-----------------------------------------------------------------
| Id  | Operation         | Name         | Rows  | Bytes | Cost  |
-----------------------------------------------------------------
|   0 | SELECT STATEMENT  |              |   178K|  6609K|   129K|
|   1 |  TABLE ACCESS FULL| BHI_TRACKING |   178K|  6609K|   129K|
-----------------------------------------------------------------

BHI_TRACKING is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. To enhance performance, can I drop the table after taking an export and then import it back, or can I truncate the table and then re-insert the rows?
Regards,
Varun Punj

Re: query with high cpu usage [message #380879 is a reply to message #380878] Tue, 13 January 2009 14:35
BlackSwan
Messages: 22911
Registered: January 2009
Senior Member
http://www.orafaq.com/forum/t/84315/74940/
Which have you tried, and what were the results?
Re: query with high cpu usage [message #380907 is a reply to message #380878] Wed, 14 January 2009 00:24
Michel Cadot
Messages: 59427
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator
You can first gather statistics and (maybe) create an index.
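For instance, a minimal sketch of both suggestions, using the table from the original post (the index name is assumed; verify the benefit on your own system):

```sql
-- Refresh optimizer statistics for the table and its indexes
exec dbms_stats.gather_table_stats(ownname => user, tabname => 'BHI_TRACKING', cascade => true)

-- A B-tree index on DESCRIPTION can serve LIKE 'Multilateral:%',
-- because the pattern has no leading wildcard
create index bhi_tracking_desc_ix on bhi_tracking (description);
```

The optimizer will only choose the index if the predicate is selective enough; otherwise the full scan it is already doing may be the right plan.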

Regards
Michel

[Updated on: Wed, 14 January 2009 00:24]


Re: query with high cpu usage [message #380984 is a reply to message #380907] Wed, 14 January 2009 12:31
varunvir
Messages: 330
Registered: November 2007
Senior Member
Hi Michel,
Most of the issues are solved by gathering statistics on the tables.
I am wondering: since Oracle automatically gathers statistics with the help of dbms_stats, why do we need to gather stats ourselves in some special cases?
Can you please tell me how to find from the dictionary tables whether statistics really need to be gathered on a particular table or index?
Regards,
Varun Punj
Re: query with high cpu usage [message #380988 is a reply to message #380984] Wed, 14 January 2009 12:52
Michel Cadot
Messages: 59427
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator
The following list objects you must gather statistics:
declare
  l_objList dbms_stats.objectTab;
begin
  dbms_stats.gather_schema_stats
    ( ownname        => user,
      options        => 'LIST AUTO', 
      objlist        => l_objList );
  for i in 1..l_objList.count loop
    dbms_output.put_line (l_objList(i).objName||' - '||l_objList(i).objType);
  end loop;
end;
/
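As a complementary check from the dictionary (a sketch; STALE_STATS is populated only when table monitoring is active, which is the default from 10g onward):

```sql
-- Tables whose statistics are missing or flagged stale
select table_name, last_analyzed, stale_stats
from   user_tab_statistics
where  stale_stats = 'YES'
   or  last_analyzed is null;
```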

Regards
Michel

[Updated on: Wed, 14 January 2009 12:54]


Re: query with high cpu usage [message #550058 is a reply to message #380988] Thu, 05 April 2012 12:01
Jack14
Messages: 497
Registered: December 2011
Location: INDIA
Senior Member
Hi Michel,

SQL>
SQL> declare
  2    l_objList dbms_stats.objectTab;
  3  begin
  4    dbms_stats.gather_schema_stats
  5      ( ownname        => 'VISHWA',
  6        options        => 'LIST AUTO',
  7        objlist        => l_objList );
  8    for i in 1..l_objList.count loop
  9      dbms_output.put_line (l_objList(i).objName||' - '||l_objList(i).objType);
 10    end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL>


1. I don't see any objects listed by the above procedure.

2.Quote:
The following list objects you must gather statistics:


You mean to say we must gather objects explicitly which are listed from the above procedure ?

Can you please complete your sentence?

Regards,
Re: query with high cpu usage [message #550064 is a reply to message #550058] Thu, 05 April 2012 12:23
Michel Cadot
Messages: 59427
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator
Quote:
You mean to say we must gather objects explicitly which are listed from the above procedure ?


Yes, or let the auto job do it; this is how it determines which objects to analyze.

By the way, be sure you activate serveroutput when you run this block.

You can check the block using:
create table t (val integer);
insert into t select level from dual connect by level <= 1000;
commit;

If in the result you have not this T table then something is wrong somewhere.

Regards
Michel
Re: query with high cpu usage [message #550146 is a reply to message #550064] Fri, 06 April 2012 05:12
Jack14
Messages: 497
Registered: December 2011
Location: INDIA
Senior Member
Michel,

If in the result you have not this T table then something is wrong somewhere.


Can you rephrase the above sentence?

Quote:
SQL> set serveroutput ON
SQL> create table t (val integer);

Table created.

SQL> insert into t select level from dual connect by level <= 1000;

1000 rows created.

SQL> commit;

Commit complete.


SQL> declare
  2    l_objList dbms_stats.objectTab;
  3  begin
  4    dbms_stats.gather_schema_stats
  5      ( ownname        => 'VISHWA',
  6        options        => 'LIST AUTO',
  7        objlist        => l_objList );
  8    for i in 1..l_objList.count loop
  9      dbms_output.put_line (l_objList(i).objName||' - '||l_objList(i).objType);
 10    end loop;
 11  end;
 12  /
T - TABLE

PL/SQL procedure successfully completed.


Michel,

I followed your advice. I created the table and ran the procedure but did not see any objects listed from the procedure.

Please advise.

Regards,
Re: query with high cpu usage [message #550156 is a reply to message #550146] Fri, 06 April 2012 06:26
Michel Cadot
Messages: 59427
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator
Quote:
but did not see any objects listed from the procedure.


So what is:
Quote:
T - TABLE


And this is what I meant by my sentence, in which, I admit, four words were missing for it to be clearly understandable:
Quote:
If in the result [of the PL/SQL block] you have not this T table then something is wrong somewhere


Regards
Michel
Re: query with high cpu usage [message #550207 is a reply to message #380878] Fri, 06 April 2012 15:46
LNossov
Messages: 289
Registered: July 2011
Location: Germany
Senior Member
Why don't you want to partition your table and to drop partition instead of delete rows?
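A sketch of that idea (the column types and the monthly scheme are assumptions, not from the original post):

```sql
-- Range-partition by month so that purging old data is a metadata operation
create table bhi_tracking (
  description  varchar2(200),
  time_stamp   date,
  user_id      varchar2(30)
)
partition by range (time_stamp) (
  partition p_2011_04 values less than (date '2011-05-01'),
  partition p_2011_05 values less than (date '2011-06-01')
  -- ... one partition per month
);

-- Dropping a month avoids the DELETE (and its undo/redo) entirely
alter table bhi_tracking drop partition p_2011_04;
```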
Re: query with high cpu usage [message #551040 is a reply to message #550207] Sun, 15 April 2012 00:26
Kevin Meade
Messages: 1943
Registered: December 1999
Location: Connecticut USA
Senior Member
select * from 100_million_row_table;


How do we optimize the above? We don't. It is by its nature already optimized as far as the query plan goes. There comes a point at which you have an optimal query plan and you are doing the least amount of work possible. Even so, there are things we can do if we really want to:

1) use compression to reduce the size of the table and thus the I/O.

2) run PARALLEL QUERY to use more resources but reduce the elapsed time (OPTIONALLY partition the table).
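Sketches of both options (degree 4 is an arbitrary illustration; check the syntax against your version's documentation):

```sql
-- 1) Compress the table to shrink it, reducing the I/O of every full scan
alter table bhi_tracking move compress;
-- note: a MOVE leaves indexes UNUSABLE, so rebuild them afterwards

-- 2) Parallel query: more resources, less elapsed time
select /*+ parallel(t 4) */ description, time_stamp, user_id
from   bhi_tracking t
where  description like 'Multilateral:%';
```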

Faced with the next variation:

select c1,c2,c3 from 100_million_row_table;


1) You might create an index on c1,c2,c3 in order to do a FAST FULL INDEX SCAN of the index instead of going to the table. If the index is much smaller than the table (1/3 the size or less), then you may see reduced I/O for getting the data and thus improved query times.
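For example (hypothetical names, since `100_million_row_table` is only illustrative; at least one indexed column must be NOT NULL for the index to stand in for the table):

```sql
-- A covering index: every selected column is in the index,
-- so the optimizer can use INDEX FAST FULL SCAN and skip the table
create index big_table_c1c2c3_ix on big_table (c1, c2, c3);

select c1, c2, c3 from big_table;
```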

Then we come to:

select a.*,b.*
from
       100_million_row_table a
     , another_100_million_row_table b
where a.pk = b.fk
/


In addition to all the ideas above, how do we optimize this?

1) make sure your HASH JOIN has enough memory to be either OPTIMAL or ONE-PASS.

2) partition the table on the join keys so you can do PARTITION WISE HASH JOINS.
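Sketches of both (the memory target and partition counts are illustrative only):

```sql
-- 1) With automatic workarea sizing, a larger PGA target raises the odds
--    of an OPTIMAL (in-memory) or ONE-PASS hash join
alter session set workarea_size_policy = auto;
alter system set pga_aggregate_target = 4G;

-- 2) Hash-partition both tables on the join keys so the optimizer can
--    use a full partition-wise join (hypothetical tables)
create table big_a (pk number, payload varchar2(100))
  partition by hash (pk) partitions 16;
create table big_b (fk number, payload varchar2(100))
  partition by hash (fk) partitions 16;
```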

Moving forward some more, how do we deal with this:

select description,time_stamp,user_id from bhi_tracking where description like 'Multilateral:%'


1) If LIKE 'Multilateral:%' yields less than 2% of the data, then an index might be in order for fetching the rows from the table. Otherwise, since we would be fetching more than 2% of the rows in the table, seriously consider using a FULL TABLE SCAN.
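A quick way to measure which side of that threshold the predicate falls on (the 2% figure is Kevin's rule of thumb, not an optimizer constant):

```sql
select 100 * count(case when description like 'Multilateral:%' then 1 end)
           / count(*) as pct_matching
from   bhi_tracking;
-- well under ~2: an index on DESCRIPTION is worth testing
-- otherwise: the FULL TABLE SCAN is probably the right plan
```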

And how would we deal with this:

select a.*,b.*
from
       100_million_row_table a
     , another_100_million_row_table b
where a.pk = b.fk
and a.description like 'Multilateral:%'
and b.description like '<some other string>%'
/

Again, if a LIKE expression will yield LESS than 2% of the data from a table, then for that table an index might be a good way to fetch the data. But once data is fetched from the tables, we need to join.

If one of the tables yields LESS than 2% of the total rows of the other table, then a NESTED LOOP JOIN between the two tables supported by an INDEX LOOKUP might be in order.

Otherwise a HASH JOIN between rows fetched from the two tables is the better choice regardless of how the data was actually fetched from the tables (FULL TABLE SCAN or INDEX ACCESS).

OK, so those are the basics, I guess, for today's Oracle. Remember, I said BASICS. There are always special situations. But this basic understanding will get you through 90% of your SQL tuning.

Good luck, Kevin

[Updated on: Sun, 15 April 2012 00:30]


Re: query with high cpu usage [message #551057 is a reply to message #551040] Sun, 15 April 2012 04:42
cookiemonster
Messages: 11076
Registered: September 2008
Location: Rainy Manchester
Senior Member
@LNossov & Kevin

This thread is originally from 2009, and was resurrected recently by Jack14 with a question about Michel's code snippet.
I doubt the OP is paying attention.
Re: query with high cpu usage [message #551061 is a reply to message #551057] Sun, 15 April 2012 05:15
John Watson
Messages: 4691
Registered: January 2010
Location: Global Village
Senior Member
This may be an old topic, but some skilled people have been contributing. My contribution would be to say "trust the optimizer", but I'd be interested to hear whether other people agree. Take a look at this, Kevin's example of joining a pair of hundred-million-row tables:
orcl> create table a(pk number, description varchar2(20), constraint apk primary key (pk));

Table created.

orcl> create table b(pk number, description varchar2(20), constraint bpk primary key (pk));

Table created.

orcl> exec dbms_stats.set_table_stats('scott','a',numrows=>100000000)

PL/SQL procedure successfully completed.

orcl> exec dbms_stats.set_table_stats('scott','b',numrows=>100000000)

PL/SQL procedure successfully completed.

orcl> set autotr traceonly exp
orcl> select * from a join b using (pk);

Execution Plan
----------------------------------------------------------
Plan hash value: 652036164

-------------------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |   100M|  4768M|       |   351K  (1)| 00:01:09 |
|*  1 |  HASH JOIN                 |      |   100M|  4768M|  3528M|   351K  (1)| 00:01:09 |
|   2 |   TABLE ACCESS STORAGE FULL| A    |   100M|  2384M|       |   552  (95)| 00:00:01 |
|   3 |   TABLE ACCESS STORAGE FULL| B    |   100M|  2384M|       |   552  (95)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A"."PK"="B"."PK")

orcl>
A hash join, as expected. But increase the tables to a hundred billion, and what happens?
orcl> exec dbms_stats.set_table_stats('scott','a',numrows=>100000000000)

PL/SQL procedure successfully completed.

orcl> exec dbms_stats.set_table_stats('scott','b',numrows=>100000000000)

PL/SQL procedure successfully completed.

orcl> select * from a join b using (pk);

Execution Plan
----------------------------------------------------------
Plan hash value: 1805968304

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |      |   100G|  4656G|       |  1227M  (1)| 66:35:51 |
|   1 |  MERGE JOIN                  |      |   100G|  4656G|       |  1227M  (1)| 66:35:51 |
|   2 |   TABLE ACCESS BY INDEX ROWID| A    |   100G|  2328G|       |   826   (0)| 00:00:01 |
|   3 |    INDEX FULL SCAN           | APK  |   100G|       |       |    26   (0)| 00:00:01 |
|*  4 |   SORT JOIN                  |      |   100G|  2328G|  6751G|  1227M  (1)| 66:35:51 |
|   5 |    TABLE ACCESS STORAGE FULL | B    |   100G|  2328G|       |   523K(100)| 00:01:43 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("A"."PK"="B"."PK")
       filter("A"."PK"="B"."PK")

orcl>
and hinting a hash join looks pretty disastrous:
orcl> select /*+ use_hash(a b) */ * from a join b using (pk);

Execution Plan
----------------------------------------------------------
Plan hash value: 652036164

-------------------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |   100G|  4656G|       |    54G  (1)|999:59:59 |
|*  1 |  HASH JOIN                 |      |   100G|  4656G|  3445G|    54G  (1)|999:59:59 |
|   2 |   TABLE ACCESS STORAGE FULL| A    |   100G|  2328G|       |   523K(100)| 00:01:43 |
|   3 |   TABLE ACCESS STORAGE FULL| B    |   100G|  2328G|       |   523K(100)| 00:01:43 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A"."PK"="B"."PK")

orcl>
But the clever bit comes when we let the optimizer loose with automatic parallelism:
orcl> alter session set parallel_degree_policy=auto;

Session altered.

orcl> select * from a join b using (pk);

Execution Plan
----------------------------------------------------------
Plan hash value: 2461430291

------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |          |   100G|  4656G|       |   340M  (1)| 18:29:43 |        |      |            |
|   1 |  PX COORDINATOR                    |          |       |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)              | :TQ10001 |   100G|  4656G|       |   340M  (1)| 18:29:43 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    MERGE JOIN                      |          |   100G|  4656G|       |   340M  (1)| 18:29:43 |  Q1,01 | PCWP |            |
|   4 |     SORT JOIN                      |          |   100G|  2328G|       |   826   (0)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      BUFFER SORT                   |          |       |       |       |            |          |  Q1,01 | PCWC |            |
|   6 |       PX RECEIVE                   |          |   100G|  2328G|       |   826   (0)| 00:00:01 |  Q1,01 | PCWP |            |
|   7 |        PX SEND BROADCAST           | :TQ10000 |   100G|  2328G|       |   826   (0)| 00:00:01 |        | S->P | BROADCAST  |
|   8 |         TABLE ACCESS BY INDEX ROWID| A        |   100G|  2328G|       |   826   (0)| 00:00:01 |        |      |            |
|   9 |          INDEX FULL SCAN           | APK      |   100G|       |       |    26   (0)| 00:00:01 |        |      |            |
|* 10 |     SORT JOIN                      |          |   100G|  2328G|  6751G|   340M  (1)| 18:29:43 |  Q1,01 | PCWP |            |
|  11 |      PX BLOCK ITERATOR             |          |   100G|  2328G|       |   145K(100)| 00:00:29 |  Q1,01 | PCWC |            |
|  12 |       TABLE ACCESS STORAGE FULL    | B        |   100G|  2328G|       |   145K(100)| 00:00:29 |  Q1,01 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  10 - access("A"."PK"="B"."PK")
       filter("A"."PK"="B"."PK")

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 4 because of degree limit

orcl>
See how the time prediction comes down? There is a lot you can do with parallel_degree_policy; I seriously believe that it is a huge leap forward. And I love the way that I, as DBA, can enable it declaratively, without having to decorate tables or tune SQL.
Re: query with high cpu usage [message #551089 is a reply to message #551061] Sun, 15 April 2012 10:25
Kevin Meade
Messages: 1943
Registered: December 1999
Location: Connecticut USA
Senior Member
Very cool stuff to add, thanks. The extreme scenarios are always interesting examples.

In my defense I would say that though they do exist, I don't have any tables in any systems I work with that have 100 billion rows. Some day soon.

Your point is well presented though; "trust the optimizer".

Continuing in that vein, I would add that to be able to trust the optimizer, we need to give it the information it needs to do its job well. To me this means paying attention to the basics of relational databases.

1) use proper datatypes
2) use proper nullability
3) start with a normalized relational model
4) define PK/UK/FK constraints
5) collect stats with some intelligence
6) turn on costing

If you just do the basics that you are supposed to do when you design a database, 99% of all performance problems you might face will never happen in the first place. I can easily trace half the serious problems I deal with every day back to some violation of the above which kept the optimizer from doing its job.

Thanks John, I have always loved your follow ups.

As for this being an old post, sure, but someone is reading it so I don't have an issue adding to it. It is just more information on the web for people to find with a search. I have never understood why people object to adding new info to old posts.

At the expense of making my contributions sound weaker, everything can change and your mileage may vary. But the numbers I quote (like < 2% of the data (maybe next year it will be 1%)) and the strategies I use are born from my day-to-day experiences. I do in fact tune quite well using these strategies. I also believe in the three concepts:

1) simplify simplify simplify
2) 99% rule
3) bring it to the masses and it has great value

I believe Oracle is a case study in simple methods that do what we need 99% of the time and can be taught to everybody. In fact, I like my summary in this thread so much that I think I will add it in some form to the training class for my company.

Kevin
Re: query with high cpu usage [message #551196 is a reply to message #551089] Mon, 16 April 2012 03:48
cookiemonster
Messages: 11076
Registered: September 2008
Location: Rainy Manchester
Senior Member
Kevin Meade wrote on Sun, 15 April 2012 16:25
As for this being an old post, sure, but someone is reading it so I don't have an issue adding to it. It is just more information on the web for people to find with a search. I have never understood why people object to adding new info to old posts.


I wasn't objecting, merely pointing out a fact that LNossov almost certainly missed, since he was asking a question of the OP.
I have no issue with what you added; I am just warning you not to expect any feedback from the OP.