9582.69043 gigabytes of physical read total bytes and increasing! (11gR2 on Linux RedHat)
9582.69043 gigabytes of physical read total bytes and increasing! [message #500185] Thu, 17 March 2011 22:49
nabeelkhan
Messages: 73
Registered: May 2005
Location: Kuwait
Member

In EM, under Database Instance: PROD > Top Activity, I get the following:

physical read total bytes              62762844160   8082658164736   4183631872
cell physical IO interconnect bytes    62762844160   8082658164736   4183631872
physical read bytes                    62762844160   8082658164736   4183631872
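For readers without EM access, the same counters can be watched from SQL*Plus; this is a sketch, where `:sid` is assumed to be the session id of the updating session:

```sql
select sn.name, ss.value
  from v$sesstat ss
  join v$statname sn on sn.statistic# = ss.statistic#
 where ss.sid = :sid
   and sn.name in ('physical read total bytes',
                   'physical read bytes',
                   'cell physical IO interconnect bytes');
```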

And the session is running the following update procedure:


declare
  FM_BBBB MT.BBBB_CODE%TYPE;
  l_start NUMBER;
  cursor code_upd is
    select /*+ parallel(FM_KWT_POP_BBBB_MISMATCH, 10) */ DDD_CID, DDD_BBBB, CCCC_BBBB
      from MT_MISMATCH;
begin

  -- Time regular updates.
  l_start := DBMS_UTILITY.get_time;

  FOR rec IN code_upd LOOP
    update /*+ parallel(MT, 10) nologging */ MT
       set BBBB_code = rec.CCCC_BBBB
     where source = 0
       and cid_no = rec.DDD_CID
       and BBBB_code = rec.DDD_BBBB;
    commit;
  END LOOP;

  DBMS_OUTPUT.put_line('Bulk Updates : ' || (DBMS_UTILITY.get_time - l_start));

end;


There are 9.5 million records in MT, but only 3 million of them have source = 0, and there are 376K records in MT_MISMATCH. What I don't understand is why this takes so much time and reads so many bytes. Both tables were analyzed before running this procedure.

Can someone shed some light on this? Is there any better way of doing the same job?

[Updated on: Thu, 17 March 2011 22:56]


Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500186 is a reply to message #500185] Thu, 17 March 2011 22:57
nabeelkhan
Messages: 73
Registered: May 2005
Location: Kuwait
Member

10,754.8389 gigabytes during the 47th minute of running.
Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500187 is a reply to message #500186] Thu, 17 March 2011 23:09
BlackSwan
Messages: 22489
Registered: January 2009
Senior Member
Two flaws exist in the posted code:
1) COMMIT inside the LOOP
2) using PL/SQL (row by row) instead of SQL-only

It would be helpful if you followed Posting Guidelines - http://www.orafaq.com/forum/t/88153/0/

Set SQL_TRACE=TRUE, then process the resulting trace file with TKPROF and post the results.
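A minimal way to do that from the session that runs the procedure (the trace file identifier and name below are illustrative, not from the thread):

```sql
ALTER SESSION SET tracefile_identifier = 'mt_upd';
ALTER SESSION SET sql_trace = TRUE;
-- ... run the PL/SQL block here ...
ALTER SESSION SET sql_trace = FALSE;
-- The trace directory can be found with:
--   SELECT value FROM v$diag_info WHERE name = 'Diag Trace';
-- Then, at the OS prompt:
--   tkprof <instance>_ora_<pid>_mt_upd.trc mt_upd.txt sys=no sort=exeela,fchela
```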
Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500188 is a reply to message #500187] Thu, 17 March 2011 23:25
nabeelkhan
Messages: 73
Registered: May 2005
Location: Kuwait
Member

BlackSwan wrote on Fri, 18 March 2011 07:09

2) using PL/SQL (row by row) instead of SQL-only


Can you please elaborate more on this one?

I will post the result of TKPROF

Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500189 is a reply to message #500185] Thu, 17 March 2011 23:31
flyboy
Messages: 1769
Registered: November 2006
Senior Member
nabeelkhan wrote on Fri, 18 March 2011 04:49
Can someone shed some light on this?

Without knowing the table structures (relevant columns, constraints and indexes), it is just guessing, for example: if there is no index on the (SOURCE, CID_NO) columns of the MT table, then for each loop iteration the table has to be full scanned (reading 376k times 9.5M = over 3.5T rows).
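If that guess is right, an index along these lines (the name is illustrative, not from the thread) would turn each loop iteration into an index access instead of a full scan:

```sql
create index mt_src_cid_code_ix on MT (source, cid_no, bbbb_code);
```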
nabeelkhan wrote on Fri, 18 March 2011 04:49
Is there any better way of doing the same job?

Probably one UPDATE statement. How many rows are in MT with SOURCE = 0? If DDD_CID is unique in MT_MISMATCH (and it should be, otherwise the result of that block would be nondeterministic), it would be as simple as (not tested):
update MT
set BBBB_code = ( select CCCC_BBBB
                  from MT_MISMATCH
                  where ddd_cid = mt.cid_no
                    and ddd_bbbb = mt.bbbb_code )
where source = 0
  and exists ( select CCCC_BBBB
               from MT_MISMATCH
               where ddd_cid = mt.cid_no
                 and ddd_bbbb = mt.bbbb_code );
Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500190 is a reply to message #500189] Thu, 17 March 2011 23:39
nabeelkhan
Messages: 73
Registered: May 2005
Location: Kuwait
Member

flyboy wrote on Fri, 18 March 2011 07:31

Without knowing the table structures (relevant columns, constraints and indexes), it is just guessing, for example: if there is no index on the (SOURCE, CID_NO) columns of the MT table, then for each loop iteration the table has to be full scanned (reading 376k times 9.5M = over 3.5T rows).


There is an index on both columns
flyboy wrote on Fri, 18 March 2011 07:31

Probably one UPDATE statement. How many rows are in MT with SOURCE = 0?


3.1 Million

flyboy wrote on Fri, 18 March 2011 07:31

If DDD_CID is unique in MT_MISMATCH (and it should be, otherwise the result of that block would be nondeterministic), it would be as simple as (not tested):

update MT
set BBBB_code = ( select CCCC_BBBB
                  from MT_MISMATCH
                  where ddd_cid = mt.cid_no
                    and ddd_bbbb = mt.bbbb_code )
where source = 0
  and exists ( select CCCC_BBBB
               from MT_MISMATCH
               where ddd_cid = mt.cid_no
                 and ddd_bbbb = mt.bbbb_code );


Let me try your example and see if that works. I killed the session; at the time of killing, it had read 27 TB.
Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500214 is a reply to message #500190] Fri, 18 March 2011 03:19
Roachcoach
Messages: 1178
Registered: May 2010
Location: UK
Senior Member
It's early, but I think I'm correct in saying that the way that is running, it is accessing table MT 376k times. Now, depending on the indexing (or rather what, if anything, Oracle is doing with the indexes), it is conceivable that the code is doing 376k full table scans of MT.

Is the MT table wide, i.e. does it have lots of columns? I see you have given a row count, but is it a big table in terms of MB/GB? If it is a fat table it may not fit into the buffer cache, and accessing it 376k times might account for the physical reads you're seeing.

Not really sure why it's forced into parallel to that degree, but that's a sideline. My main point is that in the past I have seen parallel processing at higher degrees stop using indexes in favour of full table scans. This was on older versions with specific queries, but it may be useful regardless.
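One way to answer the size question, assuming the tables live in the current schema:

```sql
select segment_name,
       round(bytes / 1024 / 1024) as size_mb
  from user_segments
 where segment_name in ('MT', 'MT_MISMATCH');
```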
Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500299 is a reply to message #500214] Sat, 19 March 2011 00:47
nabeelkhan
Messages: 73
Registered: May 2005
Location: Kuwait
Member

First of all, thank you for your input, as it gave me the idea to rewrite the SQL statement. By the way, the statement you gave me was taking about 3 hours and 45 minutes to update 100K records. So I added a hint:
update /*+ INDEX_COMBINE(MT) */ MT
   set BBBB_code = (select CCCC_BBBB
                      from MT_MISMATCH
                     where ddd_cid = MT.cid_no
                       and ddd_bbbb = MT.AREA_CODE)
 where source = 0
   and EXISTS (SELECT 'X'
                 from MT_MISMATCH
                WHERE (ddd_cid, ddd_bbbb) = ((MT.cid_no, MT.BBBB_code)))
   and rownum < 100001;


This hint reduced the time by 45 minutes, but it was still taking quite long. So I decided to find something better, and after a series of trial and error I got:
update MT
   set BBBB_code = ( select CCCC_BBBB
                       from MT_MISMATCH
                      where ddd_cid = mt.cid_no
                        and ddd_bbbb = mt.bbbb_code )
 where source = 0
   and (MT.cid_no, MT.BBBB_code) IN (SELECT ddd_cid + 0,
                                            ddd_bbbb || ''
                                       from MT_MISMATCH)
   and rownum < 100001;


This code finally shaved 65% off the total time.
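Note that with `rownum < 100001` each execution updates at most 100,000 rows, so the statement has to be re-run until no matching rows remain. A driving loop along these lines would automate that (a sketch, not from the thread; it assumes every updated row drops out of the mismatch set, otherwise the loop would not terminate):

```sql
begin
  loop
    update MT
       set BBBB_code = ( select CCCC_BBBB
                           from MT_MISMATCH
                          where ddd_cid = mt.cid_no
                            and ddd_bbbb = mt.bbbb_code )
     where source = 0
       and (MT.cid_no, MT.BBBB_code) in (select ddd_cid + 0, ddd_bbbb || ''
                                           from MT_MISMATCH)
       and rownum < 100001;
    exit when SQL%ROWCOUNT = 0;   -- nothing left to update
    commit;                        -- commit each 100K-row batch
  end loop;
  commit;
end;
/
```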

As of now I have manual updates running on the same table, so I am thinking of moving it to a 4K block size tablespace... what do you think about it?
Re: 9582.69043 gigabytes of physical read total bytes and increasing! [message #500312 is a reply to message #500299] Sat, 19 March 2011 03:12
John Watson
Messages: 4374
Registered: January 2010
Location: Global Village
Senior Member
Quote:
I am thinking of moving it to a 4K block size tablespace

I do not believe there would be any point in this. I know all the theory about different block sizes - but I have never seen any evidence to support it. This is a quote from the current release of the Oracle University performance tuning course:
"In recent years all the TPC performance tests have used an 8 KB block size. Larger and smaller block sizes have not given significant performance benefits."

I would be interested to know if anyone does have real evidence. I am increasingly seeing 16K used, particularly with RAC. I hear suggestions such as "the interconnect protocols will be used more efficiently with larger blocks". But before recommending such a huge change as moving to a different blocksize, I need some proof.

And in the meantime, I'll follow the old rule: "no-one ever got fired for choosing 8K blocks".