Optimal Chunk size for update of 50M record table (Oracle 11g RAC, Linux OEL 6)
Re: Optimal Chunk size for update of 50M record table [message #672958 is a reply to message #672954] Thu, 01 November 2018 08:21
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
So the DELETE session is doing disk reads & writes, which shows movement.
Never confuse movement with progress.
Going around in circles is movement; but most folks do not consider it to be progress.
You could enable SQL_TRACE to observe which object(s) are being read & written (table, index, other?)
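For example, something along these lines (the username, SID and SERIAL# below are placeholders you would look up first, not values from this thread):

-- find the session running the DELETE
select sid, serial#, sql_id, event from v$session where username = 'MY_APP_USER';

-- enable tracing with wait events for that session
exec dbms_monitor.session_trace_enable(session_id => 1234, serial_num => 5678, waits => TRUE, binds => FALSE);

-- ... let the DELETE run for a while, then ...
exec dbms_monitor.session_trace_disable(session_id => 1234, serial_num => 5678);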
How many INDEX on this table?
Any TRIGGER on this table? If so, what exactly?

Does DELETE behavior change when no PARALLEL is involved with removing rows?
Re: Optimal Chunk size for update of 50M record table [message #672976 is a reply to message #672958] Thu, 01 November 2018 09:01
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
Is this table part of Primary Key/Foreign Key relationship?
Is Cascade DELETE involved here?
Re: Optimal Chunk size for update of 50M record table [message #673055 is a reply to message #672958] Sat, 03 November 2018 19:18
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member

BlackSwan wrote on Thu, 01 November 2018 15:21
So the DELETE session is doing disk reads & writes, which shows movement.
Never confuse movement with progress.
Going around in circles is movement; but most folks do not consider it to be progress.
You could enable SQL_TRACE to observe which object(s) are being read & written (table, index, other?)


It finished and it took 48 hours.

I bet there is a place where it gets stuck; there must be a weak link...

I will roll back and retry with SQL_TRACE enabled, as suggested.
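Roughly what I plan to do for the traced retry (the trace file name below is only an illustration, not a real file from my system):

-- in the session that will run the DELETE, tag the trace file so it is easy to find
alter session set tracefile_identifier = 'del_mytab';
alter session set events '10046 trace name context forever, level 8';

-- after (or during) the run, locate the trace file
select value from v$diag_info where name = 'Default Trace File';

and then summarize it with tkprof, e.g.:

tkprof MYDB_ora_12345_del_mytab.trc del_mytab_tkprof.txt sort=exeela waits=yes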


Quote:
How many INDEX on this table?

As I already posted about the indexes:


SQL>
SQL> select segment_type, count(*), sum(bytes / 1024 / 1024 / 1024) gb
  2    from user_segments
  3   where (segment_name = 'MYTAB' and segment_type = 'TABLE')
  4      or (segment_name in
  5         (select index_name from user_indexes where table_name = 'MYTAB') and
  6         segment_type = 'INDEX')
  7   group by segment_type;

SEGMENT_TYPE                           COUNT(*)         GB
------------------------------------ ---------- ----------
TABLE                                         1 18.7900391
INDEX                                        27 103.360352

Elapsed: 00:00:00.09
SQL>
SQL>

Quote:
Any TRIGGER on this table? If so, what exactly?


No DELETE triggers on this table:

SQL> set lines 900 pages 20000
SQL> col trigger_type for a30
SQL> col triggering_event for a30
SQL> col status for a15
SQL> select trigger_type, triggering_event, status from dba_triggers where table_name = 'MYTAB';

TRIGGER_TYPE                   TRIGGERING_EVENT               STATUS
------------------------------ ------------------------------ ---------------
BEFORE EACH ROW                INSERT OR UPDATE               DISABLED
BEFORE EACH ROW                INSERT OR UPDATE               ENABLED
BEFORE EACH ROW                INSERT OR UPDATE               ENABLED

Elapsed: 00:00:00.06



Quote:
Does DELETE behavior change when no PARALLEL is involved with removing rows?
To answer this I need to run it with no PARALLEL involved and see whether anything changes in how the records are located and accessed.
And, in the end, check whether it still exceeds 48 hours...
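Something like this is what I have in mind for the serial attempt (session-level settings only; the actual DELETE statement stays as it is):

-- make sure neither parallel DML nor parallel query kicks in for this session
alter session disable parallel dml;
alter session disable parallel query;

-- alternatively, force it per statement with a hint, e.g.
-- delete /*+ no_parallel(t) */ from MYTAB t where ...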
Re: Optimal Chunk size for update of 50M record table [message #673056 is a reply to message #673055] Sat, 03 November 2018 19:33
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
my head hurts.
> - Large table - 50M records Of which I need to update 40M records.
UPDATE or DELETE?

SEGMENT_TYPE                           COUNT(*)         GB
------------------------------------ ---------- ----------
TABLE                                         1 18.7900391
INDEX                                        27 103.360352

For every table row that gets DELETED, 27 INDEXES need to be modified
It might be faster to DROP all INDEXES prior to DELETE start to avoid all that overhead & then rebuild INDEXES after DELETE completes
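In rough outline it could look like this (MYTAB as used earlier in the thread; the drop statements would be generated per index):

-- 1) capture the index definitions before dropping anything
set long 100000 pagesize 0
select dbms_metadata.get_ddl('INDEX', index_name)
  from user_indexes
 where table_name = 'MYTAB';

-- 2) drop the indexes, one DROP INDEX per name returned by the query above

-- 3) run the big DELETE and commit

-- 4) re-run the captured CREATE INDEX statements (optionally PARALLEL / NOLOGGING)
--    and gather statistics afterwards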
Re: Optimal Chunk size for update of 50M record table [message #673057 is a reply to message #673056] Sat, 03 November 2018 19:48
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
BlackSwan wrote on Sun, 04 November 2018 02:33
my head hurts.
> - Large table - 50M records Of which I need to update 40M records.
UPDATE or DELETE?

SEGMENT_TYPE                           COUNT(*)         GB
------------------------------------ ---------- ----------
TABLE                                         1 18.7900391
INDEX                                        27 103.360352

For every table row that gets DELETED, 27 INDEXES need to be modified
It might be faster to DROP all INDEXES prior to DELETE start to avoid all that overhead & then rebuild INDEXES after DELETE completes
Sorry for your headache, but why are you quoting things from a month ago? We already discussed that the original post lacked details, and I offered to rewrite it because you would not let go until *all* the details were correct; since I cannot go back and unmake the inaccuracies I made, I proposed to write a new post.

In reply you asked me to phrase a distinct problem I want to solve, so I did, and posted all the examples in the world.

Now you are asking for information I already supplied, and when I post it as asked you say your head hurts and again quote something from a month back.
You seem unsatisfied with the information I gave a week ago, when we *started over with all the relevant details explained* and nobody commented on them (including the 27 indexes),
and you are proposing a solution that has an impact on production, which I already explained is not an option.

Now, it's OK, you don't owe me anything, and of course I am grateful to anyone trying to help and address my questions on the internet.
But constantly opposing, not giving the same attention you demand from me, and in general giving me a hard time for nothing - I don't know what I did to deserve this.

I already explained that I am deleting rows from an 83M-row table, the deletion is 36M rows, and there are 27 indexes...

Making my problem a nail because you have a hammer at hand and a headache is not the way to go...



Re: Optimal Chunk size for update of 50M record table [message #673059 is a reply to message #673056] Sun, 04 November 2018 02:07
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
BlackSwan wrote on Sun, 04 November 2018 00:33
my head hurts.
> - Large table - 50M records Of which I need to update 40M records.
UPDATE or DELETE?

SEGMENT_TYPE                           COUNT(*)         GB
------------------------------------ ---------- ----------
TABLE                                         1 18.7900391
INDEX                                        27 103.360352

For every table row that gets DELETED, 27 INDEXES need to be modified
It might be faster to DROP all INDEXES prior to DELETE start to avoid all that overhead & then rebuild INDEXES after DELETE completes
27 indexes?? Holy smoke! I missed that before. That is why you have all those waits on sequential read. If you do a little test such as this:
-- run 1: delete with no index on the table
conn scott/tiger
drop table t1;
create table t1 as select * from all_objects;
conn scott/tiger
delete from t1;
-- wait events for the current session only
select event,total_waits,time_waited from v$session_event where sid=(select max(sid) from v$mystat);
-- run 2: the same delete, but now with an index to maintain
drop table t1;
create table t1 as select * from all_objects;
create index i1 on t1(object_name);
conn scott/tiger
delete from t1;
select event,total_waits,time_waited from v$session_event where sid=(select max(sid) from v$mystat);
you will see how index maintenance skews the waits from scattered reads to sequential reads. So there may be nothing you can do about it apart from throwing money at the problem: more buffer cache, more parallelism, better-performing storage.
Before doing that, I would check the IO. It is no good asking your sysadmins; they will always (always, always, always) say that everything is fine, without checking properly. It could be that the disc layer is indeed fine but the fibre switch (or whatever it is) ain't configured properly. Or that async IO ain't working correctly. Any number of things.
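For a first look from the database side, something like this (the interpretation is only a rule of thumb, not a figure from this thread):

-- average single-block read time: a few ms is reasonable for spinning disk,
-- tens of ms would point at the IO path rather than the database
select event,
       total_waits,
       round(time_waited_micro / 1e6) as seconds_waited,
       round(time_waited_micro / nullif(total_waits, 0) / 1000, 2) as avg_ms
  from v$system_event
 where event in ('db file sequential read', 'db file scattered read', 'direct path read');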



Re: Optimal Chunk size for update of 50M record table [message #673105 is a reply to message #673059] Tue, 06 November 2018 03:10
Andrey_R
Messages: 441
Registered: January 2012
Location: Israel
Senior Member
Thank you John,
And thanks cookiemonster, BlackSwan