Adding a column to a table with huge data [message #665559] Mon, 11 September 2017 03:03
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member
Hi All,
We have a requirement to alter a table which contains huge data, nearly 25 million rows.
It is range partitioned on a date-based column and has 60 partitions.
We want to add some extra columns to this table. Some of the columns have default values and some don't.
And I have more columns to add.

Obviously it will take more time. What is the efficient way of doing this?

Do I need to disable all the indexes, or gather table statistics?

I know that when we have bulk insertions into a table, disabling the indexes will help.
When we select data with indexed columns in the WHERE clause, gathering statistics will help.

But in this case:

Will a PARALLEL hint help?
or
Will a multi-column ALTER help?
or
Will table redefinition help?
or
Is there any other, better solution?

Please help me resolve this problem.

Details: Table row count: 25 million rows
Number of partitions: 60
Partition type: range partition on an entry-date column
Oracle version: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

Thanks
Revathi.T
Re: Adding a column to a table with huge data [message #665565 is a reply to message #665559] Mon, 11 September 2017 05:22
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
Welcome to the forum. Please read our OraFAQ Forum Guide and How to use [code] tags, and make your code easier to read.

You have thrown a lot of questions into one topic, which makes it difficult to answer. I'll deal with just the first one:

Quote:
We want to add some extra columns to this table. Some of the columns have default values and some don't.
And I have more columns to add.

Obviously it will take more time. What is the efficient way of doing this?
Adding a column, with or without a default value, is a DDL operation and takes no time. The fact that your table has 25m rows is not relevant.

Before considering your other questions, I would ask why you want to partition such a small table. You will probably find that partitioning it into 60 partitions will tend to suppress index usage, and you'll get a lot of scans. Probably direct. Is this really what you want?

By the way, I wish you wouldn't say "record" when you mean "row".
Re: Adding a column to a table with huge data [message #665569 is a reply to message #665565] Mon, 11 September 2017 05:49
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member
Hi John,

Sorry for creating confusion with multiple questions, and thanks for answering them patiently.

That table is already partitioned in production. We get nearly 800,000 to 12L rows on a daily basis.
We have a weekly partition-movement job which moves the older data into history tables.

Some tables are partitioned in order to support this partition movement. Now we need to add some extra columns to those tables.

Please help me find an efficient solution.

Thanks
Revathi.T
Re: Adding a column to a table with huge data [message #665575 is a reply to message #665569] Mon, 11 September 2017 06:43
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
Quote:
Please help me find an efficient solution.
I've already told you. Just add the columns. Adding columns is near instantaneous.

What does "12L" mean?
Re: Adding a column to a table with huge data [message #665576 is a reply to message #665575] Mon, 11 September 2017 06:57
Solomon Yakobson
Messages: 3269
Registered: January 2010
Location: Connecticut, USA
Senior Member
John, the OP mentioned that some of the columns to be added have default values, which means Oracle will issue an update of existing rows to set the new column value to the default, unless the column is defined as NOT NULL and the OP is on a newer version (I don't recall in which version Oracle made that enhancement).

SY.
Re: Adding a column to a table with huge data [message #665579 is a reply to message #665576] Mon, 11 September 2017 07:20
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
No, it is just a data dictionary change. The default value isn't written to existing rows. I forget in which release that started. Certainly by 11.
Re: Adding a column to a table with huge data [message #665580 is a reply to message #665579] Mon, 11 September 2017 07:32
Solomon Yakobson
Messages: 3269
Registered: January 2010
Location: Connecticut, USA
Senior Member
Right, that's why I said "newer" version. So the OP needs to clarify the version.

SY.
Re: Adding a column to a table with huge data [message #665581 is a reply to message #665580] Mon, 11 September 2017 07:33
Solomon Yakobson
Messages: 3269
Registered: January 2010
Location: Connecticut, USA
Senior Member
Oops, the version is there: 11.2.0.4.0, so certainly it will take no time.

SY.
Re: Adding a column to a table with huge data [message #665582 is a reply to message #665581] Mon, 11 September 2017 07:59
Solomon Yakobson
Messages: 3269
Registered: January 2010
Location: Connecticut, USA
Senior Member
Oops, I got mixed up again. As I originally stated, "unless the column is defined as NOT NULL and the OP is on a newer version (I don't recall in which version Oracle made that enhancement)". So to clarify: initially (and I don't recall the version) Oracle made an enhancement so that the default value isn't written to existing rows ONLY IF THE COLUMN IS NOT NULL:

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL> create table tbl as select level col1 from dual connect by level <= 1000000;

Table created.

SQL> set timing on
SQL> alter table tbl add col2 varchar2(10) default '0123456789' not null;

Table altered.

Elapsed: 00:00:00.41
SQL> alter table tbl add col3 varchar2(10) default '0123456789';

Table altered.

Elapsed: 00:00:57.83
SQL> 

In 12c Oracle made a further enhancement, and the default value isn't written to existing rows regardless of NULL/NOT NULL:

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE    12.1.0.2.0      Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL> create table tbl as select level col1 from dual connect by level <= 1000000;

Table created.

SQL> set timing on
SQL> alter table tbl add col2 varchar2(10) default '0123456789' not null;

Table altered.

Elapsed: 00:00:00.13
SQL> alter table tbl add col3 varchar2(10) default '0123456789';

Table altered.

Elapsed: 00:00:00.05
SQL> 

So unless the columns with default values the OP is adding are NOT NULL, adding them will take time, but 25 million rows shouldn't be too bad.

SY.
Re: Adding a column to a table with huge data [message #665583 is a reply to message #665582] Mon, 11 September 2017 08:29
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member


Thank you very much for your interest in solving my problem.
So in the end it is just a simple ALTER!

Is it good to add multiple columns in a single SQL statement?
And what if a newly added column needs the default value for existing rows but should be nullable for new rows?


Thanks
Revathi.T
Re: Adding a column to a table with huge data [message #665585 is a reply to message #665583] Mon, 11 September 2017 08:42
cookiemonster
Messages: 13917
Registered: September 2008
Location: Rainy Manchester
Senior Member
Just add them in one statement and be done with it.
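For reference, a minimal sketch of a single multi-column ALTER; the table and column names here are hypothetical:

ALTER TABLE MY_TABLE ADD (
  STATUS_FLAG  VARCHAR2(1)  DEFAULT 'N' NOT NULL,   -- NOT NULL default: dictionary-only in 11.2
  ENTRY_NOTES  VARCHAR2(100),                       -- nullable, no default: dictionary-only
  LOAD_DATE    DATE         DEFAULT SYSDATE NOT NULL
);

One statement adds all three columns. On 11.2 the two NOT NULL columns with defaults should be dictionary-only changes, and the plain nullable column with no default is metadata-only as well; only a nullable column WITH a default would force an update of existing rows.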
Re: Adding a column to a table with huge data [message #665586 is a reply to message #665585] Mon, 11 September 2017 08:55
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member
Thanks
Re: Adding a column to a table with huge data [message #665592 is a reply to message #665585] Tue, 12 September 2017 02:29
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member
Hi Solomon/John,

Thanks for your solutions. When we have just huge data without partitions, it works fine.
But my table is range partitioned on a DATE-based column.

Total number of partitions = 60
Row count = 26 million rows

Index information:

IDX1 - a GLOBAL index, PARTITION BY HASH, on some of the columns:
branch, contract number, contract version, action no, action code, action date
IDX2 - a LOCAL partitioned index on some of the columns:
action code, action date
IDX3 - a LOCAL partitioned index on the contract number column.


As of now it takes 5 minutes to add one column with a default value to this table.
Please let us know the efficient way of adding columns to a partitioned table.
Is there any way to reduce the time to add the column?


Thanks
Revathi .T
Re: Adding a column to a table with huge data [message #665593 is a reply to message #665592] Tue, 12 September 2017 02:50
cookiemonster
Messages: 13917
Registered: September 2008
Location: Rainy Manchester
Senior Member
Is 5 minutes really a big deal for a one-time operation?
Re: Adding a column to a table with huge data [message #665594 is a reply to message #665593] Tue, 12 September 2017 02:52
cookiemonster
Messages: 13917
Registered: September 2008
Location: Rainy Manchester
Senior Member
And 26 million is not huge. There are systems out there where that amount is relatively small.
Re: Adding a column to a table with huge data [message #665595 is a reply to message #665594] Tue, 12 September 2017 03:16
Roachcoach
Messages: 1576
Registered: May 2010
Location: UK
Senior Member
cookiemonster wrote on Tue, 12 September 2017 08:52
And 26 million is not huge. There are systems out there where that amount is relatively small.
Oh yes.

 1* select * from (select num_rows from dba_tables where owner = 'APPOWNER' order by 1 desc nulls last) where rownum <=5
09:15:18 SQL> /

  NUM_ROWS
----------
9485554850
7494313980
5960284480
5958877320
4367538830

Re: Adding a column to a table with huge data [message #665597 is a reply to message #665595] Tue, 12 September 2017 03:41
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member
Hi

If we were adding a single column to a single table, executing in 5 minutes would not be a big deal.

But we have many such partitioned tables (25), and in each table we need to add nearly 5 columns.
The time to execute all those scripts would then be 25*5*5 = 625 minutes (nearly 10 hours).

That's why I'm asking for help to find an efficient way, such as a multi-column ALTER, a PARALLEL hint, or DBMS_REDEFINITION.

Thanks
Revathi.T


Re: Adding a column to a table with huge data [message #665598 is a reply to message #665597] Tue, 12 September 2017 03:50
Roachcoach
Messages: 1576
Registered: May 2010
Location: UK
Senior Member
If downtime is a concern, you'll want to use redefinition, because it'll take the downtime to zero. Just check for replication impacts if you're using any of those technologies.

With parallel, you'll want to check your resource use; it may not play well with others.

A multi-column add you'll want to test; it's not something we do a lot here. A sketch of the redefinition approach follows.
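A minimal sketch of online redefinition for one of these tables. Everything here is hypothetical: the schema REVATHI, the table MY_PART_TABLE, and an interim table MY_PART_TABLE_INT that you would pre-create with the same partitioning plus the new columns. It also assumes the table has a primary key (otherwise use CONS_USE_ROWID):

BEGIN
  -- verify the table can be redefined using its primary key
  DBMS_REDEFINITION.CAN_REDEF_TABLE('REVATHI', 'MY_PART_TABLE',
                                    DBMS_REDEFINITION.CONS_USE_PK);
END;
/

BEGIN
  -- start copying data into the interim table (new columns get their defaults)
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname      => 'REVATHI',
    orig_table => 'MY_PART_TABLE',
    int_table  => 'MY_PART_TABLE_INT');
END;
/

DECLARE
  v_errors PLS_INTEGER;
BEGIN
  -- clone indexes, triggers, constraints and grants onto the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname      => 'REVATHI',
    orig_table => 'MY_PART_TABLE',
    int_table  => 'MY_PART_TABLE_INT',
    num_errors => v_errors);
END;
/

BEGIN
  -- brief lock while the two tables swap identities
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('REVATHI', 'MY_PART_TABLE', 'MY_PART_TABLE_INT');
END;
/

The table stays available for DML throughout, which is why redefinition is chosen to avoid downtime rather than to save elapsed time; the data copy itself still takes as long as it takes.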

Re: Adding a column to a table with huge data [message #665599 is a reply to message #665598] Tue, 12 September 2017 04:28
cookiemonster
Messages: 13917
Registered: September 2008
Location: Rainy Manchester
Senior Member
Adding 5 columns isn't going to take 5 times the time of one column.
Also you could alter multiple tables at the same time - you'll need a separate DB session for each.
Re: Adding a column to a table with huge data [message #665600 is a reply to message #665599] Tue, 12 September 2017 05:04
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member

Hi cookiemonster,

If we use a multi-column ALTER with a PARALLEL hint, will it open that many sessions to add those columns?
Would that scenario help us reduce the time?
Please let me know if I am wrong.

I could try all these scenarios too, but the problem here is that I don't have a proper environment.
Before writing the solution, I am trying to think through the ideal one.

Please help me decide on the best solution.

Thanks
Revathi.T
Re: Adding a column to a table with huge data [message #665601 is a reply to message #665600] Tue, 12 September 2017 05:06
Roachcoach
Messages: 1576
Registered: May 2010
Location: UK
Senior Member
As always the solution depends on what problem you are solving.

Is it downtime?

Is it something else?


Something "taking time" is not what people complain about, unless it affects them somehow. So what's the actual problem or issue you're trying to avoid?

Re: Adding a column to a table with huge data [message #665602 is a reply to message #665601] Tue, 12 September 2017 05:11
cookiemonster
Messages: 13917
Registered: September 2008
Location: Rainy Manchester
Senior Member
Parallel doesn't actually open extra sessions, and I don't know what effect it'll have on an ALTER TABLE. It may be able to divide up the updates due to the defaults, but you would have to test it.

What I'm suggesting is that you start a sqlplus session and issue one ALTER TABLE. Then, while that's still running, start a new sqlplus session and issue the ALTER for the 2nd table, and so on. Parallel won't let you do that, and it seems the main problem for time is that you have 25 tables to alter.
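If opening 25 terminals by hand is impractical, the same effect can be scripted with DBMS_SCHEDULER, which runs each ALTER in its own background job (and therefore its own session). A minimal sketch; the table and column names are hypothetical, and it assumes the job owner holds the ALTER privileges directly:

BEGIN
  FOR t IN (SELECT column_value AS tname
              FROM TABLE(sys.odcivarchar2list('TABLE_A', 'TABLE_B', 'TABLE_C'))) LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'ADD_COLS_' || t.tname,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN EXECUTE IMMEDIATE ''ALTER TABLE ' || t.tname ||
                    ' ADD (NEW_COL NUMBER DEFAULT 0 NOT NULL)''; END;',
      enabled    => TRUE);  -- starts immediately; the jobs run concurrently
  END LOOP;
END;
/

Afterwards, check DBA_SCHEDULER_JOB_RUN_DETAILS for any jobs that failed.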
Re: Adding a column to a table with huge data [message #665604 is a reply to message #665597] Tue, 12 September 2017 05:54
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
Have you actually read any of the answers you have received? To repeat, for I think the fourth time: if you add your column with a default as NOT NULL, it will take no time at all:
orclx>
orclx> set timing on
orclx> select count(*) from t1;

  COUNT(*)
----------
   1166704

Elapsed: 00:00:00.08
orclx> alter table t1 add (newcol varchar2(10) default 'idiot' not null);

Table altered.

Elapsed: 00:00:00.01
orclx>

Furthermore, you are still saying "record" when you mean "row". Please stop doing that; it is very irritating.
Re: Adding a column to a table with huge data [message #665605 is a reply to message #665602] Tue, 12 September 2017 06:19
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member
Hi cookiemonster

Thanks for your valuable advice. I will try to follow what you suggested.

Thanks
Revathi.T
Re: Adding a column to a table with huge data [message #665607 is a reply to message #665592] Tue, 12 September 2017 06:56
Solomon Yakobson
Messages: 3269
Registered: January 2010
Location: Connecticut, USA
Senior Member
You are adding new columns, right? They are obviously not part of any index, so what does this have to do with indexes? Having partitions can make adding columns faster, since populating the new columns with the default value in existing rows can be done in parallel. And again, 25 million is not a large number of rows.

SY.
Re: Adding a column to a table with huge data [message #665609 is a reply to message #665604] Tue, 12 September 2017 07:34
cookiemonster
Messages: 13917
Registered: September 2008
Location: Rainy Manchester
Senior Member
John - it's possible they need the columns to be nullable even with the default.
Re: Adding a column to a table with huge data [message #665611 is a reply to message #665609] Tue, 12 September 2017 08:10
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member
Hi SY,

It's not only adding columns: even if we drop one of the columns from those partitioned tables,
it takes 5 minutes to execute the statement.







Thanks
Revathi.T
Re: Adding a column to a table with huge data [message #665613 is a reply to message #665611] Tue, 12 September 2017 08:58
cookiemonster
Messages: 13917
Registered: September 2008
Location: Rainy Manchester
Senior Member
Not surprising - drop column isn't just a dictionary change - it's got to remove the existing data from disk.
Re: Adding a column to a table with huge data [message #665614 is a reply to message #665613] Tue, 12 September 2017 09:02
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
IMO, DDL against Production tables should only be done during scheduled maintenance windows.
Re: Adding a column to a table with huge data [message #665616 is a reply to message #665614] Tue, 12 September 2017 09:49
cookiemonster
Messages: 13917
Registered: September 2008
Location: Rainy Manchester
Senior Member
I suspect the issue is that the OP is worried that a 10-hour window will be hard to find.
Re: Adding a column to a table with huge data [message #665618 is a reply to message #665611] Tue, 12 September 2017 10:48
Solomon Yakobson
Messages: 3269
Registered: January 2010
Location: Connecticut, USA
Senior Member
Set the columns unused first, and then drop them with checkpoints.

SY.
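A minimal sketch of that approach (hypothetical table and column names). SET UNUSED is a dictionary-only change; the CHECKPOINT clause makes the later physical drop commit periodically, so it doesn't need one huge undo transaction:

ALTER TABLE MY_TABLE SET UNUSED (OLD_COL1, OLD_COL2);

-- later, during a quiet period; checkpoints every 10000 rows
ALTER TABLE MY_TABLE DROP UNUSED COLUMNS CHECKPOINT 10000;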
Re: Adding a column to a table with huge data [message #665658 is a reply to message #665618] Thu, 14 September 2017 05:18
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member
Hi SY/cookiemonster,

As you suggested, dropping the columns with the help of UNUSED columns is giving effective results.
The time to drop all the columns is reduced.

ALTER TABLE HIST_CIFCONF_LOCATIONPARAM SET UNUSED (BRANCH, UPPRESS);

ALTER TABLE HIST_TABLE DROP UNUSED COLUMNS;


Thanks a lot for your help.

But for adding the COLUMN: when applying a DOP to a table, based on what parameters should we decide?
What should the ideal DOP count be?
Shall I keep the default DOP?


Thanks
Revathi.T
Re: Adding a column to a table with huge data [message #665662 is a reply to message #665658] Thu, 14 September 2017 07:28
Bill B
Messages: 1971
Registered: December 2004
Senior Member
It's actually very easy to do what you want, and the new feature was made reliable in version 11.2.0.4.0, so you are all set.

ALTER TABLE MY_TABLE ADD (TEST_COL NUMBER(30) DEFAULT ON NULL 36 NOT NULL);

The secret is the "ON NULL" option. This adds a new NOT NULL column with a default, but stores the default value in the data dictionary rather than in each row. So when you select from the table, the new column will return (in the example above) 36 even though the value is not actually stored in the row. The following rules apply:

1) If you update the column to NULL, the default value is stored in the column.
2) If you insert a new row and don't specify the column, the default value is stored in the row.
3) If you specify the column but supply NULL as the value, the default value is stored.
4) If you specify any other value, that value is stored in the row.

The beauty of this ALTER is that it takes less than a second even on gigantic tables.
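A minimal sketch of those rules in action, using a hypothetical table (note that, as the follow-up below establishes, DEFAULT ON NULL requires 12c):

CREATE TABLE demo_t (id NUMBER);
ALTER TABLE demo_t ADD (TEST_COL NUMBER(30) DEFAULT ON NULL 36 NOT NULL);

INSERT INTO demo_t (id) VALUES (1);                 -- column omitted: TEST_COL = 36
INSERT INTO demo_t (id, test_col) VALUES (2, NULL); -- explicit NULL: TEST_COL = 36
INSERT INTO demo_t (id, test_col) VALUES (3, 99);   -- explicit value: TEST_COL = 99

SELECT id, test_col FROM demo_t;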

Re: Adding a column to a table with huge data [message #665664 is a reply to message #665662] Thu, 14 September 2017 08:26
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member
Hi Bill B,

Thanks for your help. But which version supports this feature?

We are using the following version:


Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
"CORE	11.2.0.4.0	Production"
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production


But I am getting the error below:

[Attached screenshot: Error.PNG]

Thanks
Revathi T

Re: Adding a column to a table with huge data [message #665665 is a reply to message #665664] Thu, 14 September 2017 08:40
Bill B
Messages: 1971
Registered: December 2004
Senior Member
Sorry,
I think I gave you the 12c version; try the following:

ALTER TABLE MY_TABLE1 ADD (TEST_COL NUMBER(30) DEFAULT 36 NOT NULL);

In 11g this is supposed to work very fast when NOT NULL is specified, because Oracle just stores the default value in the data dictionary; however, it only does that if NOT NULL is specified for the column.
Re: Adding a column to a table with huge data [message #665666 is a reply to message #665665] Thu, 14 September 2017 09:21
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member

Yes, Bill B.

It takes very little time when I add a default value with NOT NULL.
But what if I want to add a nullable column with a DEFAULT value?
Do I need to run one more ALTER?
Wouldn't that take much time?

Thanks
Revathi.T
Re: Adding a column to a table with huge data [message #665670 is a reply to message #665666] Thu, 14 September 2017 10:26
cookiemonster
Messages: 13917
Registered: September 2008
Location: Rainy Manchester
Senior Member
If the column needs to be nullable (and are you sure that's necessary?) then there's no point making it NOT NULL and changing it back.
Oracle skips updating the rows only if you make the column NOT NULL; if you make it nullable afterwards, it will have to do the update then.
Re: Adding a column to a table with huge data [message #665677 is a reply to message #665670] Fri, 15 September 2017 01:57
Revathi.orafaq
Messages: 21
Registered: September 2017
Junior Member
Hi Bill B/cookiemonster,

Thanks for your solution. I see a lot of improvement in performance when the DEFAULT value is added with NOT NULL.

However, our actual requirement is that existing rows need the default value, while new rows don't require any default.

That's why, after adding the column with the DEFAULT value, we revert the default back to NULL:

ALTER TABLE MY_TABLE1  ADD (TEST_COL  NUMBER(30) DEFAULT 36 NOT NULL);
ALTER TABLE MY_TABLE1 MODIFY TEST_COL DEFAULT NULL;


But as you suggested, adding NOT NULL while adding the column gives fast results.

Thanks
Revathi.T
Re: Adding a column to a table with huge data [message #665688 is a reply to message #665677] Fri, 15 September 2017 05:19
Solomon Yakobson
Messages: 3269
Registered: January 2010
Location: Connecticut, USA
Senior Member
Revathi.orafaq wrote on Fri, 15 September 2017 02:57

That's why, after adding the column with the DEFAULT value, we revert the default back to NULL:

ALTER TABLE MY_TABLE1  ADD (TEST_COL  NUMBER(30) DEFAULT 36 NOT NULL);
ALTER TABLE MY_TABLE1 MODIFY TEST_COL DEFAULT NULL;
Do you realize that you just changed the DEFAULT value from 36 to NULL, while what you intended was to make the column NULLable:

SQL> alter table tbl ADD (TEST_COL  NUMBER(30) DEFAULT 36 NOT NULL);

Table altered.

SQL> ALTER TABLE TBL MODIFY TEST_COL DEFAULT NULL;

Table altered.

SQL> insert into tbl(col1,col2) values(-1,'X');
insert into tbl(col1,col2) values(-1,'X')
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("SCOTT"."TBL"."TEST_COL")


SQL> 

And as soon as you do that, unless you are on 12c, Oracle will again update each row with the default value of 36:

SQL> ALTER TABLE TBL MODIFY TEST_COL DEFAULT 36;

Table altered.

SQL> set timing on
SQL> ALTER TABLE TBL MODIFY TEST_COL NULL;

Table altered.

Elapsed: 00:01:50.86
SQL> 

SY.
Re: Adding a column to a table with huge data [message #665689 is a reply to message #665688] Fri, 15 September 2017 05:35
Bill B
Messages: 1971
Registered: December 2004
Senior Member
Unless there is an absolute requirement to store a NULL value, set the default to a value that indicates "unset". For example, if the legal values are positive integers, set the default to -1. You get a very fast column addition, with the ability to find the unset values by checking for -1.
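A minimal sketch of that idea (hypothetical table and column). The NOT NULL default makes the ALTER a dictionary-only change on 11.2, and -1 stands in for "unset":

ALTER TABLE MY_TABLE ADD (RETRY_COUNT NUMBER DEFAULT -1 NOT NULL);

-- existing rows all report -1, i.e. "never set"
SELECT COUNT(*) FROM MY_TABLE WHERE RETRY_COUNT = -1;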