Hemant K Chitale

I am an Oracle Database Specialist in Singapore.
get an rss feed of this blog at http://hemantoracledba.blogspot.com/feeds/posts/default?alt=rss
follow me on twitter : @HemantKChitale

Services -- 4 : Using the SERVICE_NAMES parameter (non-RAC, PDB)

Mon, 2016-06-27 09:55
In my previous blog post, I demonstrated using DBMS_SERVICE to create and start services.

But there is another way : using the SERVICE_NAMES parameter.  Going by most Google references, SERVICE_NAMES seems to be more popular than DBMS_SERVICE.

Can it be used in a Pluggable Database ?  Let's try.

[oracle@ora12102 Desktop]$ . oraenv
ORACLE_SID = [oracle] ? CDB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12102 Desktop]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 27 22:39:59 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 973082144 bytes
Database Buffers 654311424 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:43:36

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=14129))
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


SQL> connect system/oracle@PDB1
Connected.
SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> alter system set service_names='NEW_SVC_1,NEW_SVC_2';
alter system set service_names='NEW_SVC_1,NEW_SVC_2'
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database


SQL> !oerr ora 65040
65040, 00000, "operation not allowed from within a pluggable database"
// *Cause: An operation was attempted that can only be performed in the root
// container.
// *Action: Switch to the root container to perform the operation.
//

SQL>


So, apparently, ORA-65040 means that we can't define SERVICE_NAMES for a PDB.
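
Setting it has to be done from the root.  Below, I reconnect as SYS ; as a sketch (not from the original session), a suitably privileged common user could instead switch containers in place :

-- sketch : assumes the current common user holds the SET CONTAINER privilege
alter session set container = CDB$ROOT;
show con_name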

SQL> connect sys/oracle as sysdba
Connected.
SQL> show con_id

CON_ID
------------------------------
1
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> alter system set service_names='NEW_SVC_1,NEW_SVC_2';

System altered.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:53:56

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=14129))
Service "NEW_SVC_1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_SVC_2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ tnsping NEW_SVC_1

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:54:57

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_SVC_1)))
OK (30 msec)
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@new_svc_1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 27 22:55:04 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Enter user-name: system/oracle@new_svc_1
Last Successful login time: Mon Jun 27 2016 22:44:13 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
1
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL>


The new Services are actually running in the ROOT, not in the PDB.  This is unlike my earlier blog post, where the services created with DBMS_SERVICE connected me to the PDB.

Therefore, for a PDB environment, I'd use DBMS_SERVICE as I've demonstrated here and here.
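
For reference, a minimal sketch of that approach (the service name below is illustrative), executed while connected to the PDB :

-- run while connected to the PDB (e.g. system/oracle@PDB1) ; MY_PDB_SVC is a hypothetical name
exec dbms_service.create_service(service_name=>'MY_PDB_SVC', network_name=>'MY_PDB_SVC');
exec dbms_service.start_service('MY_PDB_SVC');
-- the service registers with the listener and connections using it land in the PDB
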
.
.
.



Categories: DBA Blogs

Services -- 3 : Monitoring Usage of Custom Services

Mon, 2016-06-20 10:04
In my previous blog post, I had demonstrated a few custom services created and started with DBMS_SERVICE.

Let's look at a couple of examples of monitoring usage of these services.

[oracle@ora12102 Desktop]$ sqlplus system/oracle@PDB1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:51:08 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:23:50 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> execute dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> execute dbms_service.start_service('FINANCE');

PL/SQL procedure successfully completed.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant select_Catalog_role to hemant;

Grant succeeded.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:52:27 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:28:01 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t1 tablespace hemant as select * from dba_objects;

Table created.

SQL> insert into obj_t1 select * from obj_t1;

90935 rows created.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@FINANCE

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:53:54 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Jun 20 2016 22:52:27 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t2_small tablespace hemant as select * from obj_T1 where rownum < 11;

Table created.

SQL>

SQL> show user
USER is "SYSTEM"
SQL> select sid,serial#, to_char(logon_time,'DD-MON HH24:MI:SS'), service_name
2 from v$session
3 where username = 'HEMANT'
4 order by logon_time
5 /

SID SERIAL# TO_CHAR(LOGON_TIME,'DD-M
---------- ---------- ------------------------
SERVICE_NAME
----------------------------------------------------------------
61 50587 20-JUN 22:52:27
NEW_APP1

76 43919 20-JUN 22:53:54
FINANCE


SQL>


Thus, we can see that V$SESSION tracks the SERVICE_NAME in use --- even though the USERNAME is the same in both sessions, the SERVICE_NAME is different.
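
This also makes it easy to see how many sessions each service is carrying.  A simple aggregation (a sketch, not from the original session) :

select service_name, count(*) session_count
from v$session
where username is not null
group by service_name
order by service_name;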

SQL> col svc_name format a10
SQL> col stat_name format a25 trunc
SQL> select
2 con_id, service_name SVC_NAME, stat_name, value
3 from v$service_stats
4 where service_name in ('NEW_APP1','FINANCE')
5 and
6 (stat_name like 'DB%' or stat_name like '%block%' or stat_name like 'redo%')
7 order by 1,2,3
8 /

CON_ID SVC_NAME STAT_NAME VALUE
---------- ---------- ------------------------- ----------
3 FINANCE DB CPU 168973
3 FINANCE DB time 771742
3 FINANCE db block changes 653
3 FINANCE gc cr block receive time 0
3 FINANCE gc cr blocks received 0
3 FINANCE gc current block receive 0
3 FINANCE gc current blocks receive 0
3 FINANCE redo size 100484

CON_ID SVC_NAME STAT_NAME VALUE
---------- ---------- ------------------------- ----------
3 NEW_APP1 DB CPU 869867
3 NEW_APP1 DB time 17415363
3 NEW_APP1 db block changes 11101
3 NEW_APP1 gc cr block receive time 0
3 NEW_APP1 gc cr blocks received 0
3 NEW_APP1 gc current block receive 0
3 NEW_APP1 gc current blocks receive 0
3 NEW_APP1 redo size 25057520

16 rows selected.

SQL>


So, even some statistics (unfortunately, not all -- the last time I checked, in 11.2) are reported at the Service level.  Thus, I can see that the users of NEW_APP1 consumed more CPU and DB Time and generated more block changes and redo than the users of FINANCE !  (Obviously, V$SERVICE_STATS reports cumulative statistics from the start of the instance, so you should use either StatsPack (I haven't verified StatsPack reporting of statistics by individual service), or AWR (if you have the Diagnostic Pack licence), or your own collection scripts to report statistics for a specific window of time.)
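
A very simple form of such a collection script (a sketch of the "own scripts" approach -- the snapshot table name is illustrative) is to snapshot V$SERVICE_STATS at the start of the window and report the deltas at the end :

-- hypothetical snapshot table, created at the start of the observation window
create table svc_stats_snap as
select sysdate snap_time, con_id, service_name, stat_name, value
from v$service_stats;

-- ... at the end of the window, report only the statistics that have changed
select c.service_name, c.stat_name, c.value - s.value delta
from v$service_stats c, svc_stats_snap s
where c.con_id = s.con_id
and c.service_name = s.service_name
and c.stat_name = s.stat_name
and c.value - s.value > 0
order by c.service_name, c.stat_name;
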
.
.
.


Categories: DBA Blogs

Services -- 2 : Starting and Connecting to Services (non-RAC)

Thu, 2016-06-16 10:22
Continuing with the 12.1.0.2 non-RAC MultiTenant environment and the two services demonstrated earlier, I have restarted the environment today :

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 22:57:17

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$

[oracle@ora12102 Desktop]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 22:57:31 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 1040191008 bytes
Database Buffers 587202560 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database open;
alter pluggable database open
*
ERROR at line 1:
ORA-65000: missing or invalid pluggable database name


SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:00:11

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


The two manually created services (NEW_APP1 and NEW_APP2) don't start up automatically.

SQL> exec dbms_service.start_service('NEW_APP1');
BEGIN dbms_service.start_service('NEW_APP1'); END;

*
ERROR at line 1:
ORA-44773: Cannot perform requested service operation.
ORA-06512: at "SYS.DBMS_SERVICE_ERR", line 63
ORA-06512: at "SYS.DBMS_SERVICE", line 395
ORA-06512: at line 1


SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ sqlplus sys/oracle@PDB1 as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:03:05 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exec dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('NEW_APP2');

PL/SQL procedure successfully completed.

SQL>


[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:04:36

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


When I attempted to start the custom service (which, per the previous post, was created in PDB1) while still connected to CDB$ROOT, the command failed.  I had to connect to PDB1 to start the service.
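
To avoid having to do this manually after every restart, one option (a sketch, not from the original post) is a startup trigger created inside the PDB by a suitably privileged user ; a startup trigger defined in a PDB fires when that PDB is opened :

-- sketch : create while connected to PDB1 (requires the ADMINISTER DATABASE TRIGGER privilege)
create or replace trigger start_custom_services
after startup on database
begin
  dbms_service.start_service('NEW_APP1');
  dbms_service.start_service('NEW_APP2');
end;
/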

SQL> create tablespace hemant ;                           

Tablespace created.

SQL> create user hemant identified by hemant default tablespace hemant quota unlimited on hemant;

User created.

SQL> grant create session to hemant;

Grant succeeded.

SQL>
[oracle@ora12102 Desktop]$ tnsping NEW_APP1

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:09:00

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_APP1)))
OK (0 msec)
[oracle@ora12102 Desktop]$ tnsping NEW_APP2

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:09:05

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_APP2)))
OK (0 msec)
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP1

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:09:29 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP2

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:09:48 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:09:29 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show conn_name
SP2-0158: unknown SHOW option "conn_name"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL>


The newly created user HEMANT belongs to the Database.  The user is not tied to a Service.  He can use either Service (whichever is running) to connect to the Database.  But the two Services have two different TNSNAMES.ORA entries --- differing by the SERVICE_NAME specification.
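
For completeness, the corresponding TNSNAMES.ORA entries would look something like this (a sketch consistent with the connect descriptors shown by tnsping above) :

NEW_APP1 =
  (DESCRIPTION =
    (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = NEW_APP1))
  )

NEW_APP2 =
  (DESCRIPTION =
    (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = NEW_APP2))
  )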

Thus, the DBA could configure some application servers to use one service name and other application servers to use another service name.

An example of such a configuration is where the first set of application servers could be for Finance Applications called "FINANCE" and the second set of servers could be for HR Applications (in the *same* database) called "HR".  Here I create the two services but start only the FINANCE service.

[oracle@ora12102 Desktop]$ sqlplus system/oracle@pdb1 

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:13:55 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exec dbms_service.create_service('FINANCE','FINANCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('FINANCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.create_service('HR','HR');

PL/SQL procedure successfully completed.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:15:45

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "FINANCE" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


Now the user should be able to connect to FINANCE, but not to HR.

[oracle@ora12102 Desktop]$ tnsping FINANCE

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:18:16

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = FINANCE)))
OK (0 msec)
[oracle@ora12102 Desktop]$ tnsping HR

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:18:18

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = HR)))
OK (0 msec)
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@FINANCE

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:18:57 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:09:48 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@HR

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:19:23 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name:


The user could connect to FINANCE but the request to HR returned ORA-12514.
(Notice how "tnsping HR" is successful but the connection is not ?  That is because tnsping only tests whether the listener is running ; it does not test whether the database instance and service are both running.)

So, using multiple services, the DBA can "provision" the same database to multiple applications.  The entry-point is the SERVICE_NAME,  not the USERNAME.   Users and Services are independent of each other.

.
.
.
Categories: DBA Blogs

Services -- 1 : Services in non-RAC 12c MultiTenant

Tue, 2016-06-14 10:22
It is generally accepted that service definition is required in RAC environments.

However, the concept of Services was made available in 8i --- predating RAC.   Services can be defined in non-OPS / non-RAC / non-MultiTenant / MultiTenant environments.  A single PDB in a 12c MultiTenant database can host multiple services.

A quick start to the implementation.

Note : srvctl is to be used to create and manage services in a RAC environment.  srvctl registers the services with the Cluster Registry.  In a Non-RAC environment, use DBMS_SERVICE.
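
For comparison, the equivalent steps in a RAC environment would be along these lines (a sketch only, with illustrative database, service and instance names) :

# RAC only : register the service with the Cluster Registry, then start and check it
srvctl add service -d ORCL -s NEW_APP1 -r ORCL1,ORCL2
srvctl start service -d ORCL -s NEW_APP1
srvctl status service -d ORCL -s NEW_APP1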

First, no services are running :

[grid@ora12102 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-JUN-2016 23:14:48

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 14-JUN-2016 23:14:28
Uptime 0 days 0 hr. 0 min. 19 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/ora12102/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@ora12102 ~]$


Next, I startup the MultiTenant CDB database.

[oracle@ora12102 ~]$ . oraenv
ORACLE_SID = [CDB1] ? CDB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12102 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jun 14 23:15:47 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 1056968224 bytes
Database Buffers 570425344 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>


[grid@ora12102 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-JUN-2016 23:17:23

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 14-JUN-2016 23:14:28
Uptime 0 days 0 hr. 2 min. 54 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/ora12102/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=ora12102)(PORT=5501))(Security=(my_wallet_directory=/u01/app/oracle/admin/CDB1/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@ora12102 ~]$


So, now I have the default pdb1 service for pluggable database PDB1 running.

Next, I manually create two new services and start them.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 ~]$ sqlplus 'sys/oracle@PDB1 as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jun 14 23:20:30 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL>
SQL> exec dbms_service.create_service(service_name=>'NEW_APP1',network_name=>'NEW_APP1');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.create_service('NEW_APP2','NEW_APP2');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('NEW_APP2');

PL/SQL procedure successfully completed.

SQL>
[grid@ora12102 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-JUN-2016 23:22:54

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 14-JUN-2016 23:14:28
Uptime 0 days 0 hr. 8 min. 26 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/ora12102/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=ora12102)(PORT=5501))(Security=(my_wallet_directory=/u01/app/oracle/admin/CDB1/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@ora12102 ~]$



The two new services NEW_APP1 and NEW_APP2 that I created in PDB1 are now registered with the listener.  Remember that these services are in the Database Instance CDB1.

My next blog post will be about using these services.

.
.
.

Categories: DBA Blogs

Data Recovery Advisor (11g)

Mon, 2016-06-13 03:13
Here's my YouTube video on using the 11g Data Recovery Advisor from the RMAN command line.

Other videos are in my YouTube Channel.

.
.
.

Categories: DBA Blogs

Compression -- 8 : DROPping a Column of a Compressed Table

Thu, 2016-06-02 09:56
Building on the series on Compression .....

What happens if we try to DROP a column in a Compressed Table ?  How can we execute the DROP ?

Starting with BASIC Compression.


SQL> connect hemant/hemant
Connected.
SQL> create table compress_basic as select * from source_data where 1=2;

Table created.

SQL> alter table compress_basic compress;

Table altered.

SQL> insert /*+ APPEND */ into compress_basic
2 select * from source_data
3 where rownum < 100001;

100000 rows created.

SQL> commit;

Commit complete.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_BASIC'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED BASIC

SQL> alter table compress_basic drop (object_name);
alter table compress_basic drop (object_name)
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables


SQL>
SQL> !oerr ora 39726
39726, 00000, "unsupported add/drop column operation on compressed tables"
// *Cause: An unsupported add/drop column operation for compressed table
// was attemped.
// *Action: When adding a column, do not specify a default value.
// DROP column is only supported in the form of SET UNUSED column
// (meta-data drop column).

SQL>


So, I would have to set the column to UNUSED !

SQL> alter table compress_basic set unused column object_name;

Table altered.

SQL> alter table compress_basic drop (object_name);
alter table compress_basic drop (object_name)
*
ERROR at line 1:
ORA-00904: "OBJECT_NAME": invalid identifier


SQL> alter table compress_basic drop unused columns;
alter table compress_basic drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables


SQL>


I can't drop a column from a table with Compression enabled.

Is there another way ?

SQL> alter table compress_basic move nocompress;

Table altered.

SQL> alter table compress_basic drop unused columns;

Table altered.

SQL>


To actually execute the DROP, I have to Uncompress the table !

So, remember : be careful when designing a table that you intend to Compress.  You won't be able to DROP columns without uncompressing it first !


Repeating the test case with OLTP Compression :

SQL> create table compress_oltp as select * from source_data where 1=2;

Table created.

SQL> alter table compress_oltp compress for oltp;

Table altered.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_OLTP'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED ADVANCED

SQL>
SQL> insert into compress_oltp
2 select * from source_data
3 where rownum < 100001;

100000 rows created.

SQL> commit;

Commit complete.

SQL> alter table compress_oltp drop (object_name);

Table altered.

SQL>


Much easier, I could DROP the column.  But, wait.  Is there a catch ?

SQL> drop table compress_oltp purge;

Table dropped.

SQL> create table compress_oltp as select * from source_data where 1=2;

Table created.

SQL> alter table compress_oltp compress for oltp;

Table altered.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_OLTP'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED ADVANCED

SQL> alter table compress_oltp nocompress;

Table altered.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_OLTP'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
DISABLED

SQL> alter table compress_oltp drop (object_name);
alter table compress_oltp drop (object_name)
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables


SQL>


If I ALTER the table to NOCOMPRESS (which can take effect only on *new* rows, not existing rows), I cannot DROP a column.  This is because Oracle is unsure whether there is a mix of Compressed and Non-Compressed rows in the table now.

What I'd have to do is to rebuild it as a NOCOMPRESS table.

SQL> alter table compress_oltp move nocompress;

Table altered.

SQL> alter table compress_oltp drop (object_name);

Table altered.

SQL>


So, once a table is set to COMPRESS OLTP and then set to NOCOMPRESS, you can't simply DROP a column.
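
In other words, the workaround is the same as for BASIC compression : rebuild the table without compression, drop the column, and then, if desired, compress it again.  A condensed sketch (any indexes would need a REBUILD after the MOVEs) :

alter table compress_oltp move nocompress;          -- rebuild uncompressed
alter table compress_oltp drop (object_name);       -- the DROP now succeeds
alter table compress_oltp move compress for oltp;   -- optionally re-compress the rebuilt table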


This test-case came out of an issue a friend of mine faced today.  He found that he couldn't drop a column from a table that was formerly set to COMPRESS FOR OLTP.

I pointed him to Support Document 1288918.1

.
.
.

Categories: DBA Blogs

Restore and Recovery from Incremental Backups : Video

Mon, 2016-05-30 03:04
A Youtube Video on Restore and Recovery from Incremental Backups.


Categories: DBA Blogs

TRUNCATEing a Table makes an UNUSABLE Index VALID again

Sun, 2016-05-22 10:54
Here's something I learned from Jonathan Lewis some time ago.

If you set an Index to be UNUSABLE but later issue a TRUNCATE TABLE, the Index becomes VALID again --- i.e. the Index gets updated with rows subsequently inserted.

SQL> connect hemant/hemant
Connected.
SQL> drop table target_data purge;

Table dropped.

SQL> create table target_data as select * from source_data where 1=2;

Table created.

SQL> create index target_data_ndx_1
2 on target_data(owner, object_type, object_name);

Index created.

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> col segment_name format a30
SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 49
TARGET_DATA_NDX_1 INDEX 19

SQL>
SQL> col index_name format a30
SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 VALID

SQL>


So, I have a VALID Index on my Table.

I now make it UNUSABLE and add rows to it.

SQL> alter index target_Data_ndx_1 unusable;

Index altered.

SQL> select status
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1';

STATUS
--------
UNUSABLE

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 UNUSABLE

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 104

SQL>


Oracle actually drops the Index segment (so you don't see it in USER_SEGMENTS) when it is set to UNUSABLE, although the Index definition is still present.  The Index doesn't "grow" as the Segment doesn't exist.

Let me TRUNCATE the table.

SQL> truncate table target_data;

Table truncated.

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE .0625
TARGET_DATA_NDX_1 INDEX .0625

SQL> select index_name, status
2 from user_indexes
3 where table_name = 'TARGET_DATA';

INDEX_NAME STATUS
------------------------------ --------
TARGET_DATA_NDX_1 VALID

SQL>


Immediately after the TRUNCATE TABLE, the Index Segment is instantiated and the Index becomes VALID again.  So inserting rows will update the Index.  My last explicit command against the Index was ALTER INDEX ... UNUSABLE, but that is clearly no longer its current state !

SQL> insert /*+ APPEND */ into target_data
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name like 'TARGET_DATA%'
4 order by 1;

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
TARGET_DATA TABLE 49
TARGET_DATA_NDX_1 INDEX 19

SQL>


So, repopulating the Table has expanded the Index again.
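
So, if the intention was to keep the Index disabled across a bulk load, it has to be explicitly marked UNUSABLE again after the TRUNCATE.  A sketch :

truncate table target_data;
-- the TRUNCATE has silently made the index VALID again, so disable it once more
alter index target_data_ndx_1 unusable;
-- ... bulk load ...
insert /*+ APPEND */ into target_data select * from source_data;
commit;
-- rebuild the index only when the load is complete
alter index target_data_ndx_1 rebuild;
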
.
.
.


Categories: DBA Blogs

Partition Storage -- 8 : Manually Sizing Partitions

Wed, 2016-05-18 10:44
As demonstrated in the first post in this series, the default size for Table Partitions is to start with an Initial 8MB Extent.  The fourth post demonstrated how to resize a Partition that has already been created and populated.

Here is a method to pre-size the Initial Extent. I start with 3 partitions of different sizes.

SQL> create table my_part_tbl_init_sized (id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101) segment creation immediate
4 storage (initial 64K next 64K) tablespace hemant,
5 partition p_200 values less than (201) segment creation immediate
6 storage (initial 1M next 1M) tablespace hemant,
7 partition p_max values less than (maxvalue) segment creation immediate
8 storage (initial 8M next 1M) tablespace hemant)
9 /

Table created.

SQL>
SQL> select partition_name, initial_extent, next_extent, tablespace_name
2 from user_tab_partitions
3 where table_name = upper('my_part_tbl_init_sized')
4 order by partition_position
5 /

PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME
------------ -------------- ----------- ------------------------------
P_100 65536 65536 HEMANT
P_200 1048576 1048576 HEMANT
P_MAX 8388608 1048576 HEMANT

SQL>
SQL> select partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name = upper('my_part_tbl_init_sized')
4 and segment_type = 'TABLE PARTITION'
5 order by 1
6 /

PARTITION_NA BYTES/1024 EXTENTS
------------ ---------- ----------
P_100 64 1
P_200 1024 1
P_MAX 8192 1

SQL>


However, the caveat about Partition Extent Sizes when SPLIT is used as shown in the third post still applies.

SQL> alter table my_part_tbl_init_sized
2 split partition p_max
3 at (301)
4 into (partition p_300, partition p_max)
5 /

Table altered.

SQL>
SQL> select partition_name, initial_extent, next_extent, tablespace_name
2 from user_tab_partitions
3 where table_name = upper('my_part_tbl_init_sized')
4 order by partition_position
5 /

PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME
------------ -------------- ----------- ------------------------------
P_100 65536 65536 HEMANT
P_200 1048576 1048576 HEMANT
P_300 8388608 1048576 HEMANT
P_MAX 8388608 1048576 HEMANT

SQL>
SQL> select partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name = upper('my_part_tbl_init_sized')
4 and segment_type = 'TABLE PARTITION'
5 order by 1
6 /

PARTITION_NA BYTES/1024 EXTENTS
------------ ---------- ----------
P_100 64 1
P_200 1024 1
P_300 8192 1
P_MAX 8192 1

SQL>


So, SPLITting P_MAX to create P_300 created P_300 with an 8MB extent because the sizing is inherited from P_MAX.  How do I change the Partition Extent sizes on SPLIT ?

SQL> alter table my_part_tbl_init_sized
2 split partition p_max
3 at (501)
4 into (partition p_500 storage (initial 64K next 64K) tablespace hemant, partition p_max)
5 /

Table altered.

SQL>
SQL> alter table my_part_tbl_init_sized
2 split partition p_500
3 at (401)
4 into (partition p_400, partition p_500)
5 /

Table altered.

SQL>
SQL> select partition_name, initial_extent, next_extent, tablespace_name
2 from user_tab_partitions
3 where table_name = upper('my_part_tbl_init_sized')
4 order by partition_position
5 /

PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME
------------ -------------- ----------- ------------------------------
P_100 65536 65536 HEMANT
P_200 1048576 1048576 HEMANT
P_300 8388608 1048576 HEMANT
P_400 65536 65536 HEMANT
P_500 65536 65536 HEMANT
P_MAX 8388608 1048576 HEMANT

6 rows selected.

SQL> select partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name = upper('my_part_tbl_init_sized')
4 and segment_type = 'TABLE PARTITION'
5 order by 1
6 /

PARTITION_NA BYTES/1024 EXTENTS
------------ ---------- ----------
P_100 64 1
P_200 1024 1
P_300 8192 1
P_400 64 1
P_500 64 1
P_MAX 8192 1

6 rows selected.

SQL>


This time, when I split P_MAX I deliberately specified a size for P_500.  Then, when I split P_500 further, the new P_400 inherited the sizing from P_500.
.
.
.

Categories: DBA Blogs

Compression -- 7 : Updating after BASIC Compression

Mon, 2016-05-09 09:57
In the first blog post on compression, I had shown BASIC Compression and also the effect of executing an UPDATE on a table with BASIC compression.

To expand on the theme of UPDATEs of BASIC compression blocks ....

SQL> select count(*) from source_data;

COUNT(*)
----------
367156

SQL> create table target_comp row store compress basic as select * from source_data where 1=2;

Table created.

SQL> select pct_free from user_tables where table_name = 'TARGET_COMP';

PCT_FREE
----------
0

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> analyze table target_comp compute statistics;

Table analyzed.

SQL> select chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

CHAIN_CNT BLOCKS
---------- ----------
0 4452

SQL>
SQL> exec dbms_stats.gather_table_stats('','TARGET_COMP');

PL/SQL procedure successfully completed.

SQL> select num_rows, chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

NUM_ROWS CHAIN_CNT BLOCKS
---------- ---------- ----------
1101468 0 4452

SQL>


So we have a table with 1.1 million rows and no Row Chaining.

What happens if we update about 20% of the rows ?

SQL> begin
2 for rec in (select rowid from target_comp where rownum < 220001)
3 loop
4 update target_comp set owner=owner where rowid=rec.rowid;
5 commit;
6 end loop;
7 end;
8 /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL> analyze table target_comp compute statistics;

Table analyzed.

SQL> select chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

CHAIN_CNT BLOCKS
---------- ----------
202189 7882

SQL> exec dbms_stats.gather_table_stats('','TARGET_COMP');

PL/SQL procedure successfully completed.

SQL> select num_rows, chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

NUM_ROWS CHAIN_CNT BLOCKS
---------- ---------- ----------
1101468 202189 7882

SQL>


I have updated 220,000 rows without actually increasing the notional length of each row (I set OWNER=OWNER).  Yet the CHAIN_CNT is now 202K and the table's HighWaterMark has expanded from 4,452 blocks to 7,882 blocks.  A significant increase !
(Your mileage may vary in your own tests !)

It was Jonathan Lewis who suggested getting the Chain Count (or LIST CHAINED ROWS) to understand the impact of UPDATEs on a table with BASIC compression.
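
For reference, the LIST CHAINED ROWS variant looks like this (a sketch ; it assumes the CHAINED_ROWS table has first been created with the utlchain.sql script) :

-- $ORACLE_HOME/rdbms/admin/utlchain.sql creates the CHAINED_ROWS table
analyze table target_comp list chained rows into chained_rows;

select count(*)
from chained_rows
where table_name = 'TARGET_COMP';
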
.
.
.


Categories: DBA Blogs

Compression -- 6b : Advanced Index Compression (revisited)

Thu, 2016-05-05 09:09
Following up on my earlier post on 12.1.0.2 Advanced Index Compression, one of my readers asked what would be the difference if I reversed the order of columns in the chosen index.

My defined index was on (OWNER, OBJECT_TYPE, OBJECT_NAME) --- defined as being from the column with the fewest distinct values to the most.  This ordering is best compressible with Index Key Compression (also known as Prefix Compression).  If I reverse the order, Index Key Compression for the two leading columns wouldn't deliver the same level of compression.  The question is whether Advanced Index Compression can intelligently handle the reversal.

SQL> create index target_data_ndx_3_comp on
2 target_data(object_name, object_type, owner) compress 2;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_3_COMP'
4 /

LEAF_BLOCKS
-----------
3091

SQL>


Surprisingly, this index with Prefix 2 on (OBJECT_NAME, OBJECT_TYPE) is, at 3,091 leaf blocks, smaller than the previous  index with Prefix 2 on (OWNER, OBJECT_TYPE) at 5,508 leaf blocks.

Continuing with Prefix 3

SQL> drop index target_data_ndx_3_comp;

Index dropped.

SQL> create index target_data_ndx_3_comp on
2 target_data(object_name, object_type, owner) compress 3;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_3_COMP'
4 /

LEAF_BLOCKS
-----------
2277

SQL>


At 2,277 leaf blocks, it is, as expected, the same size as the earlier index with Prefix 3 on (OWNER, OBJECT_TYPE, OBJECT_NAME).  Since the entire index key is specified as the Prefix, both indexes would be the same size.

Going on to Advanced Index Compression

SQL> drop index target_data_ndx_3_comp;

Index dropped.

SQL> create index target_data_ndx_4_advcomp on
2 target_data(object_name, object_type, owner)
3 compress advanced low
4 /

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_4_ADVCOMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_4_ADVCOMP'
4 /

LEAF_BLOCKS
-----------
2277

SQL>


This is, again, as expected.  Advanced Index Compression results in the same size irrespective of the ordering of the columns.

The advantage of Advanced Index Compression over Key or Prefix Compression is that the DBA does not need to determine the Prefix for compression.  He does not have to spend time to analyze the data and compare the number of distinct values for each of the columns in the composite index.
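
That analysis is straightforward but tedious -- something like this (a sketch), repeated for every candidate composite index :

-- compare the number of distinct values of each column in the composite index
select count(distinct owner) owners,
       count(distinct object_type) object_types,
       count(distinct object_name) object_names
from target_data;
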
.
.
.

Categories: DBA Blogs

Compression -- 6 : Advanced Index Compression

Tue, 2016-05-03 09:23
Earlier, I had covered Index (Key) Compression which is included in the Enterprise Edition.

In Key Compression, the DBA must specify the Prefix (i.e. number of leading columns in a composite index) that must be used as the compression key.

12.1.0.2 Advanced Index Compression does not require the DBA to manually identify the prefix key length.  Advanced Index Compression auto(magically) identifies the optimal prefix columns.
(Note : Advanced Index Compression requires the Advanced Compression licence Option and is available only in 12.1.0.2 and higher)

SQL> create table target_data as select * from source_data where 1=2;

Table created.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL>


Creating a Key Compression Index by specifying the Prefix size (the leading 2 columns) :

SQL> create index target_data_ndx_1_comp on
2 target_data (owner, object_type, object_name) compress 2;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1_COMP'
4 /

LEAF_BLOCKS
-----------
5508

SQL>


Note how I specified "2" as the Prefix size, as I want to compress on repeated values of (OWNER, OBJECT_TYPE).

Using Advanced Index Compression by specifying "COMPRESS ADVANCED LOW" and letting Oracle decide on the compression strategy in each leaf block :

SQL> drop index target_data_ndx_1_comp;

Index dropped.

SQL> create index target_data_ndx_2_advcomp on
2 target_data (owner, object_type, object_name)
3 compress advanced low;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_2_ADVCOMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_2_ADVCOMP'
4 /

LEAF_BLOCKS
-----------
2277

SQL>


Wow, that's significantly smaller.  What's more, I did not have to spend time analyzing the data and the index definition to identify the "correct" Prefix size.

However, it is now possible to specify the entire composite key as the Prefix, although that is not what I would have done in earlier versions.  Identifying the Prefix size requires analyzing the data.

SQL> create index target_data_ndx_1_comp on
2 target_data (owner, object_type, object_name) compress 3
3 /

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1_COMP'
4 /

LEAF_BLOCKS
-----------
2277

SQL>


So, the Advanced option allows me to let Oracle automatically decide the appropriate mechanism to compress the Index keys.

UPDATE : Also see the subsequent test with a reversal of the columns in the composite index.

Just for comparison, here is a regular index :

SQL> drop index target_data_ndx_2_advcomp;

Index dropped.

SQL> create index target_data_ndx_3_nocomp on
2 target_data (owner, object_type, object_name)
3 /

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_NOCOMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_3_NOCOMP'
4 /

LEAF_BLOCKS
-----------
7289

SQL>


That is a much larger regular index !
.
.
.

Categories: DBA Blogs

FBDA -- 7 : Maintaining Partitioned Source Table

Mon, 2016-05-02 02:46
Taking up the TEST_FBDA_PARTITIONED table,  let's look at a couple of Partition Maintenance operations.

SQL> select partition_name, high_value, num_rows
2 from user_tab_partitions
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 order by partition_position
5 /

PARTITION_NAME HIGH_VALUE NUM_ROWS
---------------- ------------------------- ----------
P_100 101 100
P_200 201 100
P_300 301 100
P_400 401 100
P_MAX MAXVALUE 301

SQL>


Let's try a TRUNCATE PARTITION

SQL> alter table test_fbda_partitioned truncate partition p_100;

Table truncated.

SQL>


So, that's supported.

Let's try a SPLIT PARTITION

SQL> alter table test_fbda_partitioned       
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /
alter table test_fbda_partitioned
*
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table


SQL>


So, a SPLIT PARTITION fails.  We need to DISASSOCIATE the Flashback Archive.

SQL> execute dbms_flashback_archive.disassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA_PARTITIONED
FBDA
SYS_FBA_HIST_93342 DISASSOCIATED


SQL>
SQL> alter table test_fbda_partitioned
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /

Table altered.

SQL> execute dbms_flashback_archive.reassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL>
SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA_PARTITIONED
FBDA
SYS_FBA_HIST_93342 ENABLED


SQL>


While a Table is disassociated from its Flashback Archive, DDL that would not normally be permitted may be done, under strict control, to ensure that there is no data divergence.

.
.
.
Categories: DBA Blogs

Partition Storage -- 7 : Revisiting HWM - 2 (again)

Mon, 2016-05-02 02:19
Revisiting the previous test case, but with a larger AVG_ROW_LEN

SQL> create table part_table_large
(id_column number(6), data_col_1 varchar2(100), data_col_2 varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/
2 3 4 5 6 7 8 9

Table created.

SQL>
SQL> insert into part_table_large values
(51,rpad('String',75,'X'), rpad('Another',60,'Y'))
2 3
SQL> /

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_large
values (25, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_large
values (45, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_large
values (55, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/
2 3 4 5
AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
140 1100003 22349

SQL>
SQL>
SQL> alter table part_table_large move partition p_100 ;

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/
2 3 4 5
AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
140 1100003 22626

SQL>
SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_LARGE'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/
2 3 4 5 6 7 8
EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024
3 1024
4 1024
5 1024
6 1024
7 1024
8 1024
9 1024
10 1024
11 512
12 1024
13 1024
14 1024
15 1024
16 1024
17 1024
18 1024
19 1024
20 1024
21 1024
22 1024

23 rows selected.

SQL>


Aha ! Unlike the previous case (where, with an AVG_ROW_LEN of 11, a MOVE reduced the HWM from 3,022 to 2,484 blocks), with the larger row size the HWM has actually grown slightly, from 22,349 to 22,626 blocks, after the MOVE.

So, space consumption is a function of both the AVG_ROW_LEN and the manner in which the rows are inserted / relocated.

SQL> l
1 select avg_row_len*num_rows*1.2/8192 Expected_Blocks, Blocks
2 from user_tab_partitions
3 where table_name = 'PART_TABLE_LARGE'
4* and partition_name = 'P_100'
SQL> /

EXPECTED_BLOCKS BLOCKS
--------------- ----------
22558.6553 22626

SQL>

Also, note how the "Expected Blocks" estimate is much closer to the actual Blocks count than in the earlier (smaller row) test case.
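
The same back-of-the-envelope check can be written generically (a sketch, assuming an 8KB block size and the same rough 20% overhead factor; it is a sanity check, not an exact prediction) :

select partition_name,
       round(avg_row_len * num_rows * 1.2 / 8192) expected_blocks,
       blocks
  from user_tab_partitions
 where table_name = 'PART_TABLE_LARGE'
 order by partition_position
/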
.
.
.



Categories: DBA Blogs

Partition Storage -- 6 : Revisiting Partition HWM

Fri, 2016-04-29 10:42
After the curious finding in my previous blog post, where a Partition's HighWaterMark was noticeably higher than that for a non-Partitioned Table but then shrunk on a MOVE operation, I am retrying the same rows with a different pattern of INSERT statements.

However, I am still sticking to a single session doing the INSERTs (as I don't want ASSM spreading the incoming rows to different non-contiguous blocks).

This is in 12.1.0.2 :
SQL> connect hemant/hemant
Connected.
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7 partition p_max values less than (maxvalue))
8 /

Table created.

SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12
13 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100' 2 3 4
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/ 2 3 4 5

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/ 2 3 4 5 6 7 8

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, a Row-By-Row Insert still resulted in the HWM being 3,022 blocks and shrinking to 2,484 blocks after a MOVE.



Let's try the same data-set in 11.2.0.4
SQL> connect hemant/hemant
Connected.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL>
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/

2 3 4 5 6 7 8
Table created.

SQL> SQL> show parameter deferr

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean TRUE
SQL>
SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12
13 /

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/ 2 3 4 5

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/ 2 3 4 5

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/ 2 3 4 5 6 7 8

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, 11.2.0.4 and 12.1.0.2 display the same behaviour for the Partition HWM : a HWM of 3,022 blocks shrinking to 2,484 blocks after the MOVE.

The next test would be with a larger AVG_ROW_LEN.
.
.
.


Categories: DBA Blogs

Partition Storage -- 5 : Partitioned Table versus Non-Partitioned Table ? (in 12.1)

Mon, 2016-04-25 10:13
Reviewing my second blog post in this series, I found it strange that Partition P_100 (populated by Serial Inserts of 1 row, 100,000 rows, 500,000 rows and 500,000 rows) had such a high High Water Mark.

For 1.1million rows of an Average Row Length of 11, the High Water Mark was 3,022 blocks.

In the fourth blog post, a simple ALTER TABLE MOVE PARTITION had brought the High Water Mark to 2,482 blocks !

This needs further investigation.

Let's compare a single Partition of a Partitioned Table with a Non-Partitioned Table for exactly the same data and same pattern of INSERT statements.

Starting with a new Partitioned Table.

SQL> l
1 create table new_part_tbl (id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7* partition p_max values less than (maxvalue))
SQL> /

Table created.

SQL>
SQL> insert into new_part_tbl values (51,'Fifty One');

1 row created.

SQL>
SQL> insert into new_part_tbl
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001
5 /

100000 rows created.

SQL> insert into new_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001
5 /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'NEW_PART_TBL'
4 and partition_name = 'P_100'
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 3022

SQL>
SQL> REM Let's MOVE the Partition
SQL> alter table new_part_tbl move partition P_100;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'NEW_PART_TBL'
4 and partition_name = 'P_100'
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2484

SQL>
SQL> l
1 select extent_id, blocks
2 from dba_extents
3 where segment_name = 'NEW_PART_TBL'
4 and segment_type = 'TABLE PARTITION'
5 and partition_name = 'P_100'
6 and owner = 'HEMANT'
7* order by 1
SQL> /

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


As expected (see the first blog post), the Extents are still 8MB each.  But the High Water Mark has "magically" shrunk from 3,022 blocks to 2,484 blocks.

Let's create a Non-Partitioned Table with the same columns and rows.

SQL> create table non_part_tbl (id_column number(6), data_column varchar2(100));

Table created.

SQL> insert into non_part_tbl values (51,'Fifty One');

1 row created.

SQL> insert into non_part_tbl
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001
5 /

100000 rows created.

SQL> insert into non_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001
5 /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','NON_PART_TBL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
2 from user_tables
3 where table_name = 'NON_PART_TBL'
4 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2512

SQL>
SQL> REM Let's MOVE the Table
SQL> alter table non_part_tbl move;

Table altered.

SQL> select avg_row_len, num_rows, blocks
2 from user_tables
3 where table_name = 'NON_PART_TBL'
4 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2512

SQL>
SQL> l
1 select extent_id, blocks
2 from dba_extents
3 where segment_name = 'NON_PART_TBL'
4 and segment_type = 'TABLE'
5 and owner = 'HEMANT'
6* order by 1
SQL> /

EXTENT_ID BLOCKS
---------- ----------
0 8
1 8
2 8
3 8
4 8
5 8
6 8
7 8
8 8
9 8
10 8
11 8
12 8
13 8
14 8
15 8
16 128
17 128
18 128
19 128
20 128
21 128
22 128
23 128
24 128
25 128
26 128
27 128
28 128
29 128
30 128
31 128
32 128
33 128
34 128

35 rows selected.

SQL>


The Non-Partitioned Table had a High Water Mark of 2,512 blocks.  This did not change with a MOVE.  The allocation of Extents is also as expected with AutoAllocate (sixteen 64KB Extents followed by 1MB Extents).

Why, then, does the Partition behave differently ?  It started with a High Water Mark of 3,022 blocks, which shrunk to 2,484 blocks after a MOVE.

Is the Average Row Length or the actual data a factor ?  (Note : I am *not* using Table Compression).
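
One way to dig further (a sketch I have not run for this post, assuming an ASSM tablespace; run it with SERVEROUTPUT ON, and verify the exact SEGMENT_TYPE value to pass for a Partition against the DBMS_SPACE documentation for your release) is to ask DBMS_SPACE.SPACE_USAGE how full the blocks below the HWM actually are :

declare
  -- OUT variables for dbms_space.space_usage (ASSM segments)
  l_unf_blocks  number; l_unf_bytes  number;
  l_fs1_blocks  number; l_fs1_bytes  number;
  l_fs2_blocks  number; l_fs2_bytes  number;
  l_fs3_blocks  number; l_fs3_bytes  number;
  l_fs4_blocks  number; l_fs4_bytes  number;
  l_full_blocks number; l_full_bytes number;
begin
  dbms_space.space_usage(
    segment_owner      => 'HEMANT',
    segment_name       => 'NEW_PART_TBL',
    segment_type       => 'TABLE PARTITION',  -- assumption : confirm this value for your release
    unformatted_blocks => l_unf_blocks,  unformatted_bytes => l_unf_bytes,
    fs1_blocks         => l_fs1_blocks,  fs1_bytes         => l_fs1_bytes,
    fs2_blocks         => l_fs2_blocks,  fs2_bytes         => l_fs2_bytes,
    fs3_blocks         => l_fs3_blocks,  fs3_bytes         => l_fs3_bytes,
    fs4_blocks         => l_fs4_blocks,  fs4_bytes         => l_fs4_bytes,
    full_blocks        => l_full_blocks, full_bytes        => l_full_bytes,
    partition_name     => 'P_100');
  dbms_output.put_line('Full blocks         : '||l_full_blocks);
  dbms_output.put_line('0-25%  free blocks  : '||l_fs1_blocks);
  dbms_output.put_line('25-50% free blocks  : '||l_fs2_blocks);
  dbms_output.put_line('50-75% free blocks  : '||l_fs3_blocks);
  dbms_output.put_line('75-100% free blocks : '||l_fs4_blocks);
  dbms_output.put_line('Unformatted blocks  : '||l_unf_blocks);
end;
/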

To be explored further with a larger row size ...........

Possibly, to be explored with a different pattern of INSERT statements  ......

Possibly to be compared in 11.2 as well. ......
.
.
.

Categories: DBA Blogs

Partition Storage -- 4 : Resizing Partitions

Sun, 2016-04-24 10:38
Building on Posts 2 (Adding Rows) and 3 (Adding Partitions), where we saw Table Partitions using 8MB Extents ..... is there a way to "resize" Partitions to smaller Extents (and, maybe, less space consumed) without using Compression ?

Let's explore.

Beginning with Partitions P_100 and P_200 ....

SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 24576 3
MY_PART_TBL P_200 32768 4
MY_PART_TBL_NDX P_100 28672 43
MY_PART_TBL_NDX P_200 33792 48

SQL>
SQL> alter table my_part_tbl move partition p_100 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_100 storage (initial 64K next 64K)
2 /

Index altered.

SQL> alter table my_part_tbl move partition p_200 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_200 storage (initial 64K next 64K)
2 /

Index altered.

SQL>
SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34

SQL>
SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 3022 1100001
P_200 3668 1100001

SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 2482 1100001
P_200 2639 1100001

SQL>
SQL>
SQL> l
1 select partition_name, blocks, count(*)
2 from dba_extents
3 where owner = 'HEMANT'
4 and segment_name = 'MY_PART_TBL'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name in ('P_100','P_200')
7 group by partition_name, blocks
8* order by 1,2
SQL> /

PARTITION_NA BLOCKS COUNT(*)
------------ ---------- ----------
P_100 8 16
P_100 128 19
P_200 8 16
P_200 128 20

SQL>


Partition P_100 has shrunk from 3 extents of 8MB adding up to 24,576KB to 35 extents adding up to 20,480KB. The High Water Mark has shrunk from 3,022 blocks to 2,482 blocks (remember : P_100 was populated with a Serial Insert). Partition P_200, which had been populated with a Parallel (DoP=4) Insert, has also shrunk from 32,768KB to 21,504KB, and its High Water Mark from 3,668 blocks to 2,639 blocks.  The Extents are now a combination of 64KB (the first 16, adding up to 1MB) and 1MB sizes.
Even the Index Partitions seem to have shrunk.

So, a MOVE/REBUILD (the REBUILD of the Index Partitions was required because I did the Partition MOVE without UPDATE INDEXES) can be used to shrink the Partitions, with newer, smaller Extents allocated.
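
(As an aside, a sketch of an alternative that avoids the separate Index REBUILD : the MOVE itself can carry the UPDATE INDEXES clause. I did not use it above; if you do, confirm afterwards that the Index Partition has not been left UNUSABLE.)

alter table my_part_tbl
  move partition p_100
  storage (initial 64K next 64K)
  update indexes
/

select index_name, partition_name, status
  from user_ind_partitions
 where index_name = 'MY_PART_TBL_NDX'
   and partition_name = 'P_100'
/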

But what about the case of SPLIT PARTITION, where Partitions SPLIT from an 8MB Partition resulted in two 8MB Partitions, even when one of the resulting Partitions was empty ?

Here's a workaround.  Before SPLITting the P_MAX Partition, I resize it.

SQL> alter table my_part_tbl move partition p_max storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_max storage (initial 64K next 64K);

Index altered.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (1001)
4 into (partition p_1000, partition p_max)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_1000
3 at (901)
4 into (partition p_900, partition p_1000)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_900
3 at (801)
4 into (partition p_800, partition p_900)
5 /

Table altered.

SQL>
SQL> l
1 select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4* order by 1,2
SQL>
SQL> /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL P_300 8192 1
MY_PART_TBL P_400 8192 1
MY_PART_TBL P_600 8192 1
MY_PART_TBL P_680 8192 1
MY_PART_TBL P_700 8192 1
MY_PART_TBL P_800 64 1
MY_PART_TBL P_900 64 1
MY_PART_TBL P_1000 64 1
MY_PART_TBL P_MAX 64 1
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34
MY_PART_TBL_NDX P_300 64 1
MY_PART_TBL_NDX P_400 64 1
MY_PART_TBL_NDX P_600 64 1
MY_PART_TBL_NDX P_680 64 1
MY_PART_TBL_NDX P_700 64 1
MY_PART_TBL_NDX P_800 64 1
MY_PART_TBL_NDX P_900 64 1
MY_PART_TBL_NDX P_1000 64 1
MY_PART_TBL_NDX P_MAX 64 1

22 rows selected.

SQL>


(Note : I have manually relocated Partition P_1000 in the listing).
Partitions P_600, P_680 and P_700 had been created by SPLIT PARTITION commands in the previous post, beginning with a P_MAX Partition that already had a segment.  However, after rebuilding P_MAX with 64KB Extents, the subsequently SPLIT Partitions (P_800 to P_1000) are also 64KB.

Note : I am not advising that all Partitions have to be 64K.  (Observe how AutoAllocate did allocate 1MB Extents to P_100 and P_200 after the first 1MB of space usage, i.e. after the first 16 64KB Extents.)
.
.
.


Categories: DBA Blogs

Partition Storage -- 3 : Adding new Range Partitions with SPLIT

Sat, 2016-04-23 10:04
Building on the Partitioned Table in the previous two blog posts...

We know that the Table is a Range Partitioned Table.  With a MAXVALUE Partition, the only way to add new Partitions is to use the SPLIT PARTITION command.

First, let's review the Table, Partitions and Segments.

SQL> select table_name, num_rows
2 from user_tables
3 where table_name = 'MY_PART_TBL'
4 /

TABLE_NAME NUM_ROWS
---------------- ----------
MY_PART_TBL 2200004

SQL> select partition_name, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by 1
5 /

PARTITION_NA NUM_ROWS BLOCKS
------------ ---------- ----------
P_100 1100001 3022
P_200 1100001 3668
P_300 1 1006
P_400 1 1006
P_MAX 0 0

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the table has 5 partitions P_100 to P_MAX but only 4 have segments created after one or more rows have been populated.  P_MAX has no segment created for either the Table Partition or the Index Partition.

What happens if we SPLIT P_MAX (an empty, segmentless Partition) to create a new Partition ?

SQL> alter table my_part_tbl
2 split partition p_max
3 at (501)
4 into (partition p_500, partition p_max)
5 /

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, high_value, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE NUM_ROWS BLOCKS
------------ ---------------- ---------- ----------
P_100 101 1100001 3022
P_200 201 1100001 3668
P_300 301 1 1006
P_400 401 1 1006
P_500 501 0 0
P_MAX MAXVALUE 0 0

6 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the process of creating Partition P_500 did not create a segment for it, because P_MAX, which it was SPLIT from, was segmentless.  What happens if I split a Partition with 1 or more rows ?

SQL> insert into my_part_tbl
2 select 550, 'Five Hundred Fifty'
3 from dual
4 /

1 row created.

SQL> commit;
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

10 rows selected.

SQL>
SQL> alter table my_part_tbl
2 split partition p_max
3 at (601)
4 into (partition p_600, partition p_max)
5 /

Table altered.

SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

12 rows selected.

SQL>


So, the row for ID_COLUMN=550 created the segment for Partition P_MAX. Subsequently, SPLITting this Partition into P_600 and P_MAX resulted in two Partitions of 8MB each.
The row for ID_COLUMN=550 would now be in the P_600 Partition and the P_MAX Partition would now be the empty Partition.  Yet, even P_MAX now takes an 8MB extent, unlike earlier.

Let's try doing such a SPLIT that, say, P_700 is created empty but P_MAX inherits the row.

SQL> insert into my_part_tbl
2 select 900, 'Nine Hundred'
3 from dual
4 /

1 row created.

SQL> commit;

Commit complete.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (701)
4 into (partition p_700, partition p_max)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

14 rows selected.

SQL> select count(*) from my_part_tbl partition (P_700);

COUNT(*)
----------
0

SQL>


Again, both Partitions (P_700 and P_MAX) have a segment of 8MB.
This means that, once a Segment for a Partition is created, any SPLIT of that Partition results in two Segments inheriting the same 8MB Extent Size, irrespective of the fact that one of the two may be empty.

SQL> alter table my_part_tbl
2 split partition p_700
3 at (681)
4 into (partition p_680, partition p_700)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


That is confirmation that SPLITting a Partition that has a segment (even if it is empty) results in two segmented Partitions, even if both are empty.

Going back to Partition P_500 (which is present but segmentless), what happens if we split it ?

SQL> alter table my_part_tbl
2 split partition p_500
3 at (451)
4 into (partition p_450, partition p_500)
5 /

Table altered.

SQL>
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE
------------ ----------------
P_100 101
P_200 201
P_300 301
P_400 401
P_450 451
P_500 501
P_600 601
P_680 681
P_700 701
P_MAX MAXVALUE

10 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


Splitting the segmentless Partition P_500 into P_450 and P_500 did *not* result in new Segments.

This has implications for your SPLIT Partition strategy.  If you need to do a recursive split to create, say, 90 1-day Partitions and you start with a Partition that has a segment (even if empty), you get 90 new segments as well.  Thus, the table would suddenly "grow" by 720MB (90 x 8MB) without a single row having been inserted on the day you create these 90 Partitions.  You may get some questions from IT Operations / Support about the sudden "growth" in 1 day.
On the other hand, starting with a segmentless Partition, you get 90 new segmentless Partitions.  Their segments will be created only when they are populated.
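
A quick way to see which Partitions are still segmentless before deciding where to SPLIT from (a sketch, assuming 11.2 or later, where USER_TAB_PARTITIONS exposes the SEGMENT_CREATED column) :

select partition_name, segment_created, high_value
  from user_tab_partitions
 where table_name = 'MY_PART_TBL'
 order by partition_position
/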
.
.

.
Categories: DBA Blogs
