Hemant K Chitale

I am an Oracle Database Specialist in Singapore.

Partition Storage -- 6 : Revisiting Partition HWM

Fri, 2016-04-29 09:42
After the curious finding in my previous blog post (where a Partition's HighWaterMark was noticeably higher than that for a non-Partitioned Table but then shrank on a MOVE operation), I am retrying the same rows with a different pattern of INSERT statements.

However, I am still sticking to a single session doing the INSERTs (as I don't want ASSM spreading the incoming rows to different non-contiguous blocks).
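
As an aside : the Tablespace attributes that drive this behaviour can be confirmed from the dictionary. A small sketch (looking up the schema's default Tablespace, since the Tablespace name isn't shown in this post) :

select t.tablespace_name, t.extent_management, t.allocation_type,
       t.segment_space_management
from dba_tablespaces t
join dba_users u on u.default_tablespace = t.tablespace_name
where u.username = 'HEMANT'
/

An AutoAllocate, ASSM Tablespace reports ALLOCATION_TYPE as SYSTEM and SEGMENT_SPACE_MANAGEMENT as AUTO.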

This is in 12.1.0.2 :
SQL> connect hemant/hemant
Connected.
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7 partition p_max values less than (maxvalue))
8 /

Table created.

SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, a Row-By-Row Insert still resulted in the HWM being 3,022 blocks, shrinking to 2,484 blocks after a MOVE.
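
As a cross-check on the BLOCKS value reported by DBMS_STATS, the HWM can be read directly with DBMS_SPACE.UNUSED_SPACE. A minimal sketch against this test's Partition (owner and names taken from the session above; blocks below the HWM = allocated blocks minus the never-used blocks above it) :

set serveroutput on
declare
  l_total_blocks    number;
  l_total_bytes     number;
  l_unused_blocks   number;
  l_unused_bytes    number;
  l_lu_ext_file_id  number;
  l_lu_ext_block_id number;
  l_last_used_block number;
begin
  dbms_space.unused_space(
    segment_owner             => 'HEMANT',
    segment_name              => 'PART_TABLE_3',
    segment_type              => 'TABLE PARTITION',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_lu_ext_file_id,
    last_used_extent_block_id => l_lu_ext_block_id,
    last_used_block           => l_last_used_block,
    partition_name            => 'P_100');
  -- the HWM computation : allocated blocks less the blocks never used
  dbms_output.put_line('Allocated blocks : '||l_total_blocks);
  dbms_output.put_line('Blocks above HWM : '||l_unused_blocks);
  dbms_output.put_line('Blocks below HWM : '||(l_total_blocks - l_unused_blocks));
end;
/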



Let's try the same data-set in 11.2.0.4
SQL> connect hemant/hemant
Connected.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL>
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/

Table created.

SQL> show parameter deferr

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean TRUE
SQL>
SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, 11.2.0.4 and 12.1.0.2 display the same behaviour for the Partition HWM : a HWM of 3,022 blocks shrinking to 2,484 blocks after the MOVE.

The next test would be with a larger AVG_ROW_LEN.
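
For anyone who wants to try that ahead of the next post, here is a hedged sketch of such a test (the table name PART_TABLE_4 and the RPAD width of 50 are my own choices, not from this series) :

create table part_table_4 (id_column number(6), data_column varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
 partition p_max values less than (maxvalue))
/

-- pad each row to about 50 bytes so that AVG_ROW_LEN is several times 11
declare
  cntr number := 0;
begin
  while cntr < 100000
  loop
    insert into part_table_4 values (25, rpad('New Row',50,'X'));
    commit;
    cntr := cntr + 1;
  end loop;
end;
/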
.
.
.


Categories: DBA Blogs

Partition Storage -- 5 : Partitioned Table versus Non-Partitioned Table ? (in 12.1)

Mon, 2016-04-25 09:13
Reviewing my second blog post in this series, I found it strange that Partition P_100 (populated by Serial Inserts of 1 row, 100,000 rows, 500,000 rows and 500,000 rows) had such a high High Water Mark.

For 1.1million rows of an Average Row Length of 11, the High Water Mark was 3,022 blocks.

In the fourth blog post, a simple ALTER TABLE MOVE PARTITION had brought the High Water Mark to 2,482 blocks !

This needs further investigation.

Let's compare a single Partition of a Partitioned Table with a Non-Partitioned Table for exactly the same data and same pattern of INSERT statements.

Starting with a new Partitioned Table.

SQL> l
1 create table new_part_tbl (id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7* partition p_max values less than (maxvalue))
SQL> /

Table created.

SQL>
SQL> insert into new_part_tbl values (51,'Fifty One');

1 row created.

SQL>
SQL> insert into new_part_tbl
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001
5 /

100000 rows created.

SQL> insert into new_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001
5 /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'NEW_PART_TBL'
4 and partition_name = 'P_100'
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 3022

SQL>
SQL> REM Let's MOVE the Partition
SQL> alter table new_part_tbl move partition P_100;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'NEW_PART_TBL'
4 and partition_name = 'P_100'
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2484

SQL>
SQL> l
1 select extent_id, blocks
2 from dba_extents
3 where segment_name = 'NEW_PART_TBL'
4 and segment_type = 'TABLE PARTITION'
5 and partition_name = 'P_100'
6 and owner = 'HEMANT'
7* order by 1
SQL> /

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


As expected (see the first blog post), the Extents are still 8MB each.  But the High Water Mark has "magically" shrunk from 3,022 blocks to 2,484 blocks.

Let's create a Non-Partitioned Table with the same columns and rows.

SQL> create table non_part_tbl (id_column number(6), data_column varchar2(100));

Table created.

SQL> insert into non_part_tbl values (51,'Fifty One');

1 row created.

SQL> insert into non_part_tbl
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001
5 /

100000 rows created.

SQL> insert into non_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001
5 /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','NON_PART_TBL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
2 from user_tables
3 where table_name = 'NON_PART_TBL'
4 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2512

SQL>
SQL> REM Let's MOVE the Table
SQL> alter table non_part_tbl move;

Table altered.

SQL> select avg_row_len, num_rows, blocks
2 from user_tables
3 where table_name = 'NON_PART_TBL'
4 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2512

SQL>
SQL> l
1 select extent_id, blocks
2 from dba_extents
3 where segment_name = 'NON_PART_TBL'
4 and segment_type = 'TABLE'
5 and owner = 'HEMANT'
6* order by 1
SQL> /

EXTENT_ID BLOCKS
---------- ----------
0 8
1 8
2 8
3 8
4 8
5 8
6 8
7 8
8 8
9 8
10 8
11 8
12 8
13 8
14 8
15 8
16 128
17 128
18 128
19 128
20 128
21 128
22 128
23 128
24 128
25 128
26 128
27 128
28 128
29 128
30 128
31 128
32 128
33 128
34 128

35 rows selected.

SQL>


The Non-Partitioned Table had a High Water Mark of 2,512 blocks.  This did not change with a MOVE.  The allocation of Extents is also as expected with AutoAllocate.

Why, then, does the Partition behave differently ?  It started with a High Water Mark of 3,022 blocks, which shrank to 2,484 blocks after a MOVE.

Is the Average Row Length or the actual data a factor ?  (Note : I am *not* using Table Compression).
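
For completeness, the no-compression claim can be verified from the dictionary (COMPRESSION should report DISABLED for every Partition) :

select partition_name, compression, compress_for
from user_tab_partitions
where table_name = 'NEW_PART_TBL'
order by partition_position
/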

To be explored further with a larger row size ...........

Possibly, to be explored with a different pattern of INSERT statements  ......

Possibly to be compared in 11.2 as well. ......
.
.
.

Categories: DBA Blogs

Partition Storage -- 4 : Resizing Partitions

Sun, 2016-04-24 09:38
Building on Posts 2 (Adding Rows) and 3 (Adding Partitions), where we saw Table Partitions using 8MB Extents ..... is there a way to "resize" Partitions to smaller Extents (and, maybe, less space consumed) without using Compression ?

Let's explore.

Beginning with Partitions P_100 and P_200 ....

SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 24576 3
MY_PART_TBL P_200 32768 4
MY_PART_TBL_NDX P_100 28672 43
MY_PART_TBL_NDX P_200 33792 48

SQL>
SQL> alter table my_part_tbl move partition p_100 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_100 storage (initial 64K next 64K)
2 /

Index altered.

SQL> alter table my_part_tbl move partition p_200 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_200 storage (initial 64K next 64K)
2 /

Index altered.

SQL>
SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34

SQL>
SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 3022 1100001
P_200 3668 1100001

SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 2482 1100001
P_200 2639 1100001

SQL>
SQL>
SQL> l
1 select partition_name, blocks, count(*)
2 from dba_extents
3 where owner = 'HEMANT'
4 and segment_name = 'MY_PART_TBL'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name in ('P_100','P_200')
7 group by partition_name, blocks
8* order by 1,2
SQL> /

PARTITION_NA BLOCKS COUNT(*)
------------ ---------- ----------
P_100 8 16
P_100 128 19
P_200 8 16
P_200 128 20

SQL>


Partition P_100 has shrunk from 3 Extents of 8MB adding up to 24,576KB to 35 Extents adding up to 20,480KB, and the High Water Mark has shrunk from 3,022 blocks to 2,482 blocks (remember : P_100 was populated with a Serial Insert).  Partition P_200, which had been populated with a Parallel (DoP=4) Insert, has also shrunk from 32,768KB to 21,504KB, its High Water Mark going from 3,668 blocks to 2,639 blocks.  The Extents are a combination of 64KB (the first 16, adding up to 1MB) and 1MB sizes.
Even the Index Partitions seem to have shrunk.

So, a MOVE/REBUILD (the REBUILD of the Index Partitions being required because I did the Partition MOVE without UPDATE INDEXES) can be used to shrink the Partitions, with newer, smaller Extents allocated.
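
If you'd rather not run a separate REBUILD, the MOVE can maintain the local Index Partition in the same statement. A sketch against the same objects (verify the exact syntax on your version) :

alter table my_part_tbl move partition p_100
  storage (initial 64K next 64K)
  update indexes
/

-- confirm that no local Index Partition was left UNUSABLE
select partition_name, status
from user_ind_partitions
where index_name = 'MY_PART_TBL_NDX'
order by partition_position
/

With UPDATE INDEXES, the Index Partition remains usable throughout, at the cost of a longer-running MOVE.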

But what about the case of SPLIT Partition, where Partitions SPLIT from an 8MB Partition resulted in two 8MB Partitions, even for empty Partitions ?

Here's a workaround.  Before SPLITting the P_MAX Partition, I resize it.

SQL> alter table my_part_tbl move partition p_max storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_max storage (initial 64K next 64K);

Index altered.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (1001)
4 into (partition p_1000, partition p_max)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_1000
3 at (901)
4 into (partition p_900, partition p_1000)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_900
3 at (801)
4 into (partition p_800, partition p_900)
5 /

Table altered.

SQL>
SQL> l
1 select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4* order by 1,2
SQL>
SQL> /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL P_300 8192 1
MY_PART_TBL P_400 8192 1
MY_PART_TBL P_600 8192 1
MY_PART_TBL P_680 8192 1
MY_PART_TBL P_700 8192 1
MY_PART_TBL P_800 64 1
MY_PART_TBL P_900 64 1
MY_PART_TBL P_1000 64 1
MY_PART_TBL P_MAX 64 1
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34
MY_PART_TBL_NDX P_300 64 1
MY_PART_TBL_NDX P_400 64 1
MY_PART_TBL_NDX P_600 64 1
MY_PART_TBL_NDX P_680 64 1
MY_PART_TBL_NDX P_700 64 1
MY_PART_TBL_NDX P_800 64 1
MY_PART_TBL_NDX P_900 64 1
MY_PART_TBL_NDX P_1000 64 1
MY_PART_TBL_NDX P_MAX 64 1

22 rows selected.

SQL>


(Note : I have manually relocated Partition P_1000 in the listing).
Partitions P_600, P_680 and P_700 had been created by SPLIT PARTITION commands in the previous post, beginning with the segment-created P_MAX Partition.  However, after rebuilding P_MAX with 64KB Extents, the subsequently SPLITted Partitions (P_800 to P_1000) are also 64KB.

Note : I am not advising that all Partitions have to be 64K.  (Observe how AutoAllocate allocated 1MB Extents to P_100 and P_200 after the first 1MB of space usage, i.e. the first 16 64KB Extents.)
.
.
.


Categories: DBA Blogs

Partition Storage -- 3 : Adding new Range Partitions with SPLIT

Sat, 2016-04-23 09:04
Building on the Partitioned Table in the previous two blog posts...

We know that the Table is a Range Partitioned Table.  With a MAXVALUE Partition, the only way to add new Partitions is to use the SPLIT PARTITION command.

First, let's review the Table, Partitions and Segments.

SQL> select table_name, num_rows
2 from user_tables
3 where table_name = 'MY_PART_TBL'
4 /

TABLE_NAME NUM_ROWS
---------------- ----------
MY_PART_TBL 2200004

SQL> select partition_name, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by 1
5 /

PARTITION_NA NUM_ROWS BLOCKS
------------ ---------- ----------
P_100 1100001 3022
P_200 1100001 3668
P_300 1 1006
P_400 1 1006
P_MAX 0 0

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the table has 5 Partitions, P_100 to P_MAX, but only 4 have segments, created after one or more rows were populated into them.  P_MAX has no segment created for either the Table Partition or the Index Partition.

What happens if we SPLIT P_MAX (an empty, segmentless Partition) to create a new Partition ?

SQL> alter table my_part_tbl
2 split partition p_max
3 at (501)
4 into (partition p_500, partition p_max)
5 /

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, high_value, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE NUM_ROWS BLOCKS
------------ ---------------- ---------- ----------
P_100 101 1100001 3022
P_200 201 1100001 3668
P_300 301 1 1006
P_400 401 1 1006
P_500 501 0 0
P_MAX MAXVALUE 0 0

6 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the process of creating Partition P_500 did not create a segment for it, because P_MAX, which it was SPLIT from, was segmentless.  What happens if I split a Partition with 1 or more rows ?

SQL> insert into my_part_tbl
2 select 550, 'Five Hundred Fifty'
3 from dual
4 /

1 row created.

SQL> commit;
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

10 rows selected.

SQL>
SQL> alter table my_part_tbl
2 split partition p_max
3 at (601)
4 into (partition p_600, partition p_max)
5 /

Table altered.

SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

12 rows selected.

SQL>


So, the row for ID_COLUMN=550 created the segment for Partition P_MAX. Subsequently, SPLITting this Partition into P_600 and P_MAX resulted in two Partitions of 8MB each.
The row for ID_COLUMN=550 is now in the P_600 Partition and P_MAX is once again the empty Partition.  Yet even P_MAX now takes an 8MB Extent, unlike earlier.

Let's try doing such a SPLIT that, say, P_700 is created empty but P_MAX inherits the row.

SQL> insert into my_part_tbl
2 select 900, 'Nine Hundred'
3 from dual
4 /

1 row created.

SQL> commit;

Commit complete.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (701)
4 into (partition p_700, partition p_max)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

14 rows selected.

SQL> select count(*) from my_part_tbl partition (P_700);

COUNT(*)
----------
0

SQL>


Again, both Partitions (P_700 and P_MAX) have a segment of 8MB.
This means that, once a Segment for a Partition has been created, any SPLIT of that Partition results in two Segments inheriting the same 8MB Extent Size, irrespective of the fact that one of the two may be empty.

SQL> alter table my_part_tbl
2 split partition p_700
3 at (681)
4 into (partition p_680, partition p_700)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


That is confirmation that SPLITting a Partition that has a segment (even if it is empty) results in two segmented Partitions, even if both are empty.

Going back to Partition P_500 (which is present but segmentless), what happens if we split it ?

SQL> alter table my_part_tbl
2 split partition p_500
3 at (451)
4 into (partition p_450, partition p_500)
5 /

Table altered.

SQL>
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE
------------ ----------------
P_100 101
P_200 201
P_300 301
P_400 401
P_450 451
P_500 501
P_600 601
P_680 681
P_700 701
P_MAX MAXVALUE

10 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


Splitting segmentless Partition P_500 into P_450 and P_500 did *not* result in new Segments.

This has implications for your SPLIT Partition strategy.  If you need to do a recursive split to create, say, 90 1-day Partitions and you start with a Partition that has a segment (even if empty), you get 90 new segments as well.  Thus, the table would suddenly "grow" by 720MB on the day you create these 90 Partitions, without a single row having been inserted.  You may get some questions from IT Operations / Support about the sudden "growth" in 1 day.
On the other hand, starting with a segmentless Partition, you get 90 new segmentless Partitions.  Their segments will be created only when they are populated.
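
As an illustration of that strategy, here is a hypothetical sketch that first applies the resize workaround to P_MAX and then performs the recursive SPLITs dynamically (the bound values and Partition names are invented for this example) :

-- resize P_MAX first, as shown earlier in this post
alter table my_part_tbl move partition p_max storage (initial 64K next 64K)
/

-- then carve out 90 new Partitions, each SPLIT off the (small) P_MAX
declare
  l_bound number := 1100;  -- hypothetical first new upper bound
begin
  for i in 1 .. 90
  loop
    execute immediate
      'alter table my_part_tbl split partition p_max at ('||l_bound||')'
      ||' into (partition p_'||l_bound||', partition p_max)';
    l_bound := l_bound + 100;
  end loop;
end;
/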
.
.

.
Categories: DBA Blogs

Partition Storage -- 2 : New Rows Inserted in 12.1 Partitioned Table

Wed, 2016-04-20 09:41
Following up from the previous post,  given the current sizes of the Partitions (The underlying Tablespace is AUTOALLOCATE)  :

SQL> col segment_name format a30
SQL> col partition_name format a12
SQL> l
1 select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4* order by 1,2
SQL> /

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 64

8 rows selected.

SQL>


I shall now insert rows so that a Partition has to grow beyond the first extent.

SQL> insert into my_part_tbl                      
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001;

100000 rows created.

SQL> insert into my_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001;

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 24576
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 28672
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 64

8 rows selected.

SQL>


So, now, the first Partition P_100 has grown to 24MB. Let's check the Extent Sizes.

SQL> select extent_id, blocks, blocks*8192/1024 Size_KB
2 from user_extents
3 where segment_name = 'MY_PART_TBL'
4 and segment_type = 'TABLE PARTITION'
5 and partition_name = 'P_100'
6 order by 1;

EXTENT_ID BLOCKS SIZE_KB
---------- ---------- ----------
0 1024 8192
1 1024 8192
2 1024 8192

SQL>
SQL> l
1 select extent_id, blocks, blocks*8192/1024 Size_KB
2 from user_extents
3 where segment_name = 'MY_PART_TBL_NDX'
4 and segment_type = 'INDEX PARTITION'
5 and partition_name = 'P_100'
6* order by 1
SQL> /

EXTENT_ID BLOCKS SIZE_KB
---------- ---------- ----------
0 8 64
1 8 64
2 8 64
3 8 64
4 8 64
5 8 64
6 8 64
7 8 64
8 8 64
9 8 64
10 8 64
11 8 64
12 8 64
13 8 64
14 8 64
15 8 64
16 128 1024
17 128 1024
18 128 1024
19 128 1024
20 128 1024
21 128 1024
22 128 1024
23 128 1024
24 128 1024
25 128 1024
26 128 1024
27 128 1024
28 128 1024
29 128 1024
30 128 1024
31 128 1024
32 128 1024
33 128 1024
34 128 1024
35 128 1024
36 128 1024
37 128 1024
38 128 1024
39 128 1024
40 128 1024
41 128 1024
42 128 1024

43 rows selected.

SQL>


So, the Table Partition grew as 8MB Extents but the Index Partition used the normal AutoAllocate routine of 64KB Extents for the first 1MB (16 Extents) followed by 1MB Extents.

What might have been the expected size of the Table Partition ?   ** see the UPDATE at the end of this post.

SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, trunc(1.2*avg_row_len*num_rows/1024) Expected_KB
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by 1
5 /

PARTITION_NA EXPECTED_KB
------------ -----------
P_100 14179
P_200 0
P_300 0
P_400 0
P_MAX 0

SQL>


I use a 20% factor to account for PCTFREE and block/row overheads.  ** see the UPDATE at the end of this post.

Next, what if I use Parallel Insert ?

SQL> alter table my_part_tbl parallel 4;

Table altered.

SQL> alter session enable parallel dml ;

Session altered.

SQL> insert /*+ PARALLEL (MANUAL) */
2 into my_part_tbl
3 select 125, 'New Row'
4 from dual
5 connect by level < 100001;

100000 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ PARALLEL (MANUAL) */
2 into my_part_tbl
3 select 125, 'New Row'
4 from dual
5 connect by level < 500001;

500000 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ PARALLEL (MANUAL) */
2 into my_part_tbl
3 select 125, 'New Row'
4 from dual
5 connect by level < 500001;

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> l
1 select segment_name, partition_name, segment_type, extents, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4* order by 1,2
SQL> /

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE EXTENTS BYTES/1024
------------------------- ------------ ------------------ ---------- ----------
MY_PART_TBL P_100 TABLE PARTITION 3 24576
MY_PART_TBL P_200 TABLE PARTITION 4 32768
MY_PART_TBL P_300 TABLE PARTITION 1 8192
MY_PART_TBL P_400 TABLE PARTITION 1 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 43 28672
MY_PART_TBL_NDX P_200 INDEX PARTITION 48 33792
MY_PART_TBL_NDX P_300 INDEX PARTITION 1 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 1 64

8 rows selected.

SQL>


Note the size of P_200.  Parallel Insert actually allocated more Extents, because each PX server process created at least 1 Extent for the rows it inserted.  (Note : for a smaller row-count, it is not necessary that every PX server would have inserted rows; some may not have.)  So, Parallel Insert can result in more allocated space.
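
A quick way to see this on P_200 (which USER_SEGMENTS above reports as 4 Extents totalling 32,768KB) :

select extent_id, blocks, blocks*8192/1024 size_kb
from user_extents
where segment_name = 'MY_PART_TBL'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_200'
order by 1
/

With DoP=4, this should list four 1,024-block (8MB) Extents, one per PX server.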

** UPDATE :  What about the "expected size" computation based on AVG_ROW_LEN x NUM_ROWS ?

SQL> select partition_name, avg_row_len, num_rows, blocks, trunc(1.2*avg_row_len*num_rows/8192) Expected_Blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by 1;

PARTITION_NA AVG_ROW_LEN NUM_ROWS BLOCKS EXPECTED_BLOCKS
------------ ----------- ---------- ---------- ---------------
P_100 11 1100001 3022 1772
P_200 12 1100001 3668 1933
P_300 26 1 1006 0
P_400 28 1 1006 0
P_MAX 0 0 0 0

SQL>


This shows how any "formula" based on AVG_ROW_LEN and NUM_ROWS can be inaccurate in forecasting the expected size (blocks used).  Also note that P_100 was populated with a Serial Insert while P_200 was populated with a Parallel (Degree=4) Insert.
.
.
.



Categories: DBA Blogs

Partition Storage -- 1 : Default Partition Sizes in 12c

Tue, 2016-04-19 09:17
11gR2 (11.2.0.2) introduced a change whereby the default Initial Extent of a Table Partition became 8MB.  However, this did not apply to Index Partitions, which could still start with 64KB Extents in an AutoAllocate Tablespace.

12cR1 now provides a parameter to enable large Initial Extents for Index Partitions as well.

SQL> connect / as sysdba                          
Connected.
SQL> select banner from v$version where banner like 'Oracle Database%';

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> show parameter deferred

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean TRUE
SQL>
SQL> connect hemant/hemant
Connected.
SQL> create table my_part_tbl (id_column number(6), data_column varchar2(100))
2 partition by range(id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7 partition p_max values less than (maxvalue))
8 /

Table created.

SQL> create index my_part_tbl_ndx on my_part_tbl(id_column) local;

Index created.

SQL>
SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

no rows selected

SQL> 
SQL> insert into my_part_tbl values (51,'Fifty One');

1 row created.

SQL> insert into my_part_tbl values (151,'One Hundred Fifty One');

1 row created.

SQL> insert into my_part_tbl values (251, 'Two Hundred Fifty One');

1 row created.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64

6 rows selected.

SQL>


I can enable large Index Partition Extents with a parameter (which can be set with ALTER SESSION) :


SQL> alter session set "_index_partition_large_extents"=TRUE;

Session altered.

SQL> insert into my_part_tbl values (351,'Three Hundred Fifty One');

1 row created.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 8192

8 rows selected.

SQL>


However, I can also rebuild the Index Partition back down to a 64KB Extent :

SQL> alter index my_part_tbl_ndx rebuild partition p_400 storage (initial 64K);

Index altered.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 64

8 rows selected.

SQL>


In the next post, we'll see more Extents for the Partitions.
.
.
.
Categories: DBA Blogs

Online Relocation of Database File : ASM to FileSystem and FileSystem to ASM

Sun, 2016-04-17 09:44
There have been few published examples of the online datafile relocation feature in 12c.  The examples I've seen are on filesystem.

Here I show online relocation to/from ASM and FileSystem.

SQL> connect system/oracle
Connected.
SQL> create tablespace test_relocate;

Tablespace created.

SQL> create table test_relocate_tbl
2 tablespace test_relocate
3 as select * from dba_objects;

Table created.

SQL> select tablespace_name, bytes/1024
2 from user_segments
3 where segment_name = 'TEST_RELOCATE_TBL';

TABLESPACE_NAME BYTES/1024
------------------------------ ----------
TEST_RELOCATE 13312

SQL> select file_name, bytes/1024
2 from dba_data_files
3 where tablespace_name = 'TEST_RELOCATE';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1024
----------
+DATA/NONCDB/DATAFILE/test_relocate.260.909444793
102400


SQL>
SQL> alter database move datafile
2 '+DATA/NONCDB/DATAFILE/test_relocate.260.909444793'
3 to '/oradata/NONCDB/test_relocate_01.dbf';

Database altered.

SQL> !ls -l /oradata/NONCDB
total 102408
-rw-r----- 1 oracle asmdba 104865792 Apr 17 23:39 test_relocate_01.dbf

SQL> 
SQL> alter database move datafile  
2 '/oradata/NONCDB/test_relocate_01.dbf'
3 to '+DATA';

Database altered.

SQL> select file_name, bytes/1024
2 from dba_data_files
3 where tablespace_name = 'TEST_RELOCATE';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1024
----------
+DATA/NONCDB/DATAFILE/test_relocate.260.909445261
102400


SQL>
SQL> !ls -l /oradata/NONCDB
total 0

SQL>


Note that I was courageous enough to not use the KEEP keyword (which is optional !).
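
For reference, the same command with KEEP retains the source copy instead of deleting it (file names taken from the listing above) :

alter database move datafile
  '+DATA/NONCDB/DATAFILE/test_relocate.260.909445261'
  to '/oradata/NONCDB/test_relocate_01.dbf'
  keep;

After a KEEP move, the database uses only the new file; the old copy remains on disk but is no longer part of the database.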
.
.
.

Categories: DBA Blogs

FBDA -- 6 : Some Bug Notes

Sun, 2016-04-10 09:27
Some MoS documents on FBDA Bugs

1.  Bug 16454223  :  Wrong Results  (more rows than expected)

2.  Bug 16898135  :  FBDA does not split partitions  (resulting in rows not being purged)

3.  Bug 18294320  :   ORA-01555 (ORA-2475) on SMON_SCN_TIME

4.  Bug 22456983  :   Limit on SMON_SCN_TIME affecting FBDA

5.  Document 2039070.1 :  Known Issues with Flashback Data Archive
.
.
.




Categories: DBA Blogs

Recent Blog Series on Compression

Sun, 2016-04-10 09:16
These have been my recent posts on Compression.

1.  1.  BASIC Table Compression  (Feb-16)

2.  1b.  More on BASIC Table Compression  (Feb-16)

3.  2.  Compressed Table Partitions  (Mar-16)

4.  3.  Index (Key) Compression  (Mar-16)

5.  4.  RMAN (BASIC) Compression  (Mar-16)

6.  5.  OLTP Compression  (Mar-16)
.
.

.
Categories: DBA Blogs

FBDA -- 5 : Testing AutoPurging

Sun, 2016-04-10 09:06
Tracking data changes after one row was added (ID_COLUMN=2000) on 06-Apr :

SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
06-APR-16 10.53.20.328132 PM +08:00

SQL> select scn_to_timestamp(startscn), scn_to_timestamp(endscn), count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4 order by 1,2;

SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
COUNT(*)
----------
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM
450

02-APR-16 11.32.55.000000000 PM
03-APR-16 11.45.24.000000000 PM
550

02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM
5

02-APR-16 11.46.11.000000000 PM
03-APR-16 11.45.24.000000000 PM
445

03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM
5

03-APR-16 11.45.24.000000000 PM
04-APR-16 11.05.33.000000000 PM
1000

06-APR-16 10.40.43.000000000 PM
06-APR-16 10.42.54.000000000 PM
1


7 rows selected.

SQL>
SQL> select count(*) from sys_fba_tcrv_93250;

COUNT(*)
----------
1002

SQL>


More changes on 07-Apr


SQL> insert into test_fbda
2 select 3000, to_char(3000), trunc(sysdate)
3 from dual;

1 row created.

SQL> commit;

Commit complete.

SQL> update test_fbda
2 set date_inserted=date_inserted
3 where id_column=3000;

1 row updated.

SQL> delete test_fbda
2 where id_column < 1001 ;

1000 rows deleted.

SQL> commit;

Commit complete.

SQL>
SQL> l
1 select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4* order by 1,2,3
SQL> /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000

10 rows selected.

SQL>
SQL> l
1 select id_column, trunc(date_inserted), count(*)
2 from test_fbda
3 group by id_column, trunc(date_inserted)
4* order by 1
SQL> /

ID_COLUMN TRUNC(DAT COUNT(*)
---------- --------- ----------
2000 06-APR-16 1
3000 07-APR-16 1

SQL>


I see two new 1,000-row sets (04-Apr and 07-Apr).  I should expect only one.

Now that rows for ID_COLUMN less than 1001 have been deleted on 07-Apr, we have to see if and when they get purged from the History table.


On 09-Apr:

SQL> insert into test_fbda
2 select 4000, to_char(4000),trunc(sysdate)
3 from dual;

1 row created.

SQL> commit;

Commit complete.

SQL> update test_fbda
2 set date_inserted=date_inserted
3 where id_column=4000;

1 row updated.

SQL> commit;

Commit complete.

SQL> l
1 select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4* order by 1,2,3
SQL> /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL>
SQL> select * from user_flashback_archive
2 /

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
1 3
02-APR-16 11.24.39.000000000 PM
02-APR-16 11.24.39.000000000 PM



SQL>


As on the morning of 10-Apr (after leaving the database instance running overnight) :

SQL> select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4 order by 1,2,3
5 /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL> select systimestamp from dual
2 /

SYSTIMESTAMP
---------------------------------------------------------------------------
10-APR-16 08.51.29.398107 AM +08:00

SQL>
SQL> select * from user_flashback_archive
2 /

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
1 3
02-APR-16 11.24.39.000000000 PM
02-APR-16 11.24.39.000000000 PM



SQL>


So auto-purge of the data as of earlier days (02-Apr to 06-Apr) hasn't yet kicked in ?  Let's try a manual purge.

SQL> alter flashback archive fbda purge before timestamp (sysdate-4);

Flashback archive altered.

SQL> select * from user_flashback_archive;

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
1 3

05-APR-16 11.52.16.000000000 PM



SQL>
SQL> ! sleep 300
SQL> l
1 select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4* order by 1,2,3
SQL> /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL>


Although USER_FLASHBACK_ARCHIVE shows that a purge till 05-Apr has been done (the 11:52pm timestamp is strange), I still see older rows in the History table.  The query on the active table does, however, correctly exclude the rows that should no longer be available.


SQL> select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4 order by 1,2,3;

STARTTIME ENDTIME COUNT(*)
---------------------------------- ---------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL> select * from user_flashback_archive;

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
------------------------------------------------------------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS CREATE_TIME
------------------ ----------------- ---------------------------------------------------------------------------
LAST_PURGE_TIME STATUS
--------------------------------------------------------------------------- -------
SYSTEM
FBDA
1 3
05-APR-16 11.52.16.000000000 PM


SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
10-APR-16 10.52.12.361412 PM +08:00

SQL> select count(*) from test_fbda as of timestamp (sysdate-3);

COUNT(*)
----------
2

SQL>
SQL> select partition_position, high_value
2 from user_tab_partitions
3 where table_name = 'SYS_FBA_HIST_93250'
4 order by 1;

PARTITION_POSITION HIGH_VALUE
------------------ --------------------------------------------------------------------------------
1 MAXVALUE

SQL>



Support Document 16898135.1 states that if Oracle isn't maintaining partitions for the History table, it may not be purging data properly.  Even an ALTER FLASHBACK ARCHIVE ... PURGE doesn't purge data (unless PURGE ALL is issued).  I'd seen this behaviour in 11.2.0.4.  The bug is supposed to have been fixed in 12.1.0.2, but my 12.1.0.2 environment shows the same behaviour.  The fact that my test database has very little activity (very few SCNs being incremented) shouldn't matter.  The "Workaround" is, of course, unacceptable.
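
For reference, the variant alluded to above, which does purge but indiscriminately removes *all* history for every table in the archive (hence an unacceptable "workaround" in most environments) :

alter flashback archive fbda purge all;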
.
.
.
Categories: DBA Blogs

FBDA -- 4 : Partitions and Indexes

Tue, 2016-04-05 09:47
Continuing our investigation of the FBDA architecture.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant

SQL*Plus: Release 12.1.0.2.0 Production on Tue Apr 5 23:25:10 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Tue Apr 05 2016 23:23:47 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select table_name, def_tablespace_name, partitioning_type, partition_count, status
2 from user_part_tables
3 order by 1;

TABLE_NAME
--------------------------------------------------------------------------------
DEF_TABLESPACE_NAME PARTITION PARTITION_COUNT STATUS
------------------------------ --------- --------------- --------
SYS_FBA_HIST_93250
FBDA RANGE 1 VALID


SQL>
SQL> set pages600
SQL> set long 10000
SQL> select dbms_metadata.get_ddl('TABLE','SYS_FBA_HIST_93250') from dual;

DBMS_METADATA.GET_DDL('TABLE','SYS_FBA_HIST_93250')
--------------------------------------------------------------------------------

CREATE TABLE "HEMANT"."SYS_FBA_HIST_93250"
( "RID" VARCHAR2(4000),
"STARTSCN" NUMBER,
"ENDSCN" NUMBER,
"XID" RAW(8),
"OPERATION" VARCHAR2(1),
"ID_COLUMN" NUMBER,
"DATA_COLUMN" VARCHAR2(15),
"DATE_INSERTED" DATE,
"D_1729869_NEW_COL_1" VARCHAR2(5)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"
PARTITION BY RANGE ("ENDSCN")
(PARTITION "HIGH_PART" VALUES LESS THAN (MAXVALUE) SEGMENT CREATION IMMEDIATE

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 8388608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA" )


SQL>


So, although my active table (TEST_FBDA) is not partitioned, the History table is Range Partitioned on the ENDSCN column, with a single partition !   All the rows in the History table are in the MAXVALUE partition.  I wonder if and when it will ever be split ?

What if I create a partitioned table and then define Flashback Archiving on it ?

SQL> create table test_fbda_partitioned (
2 id_column number,
3 data_column varchar2(15),
4 date_inserted date)
5 partition by range (id_column)
6 (partition p_100 values less than (101),
7 partition p_200 values less than (201),
8 partition p_300 values less than (301),
9 partition p_400 values less than (401),
10 partition p_max values less than (MAXVALUE))
11 /

Table created.

SQL> alter table test_fbda_partitioned flashback archive fbda;

Table altered.

SQL> insert into test_fbda_partitioned
2 select rownum, to_char(rownum), trunc(sysdate)
3 from dual connect by level < 701;

700 rows created.

SQL> commit;

Commit complete.

SQL> update test_fbda_partitioned
2 set data_column=data_column;

700 rows updated.

SQL> commit;

Commit complete.

SQL> col subobject_name format a15
SQL> select object_type, subobject_name, object_id
2 from user_objects
3 where object_name = 'TEST_FBDA_PARTITIONED'
4 order by 3,1;

OBJECT_TYPE SUBOBJECT_NAME OBJECT_ID
----------------------- --------------- ----------
TABLE 93342
TABLE PARTITION P_100 93343
TABLE PARTITION P_200 93344
TABLE PARTITION P_300 93345
TABLE PARTITION P_400 93346
TABLE PARTITION P_MAX 93347

6 rows selected.

SQL>
SQL> select table_name
2 from user_tables
3 where table_name like '%93342%'
4 order by 1;

TABLE_NAME
--------------------------------------------------------------------------------
SYS_FBA_DDL_COLMAP_93342
SYS_FBA_HIST_93342
SYS_FBA_TCRV_93342

SQL>
SQL> select dbms_metadata.get_ddl('TABLE','SYS_FBA_HIST_93342') from dual;

DBMS_METADATA.GET_DDL('TABLE','SYS_FBA_HIST_93342')
--------------------------------------------------------------------------------

CREATE TABLE "HEMANT"."SYS_FBA_HIST_93342"
( "RID" VARCHAR2(4000),
"STARTSCN" NUMBER,
"ENDSCN" NUMBER,
"XID" RAW(8),
"OPERATION" VARCHAR2(1),
"ID_COLUMN" NUMBER,
"DATA_COLUMN" VARCHAR2(15),
"DATE_INSERTED" DATE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"
PARTITION BY RANGE ("ENDSCN")
(PARTITION "HIGH_PART" VALUES LESS THAN (MAXVALUE) SEGMENT CREATION IMMEDIATE

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 8388608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA" )


SQL>


So, even though my active table (TEST_FBDA_PARTITIONED) is created as a Range Partitioned Table partitioned on ID_COLUMN, the corresponding History table is Range Partitioned on ENDSCN with a single MAXVALUE partition.
Therefore, Oracle ignores my partitioning definition when creating the History table.  The History table seems to be always Range Partitioned on ENDSCN, starting with the MAXVALUE partition up-front.  (When will this Partition be split ?  I could search MoS for Docs / Bugs, but I'll do that exercise later).
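
A quick dictionary check (a sketch, using the History table name identified above) confirms the single-partition layout :

-- list the History table's partitions from the data dictionary
select partition_name, high_value
from user_tab_partitions
where table_name = 'SYS_FBA_HIST_93342';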

Now that we know that the History table doesn't use our Partition Key, we must wonder about Partition Pruning when running AS OF queries on the active table that need to access the History Table.  If we can't Partition Prune, can we Index the History table ?

SQL> select table_name                   
2 from user_tables
3 where table_name like 'SYS_FBA_HIST%'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
SYS_FBA_HIST_93250
SYS_FBA_HIST_93342

SQL> create index sys_fba_hist_93250_ndx_1 on sys_fba_hist_93250(id_column) tablespace fbda;

Index created.

SQL> create index sys_fba_hist_93342_ndx_1 on sys_fba_hist_93342(id_column) tablespace fbda;

Index created.

SQL> select dbms_metadata.get_ddl('INDEX','SYS_FBA_HIST_93250_NDX_1') from dual;

DBMS_METADATA.GET_DDL('INDEX','SYS_FBA_HIST_93250_NDX_1')
--------------------------------------------------------------------------------

CREATE INDEX "HEMANT"."SYS_FBA_HIST_93250_NDX_1" ON "HEMANT"."SYS_FBA_HIST_932
50" ("ID_COLUMN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"


SQL> select dbms_metadata.get_ddl('INDEX','SYS_FBA_HIST_93342_NDX_1') from dual;

DBMS_METADATA.GET_DDL('INDEX','SYS_FBA_HIST_93342_NDX_1')
--------------------------------------------------------------------------------

CREATE INDEX "HEMANT"."SYS_FBA_HIST_93342_NDX_1" ON "HEMANT"."SYS_FBA_HIST_933
42" ("ID_COLUMN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"


SQL>


Yes !  Oracle does allow us to build custom indexes on the History tables.  (I believe I saw this somewhere in the documentation or a note).
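
For example, here is a sketch of the kind of AS OF query whose History-table access could benefit from such an index (the ID value is hypothetical ; whether the custom index is actually used should be verified with an execution plan) :

-- an AS OF query that must visit the History table for older row versions
select data_column
from test_fbda as of timestamp (sysdate - 1)
where id_column = 500;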

Next Post (possibly) : More DML operations and whether auto-purging occurs based on the defined RETENTION 3 DAY period.  (Note : The Retention Period is defined at the Flashback Archive level, not at the individual table (active/History) level.  If you need to have tables with different Retention Periods, you need to define different Flashback Archives, although they can all be in the same Tablespace, as sketched below).
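
As a sketch (the archive and table names below are hypothetical), two different Retention Periods require two Flashback Archives, which may still share one Tablespace :

-- two archives with different retentions, both in the FBDA tablespace
create flashback archive fbda_3d tablespace fbda retention 3 day;
create flashback archive fbda_1y tablespace fbda retention 1 year;

alter table orders_short flashback archive fbda_3d;
alter table orders_long flashback archive fbda_1y;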
.
.
.

Categories: DBA Blogs

FBDA -- 3 : Support for TRUNCATEs

Mon, 2016-04-04 09:13
One of the points in the previous post was how ADD and DROP Column commands are supported in FBDA.

Let's look at support for TRUNCATEs.

(Just to prove that I am not using the Undo tablespace for the AS OF query, I drop the Undo tablespace) :

SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +DATA/NONCDB/PARAMETERFILE/spf
ile.267.896483727
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> create undo tablespace undotbs2;

Tablespace created.

SQL> alter system set undo_tablespace='UNDOTBS2';

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> drop tablespace undotbs1 including contents and datafiles;

Tablespace dropped.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS2
SQL>


I now TRUNCATE my test table and then run an AS OF query against it.

SQL> connect hemant/hemant
Connected.
SQL> select count(*) from test_fbda;

COUNT(*)
----------
1000

SQL> truncate table test_fbda;

Table truncated.

SQL> select count(*) from test_fbda;

COUNT(*)
----------
0

SQL> select count(*) from test_fbda as of timestamp (sysdate-1);

COUNT(*)
----------
1000

SQL>


The AS OF query was satisfied by the FBDA History table  (and not from Undo).
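
A sketch of how to see this redirection : I would expect the execution plan of the AS OF query to reference SYS_FBA_HIST_93250 when History data is needed.

-- in SQL*Plus, show only the execution plan for the AS OF query
set autotrace traceonly explain
select count(*) from test_fbda as of timestamp (sysdate-1);
set autotrace off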

However, because of yesterday's test of ADD and DROP columns (see yesterday's post), the current table structure doesn't match the structure AS OF yesterday.  So, I work around the difference by explicitly naming the columns.

SQL> insert into test_fbda select * from test_fbda as of timestamp (sysdate-1);
insert into test_fbda select * from test_fbda as of timestamp (sysdate-1)
*
ERROR at line 1:
ORA-00913: too many values


SQL>
SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> insert into test_fbda
2 select id_column, data_column, date_inserted
3 from test_fbda as of timestamp (sysdate-1);

1000 rows created.

SQL> commit;

Commit complete.

SQL>


Thus, when my query's column list matches the current table structure, I can re-insert the data from the previous version of the table.

Note how I did NOT use the SYS_FBA_HIST_93250 table as I'd shown in my previous posts.  The History table is NOT to be accessed directly, but I demonstrated queries against it to show the underlying architecture of FBDA.
.
.
.



Categories: DBA Blogs

FBDA -- 2 : FBDA Archive Table Structure

Sun, 2016-04-03 10:10
Following up on my earlier post, I look at the FBDA Archive Tables.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 3 23:26:27 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Sat Apr 02 2016 23:32:30 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA
SYS_FBA_DDL_COLMAP_93250
SYS_FBA_HIST_93250
SYS_FBA_TCRV_93250

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL>
SQL> desc sys_fba_hist_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> desc sys_fba_ddl_colmap_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
COLUMN_NAME VARCHAR2(255)
TYPE VARCHAR2(255)
HISTORICAL_COLUMN_NAME VARCHAR2(255)

SQL> desc sys_fba_tcrv_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OP VARCHAR2(1)

SQL>


The HIST table is the History table for my active table.  It adds columns that track the Rowid, the Start SCN and End SCN validity range for each row version copied into the History Table, the Transaction Identifier and the Operation, followed by the actual columns of the active table.
The DDL_COLMAP table seems to track Column Mappings.  See example below.
The TCRV table seems to be tracking Transactions ?

Let's try some DDL to ADD and DROP columns to the active table.

SQL> alter table test_fbda add (new_col_1 varchar2(5));

Table altered.

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
NEW_COL_1 VARCHAR2(5)

SQL> desc sys_fba_93250
ERROR:
ORA-04043: object sys_fba_93250 does not exist


SQL> desc sys_fba_hist_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
NEW_COL_1 VARCHAR2(5)

SQL>
SQL> select * from sys_fba_ddl_colmap_93250
2 /

STARTSCN ENDSCN XID O
---------- ---------- ---------------- -
COLUMN_NAME
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
HISTORICAL_COLUMN_NAME
--------------------------------------------------------------------------------
1697151
ID_COLUMN
NUMBER
ID_COLUMN

1697151
DATA_COLUMN
VARCHAR2(15)
DATA_COLUMN

1697151
DATE_INSERTED
DATE
DATE_INSERTED

1728713
NEW_COL_1
VARCHAR2(5)
NEW_COL_1


SQL>


The new column added to the active table is also now reflected in the History Table.  The DDL_COLMAP shows the effective start of each column (notice the STARTSCN mapped to each COLUMN_NAME).

Let's set some values in new column and see if they appear in the History Table.

SQL> update test_fbda set new_col_1 = 'New'
2 where id_column < 6;

5 rows updated.

SQL> commit;

Commit complete.

SQL> select id_column, new_col_1, scn_to_timestamp(startscn), scn_to_timestamp(endscn)
2 from sys_fba_hist_93250
3 where id_column < 6
4 order by 1,3;

ID_COLUMN NEW_C
---------- -----
SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
1
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

2
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

3
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

4
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

5
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM


SQL>


The rows copied into the History Table are *prior* image rows (copied from the Undo Area).  The STARTSCN and ENDSCN are of *yesterday* (02-April).

Let me DROP the new column.

SQL> alter table test_fbda drop (new_col_1);

Table altered.

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> desc sys_fba_hist_93250;
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
D_1729869_NEW_COL_1 VARCHAR2(5)

SQL>
SQL> select * from sys_fba_ddl_colmap_93250;

STARTSCN ENDSCN XID O
---------- ---------- ---------------- -
COLUMN_NAME
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
HISTORICAL_COLUMN_NAME
--------------------------------------------------------------------------------
1697151
ID_COLUMN
NUMBER
ID_COLUMN

1697151
DATA_COLUMN
VARCHAR2(15)
DATA_COLUMN

1697151
DATE_INSERTED
DATE
DATE_INSERTED

1728713 1729869
D_1729869_NEW_COL_1
VARCHAR2(5)
NEW_COL_1


SQL>


The dropped column is no longer in the active table and has been renamed in the History table.  (The data in the column has to be preserved, so the column is renamed rather than dropped).  Notice how the DDL_COLMAP table now shows an ENDSCN for this column, mapped to the new (renamed) column name in the History table.  The new column name seems to embed the SCN (the ENDSCN ?) at which it was dropped.

Let's confirm what data is now present in the History table  (remember : Our earlier query showed the pre-update image for this column).

SQL> select id_column, D_1729869_NEW_COL_1, scn_to_timestamp(startscn), scn_to_timestamp(endscn)
2 from sys_fba_hist_93250
3 where (id_column < 6 OR D_1729869_NEW_COL_1 is not null)
4 order by 1,3;

ID_COLUMN D_172
---------- -----
SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
1
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

1
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

1 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

2
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

2
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

2 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

3
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

3
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

3 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

4
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

4
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

4 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

5
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

5
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

5 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM


15 rows selected.

SQL>
SQL> select scn_to_timestamp(1729869) from dual;

SCN_TO_TIMESTAMP(1729869)
---------------------------------------------------------------------------
03-APR-16 11.45.27.000000000 PM

SQL>


Why do we now have 3 rows in the History table for each row in the Active Table ?  Take ID_COLUMN=1.  The first row -- for the time range 02-Apr 11:32pm to 02-Apr 11:46pm -- is as of yesterday, the same row we saw in the History table after the update in the active table.  The second row preserves the row for the time range 02-Apr 11:46pm to 03-Apr 11:41pm, to support AS OF queries up to the time of the UPDATE.  The third row, for the time range 03-Apr 11:41pm to 03-Apr 11:45pm, presents the UPDATEd value ('New') in the column up to the last transaction that updated it before the column was dropped at 03-Apr 11:45:27pm.

Thus, Oracle maintains multiple versions of the same row, including versions for DROPped columns, in the History Table.

Note :  The History Table is not supposed to be directly queried in the manner I have shown here.  The proper query against the active table would be an AS OF query which is automatically rewritten / redirected to "hit" the History table when necessary.

What about the third table table -- the TCRV table ?

SQL> l
1 select scn_to_timestamp(startscn), op , count(*)
2 from sys_fba_tcrv_93250
3 group by scn_to_timestamp(startscn), op
4* order by 2,1
SQL> /

SCN_TO_TIMESTAMP(STARTSCN) O
--------------------------------------------------------------------------- -
COUNT(*)
----------
03-APR-16 11.45.24.000000000 PM U
1000


SQL>
SQL> select count(distinct(rid)) from sys_fba_tcrv_93250;

COUNT(DISTINCT(RID))
--------------------
1000

SQL>



It shows 1000 rows as having been UPDATEd ?  (Does OP='U' mean 'UPDATE' ?)  We do know that ADD and DROP column are changes to the table.  But are they UPDATEs ?

Next post : Continuing with DML operations (more rows, some updates).  We'll see if we can decipher anything from the TCRV table as well.  (Since changed to showing support for TRUNCATEs.)
.
.
.



Categories: DBA Blogs

FBDA -- 1 : Testing Flashback Data Archive in 12c (NonCDB)

Sat, 2016-04-02 09:53
Note : At the bottom of this post, you'll find links to more (subsequent) posts on this topic.

Some testing I'd done with Flashback Data Archive (henceforth called FBDA in this and subsequent posts, if any) in 11.2.0.4 left me with uncertainty about the automatic purging of data beyond the Retention Period.  I might return to testing 11.2.0.4, but here I shall begin testing in 12.1.0.2  (NonCDB).

Setting up FBDA :

[oracle@ora12102 ~]$ sqlplus system/oracle

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 2 23:23:53 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Sat Apr 02 2016 23:20:47 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create tablespace fbda ;

Tablespace created.

SQL> create flashback archive fbda tablespace fbda retention 3 day;

Flashback archive created.

SQL> create tablespace hemant;

Tablespace created.

SQL> create user hemant identified by hemant
2 default tablespace hemant;

User created.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant create session to hemant;

Grant succeeded.

SQL> alter user hemant quota unlimited on hemant;

User altered.

SQL> alter user hemant quota unlimited on fbda;

User altered.

SQL> grant flashback archive administer to hemant;

Grant succeeded.

SQL> grant flashback archive on fbda to hemant;

Grant succeeded.

SQL>
SQL> connect / as sysdba
Connected.
SQL> grant execute on dbms_flashback_archive to hemant;

Grant succeeded.

SQL>
SQL> connect hemant/hemant
Connected.
SQL> create table test_fbda (id_column number, data_column varchar2(15), date_inserted date) tablespace hemant;

Table created.

SQL> alter table test_fbda flashback archive fbda;

Table altered.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL>


Note the Flashback Archive history table corresponding to TEST_FBDA doesn't get created immediately.

SQL> connect hemant/hemant
Connected.
SQL> insert into test_fbda
2 select rownum , to_char(rownum), trunc(sysdate)
3 from dual connect by level < 1001;

1000 rows created.

SQL> commit;

Commit complete.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL> select count(*) from test_fbda;

COUNT(*)
----------
1000

SQL> select flashback_archive_name, retention_in_days, status
2 from user_flashback_archive;

FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
RETENTION_IN_DAYS STATUS
----------------- -------
FBDA
3


SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables;

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA
FBDA
SYS_FBA_HIST_93250 ENABLED


SQL>
SQL> !sleep 300

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA
SYS_FBA_DDL_COLMAP_93250
SYS_FBA_HIST_93250
SYS_FBA_TCRV_93250

SQL>
SQL> select object_id
2 from user_objects
3 where object_name = 'TEST_FBDA'
4 and object_type = 'TABLE'
5
SQL> /

OBJECT_ID
----------
93250

SQL>


So, it took some time for the flashback archive history table (identified on the basis of the OBJECT_ID) to appear.  The background fbda process seems to wake up every 5 minutes, although it may wake up more frequently if there is more activity in the database.
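
From a suitably privileged session, the background process can be confirmed with a sketch like this (it assumes the process registers itself in V$PROCESS with PNAME 'FBDA') :

-- check for the FBDA background process
select pname, spid from v$process where pname = 'FBDA';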

SQL> select trunc(date_inserted), count(*)
2 from test_fbda
3 group by trunc(date_inserted)
4 /

TRUNC(DAT COUNT(*)
--------- ----------
02-APR-16 1000

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

no rows selected

SQL> update test_fbda
2 set data_column = data_column
3 where rownum < 451;

450 rows updated.

SQL> commit;

Commit complete.

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

no rows selected

SQL>
SQL> !sleep 180

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

TRUNC(DAT COUNT(*)
--------- ----------
02-APR-16 450

SQL>
SQL> select scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted
4 order by 1;

SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
DATE_INSE COUNT(*)
--------- ----------
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM
02-APR-16 450


SQL>


Notice that not all 1000 rows got copied to the FBDA History table.  Only the 450 rows that I updated were copied in.  They are tracked by SCN and Timestamp.  (The "DATE_INSERTED" column is my own date column ; Oracle wouldn't be using that column to track DML dates for rows, as the values in that column are controlled by me -- the application or developer -- not Oracle).

Note :  The History Table is not supposed to be directly queried in the manner I have shown here.
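
The supported approach is an AS OF query against the active table, as in this sketch (the timestamp is hypothetical and must fall within the Retention Period) :

-- retrieve the pre-update image without touching the History table directly
select count(*)
from test_fbda as of timestamp (to_timestamp('02-APR-16 11.40.00 PM','DD-MON-RR HH.MI.SS PM'));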

Tomorrow :  More Rows, and some DDLs as well.

Post 2 here.  It covers some of the architectural components and support for ADD / DROP column DDL.

Post 3 here.  It shows support for TRUNCATEs.

Post 4 here.  On Partitions and Indexes.

Post 5 here.  On (Auto)Purging.

Post 6 here.  On Bug Notes
.
.
.


Categories: DBA Blogs

Now an OCP 12c

Sun, 2016-03-27 09:59
I have upgraded my Oracle Database Certified Professional status from 11g to 12c, having passed the 1Z0-060 Exam last Monday.

This is the list of material I used :

1.  Documentation :  Oracle 12c Database Documentation.  Not just the New Features Guide but a large number of pages spread throughout the documentation.  I went through *selected* chapters / pages of the Administrator's Guide, Reference Guide, PLSQL Package Guide, Developers Guide and Backup and Recovery Guide

2.  Books :
a.  Oracle Database 12c New Features  by Robert G. Freeman  (Oracle Press)
b.  OCP Upgrade to Oracle Database 12c Exam Guide  by Sam R. Alapati  (Oracle Press)
c.  Study Guide for 1Z0-060 Upgrade to Oracle Database 12c  by Matthew Morris (Self-Published as Oracle Certification Prep)

3.  FlashCards
a.  FlashCards with Test Software for 1Z0-060 at https://www.selftestsoftware.com
b.  FlashCards OCPFlash1Z0-060 on Google PlayStore  by Matthew Morris

4.  Practice Tests
a.  Practice Tests and Test Software for 1Z0-060 at https://www.selftestsoftware.com
b.  Oracle Certification Prep Practice Test for 1Z0-060  at http://www.oraclecertificationprep.com

5.  Links to other Resources :  http://www.oraclecertificationprep.com and  www.oracle-base.com


Note : This does NOT mean that I didn't have practice environments.  The books alone aren't sufficient.   I created three practice environments :
a.  Fresh Install of Oracle Linux, RDBMS 12.1.0.1 and creation of a NonCDB database on FileSystem
b.  Downloaded Oracle Virtual Box PreBuilt VM with 12.1.0.2 and MultiTenant Database
c.  Fresh Install of Oracle Linux, creation of disks for ASM, install of Grid Infrastructure and RDBMS 12.1.0.2 and creation of a NonCDB database on ASM
.
.
.

Categories: DBA Blogs

Compression -- 5 : OLTP Compression

Sat, 2016-03-26 09:03
Taking the test case from my first Blog Post on compression,  is there any way to support OLTP ?
As demonstrated in the first Blog Post and here, once the table undergoes DML, its size changes.

PDB1@ORCL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name in ('SOURCE_DATA','COMPRESSED_1')
4 /

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
COMPRESSED_1 TABLE 12
SOURCE_DATA TABLE 49

PDB1@ORCL>
PDB1@ORCL> update compressed_1
2 set owner=owner;

364496 rows updated.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_1';

BYTES/1048576
-------------
22

PDB1@ORCL>
PDB1@ORCL> update compressed_1
2 set owner=substr(owner||owner,1,30);

364496 rows updated.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_1';

BYTES/1048576
-------------
58

PDB1@ORCL>

So, I set up another table that supports OLTP compression.  This requires the Advanced Compression licence.  Make sure that you have purchased the licence !

PDB1@ORCL> create table compressed_2 tablespace hemant COMPRESS FOR OLTP
2 as select * from source_data where 1=2;

Table created.

PDB1@ORCL> insert /*+ APPEND */ into compressed_2
2 select * from source_data;

364496 rows created.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_2';

BYTES/1048576
-------------
13

PDB1@ORCL> update compressed_2
2 set owner=owner;

364496 rows updated.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_2';

BYTES/1048576
-------------
13

PDB1@ORCL> update compressed_2
2 set owner=substr(owner||owner,1,30);

364496 rows updated.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_2';

BYTES/1048576
-------------
29

PDB1@ORCL>
PDB1@ORCL> select table_name, pct_free, compression, compress_for
2 from user_tables
3 where table_name like 'COMPRESS%'
4 order by 1;

TABLE_NAME PCT_FREE COMPRESS
------------------------------ ---------- --------
COMPRESS_FOR
------------------------------
COMPRESSED_1 0 ENABLED
BASIC

COMPRESSED_2 10 ENABLED
ADVANCED


PDB1@ORCL>


Note that the initial size of COMPRESSED_2 is slightly larger because it starts with PCT_FREE=10.
The UPDATE does seem to take longer to run.  COMPRESS FOR OLTP preserves PCT_FREE at 10, and it handles UPDATEs better than BASIC Compression does.  However, if you have no subsequent UPDATEs to the data, BASIC Compression (which does not need an additional licence when running Enterprise Edition) will suffice.

I urge you to *TEST* COMPRESS FOR OLTP with the different types of UPDATE operations that you actually have in your live database before you choose to implement it.

In 12c, COMPRESS FOR OLTP is called ROW STORE COMPRESS ADVANCED.  I presented the 11g style COMPRESS FOR OLTP syntax so that you could use it in 11g.
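
As a sketch, the 12c-style equivalent of the table creation above would be (the table name is hypothetical, so as not to clash with COMPRESSED_2) :

-- 12c syntax for what 11g calls COMPRESS FOR OLTP
create table compressed_2a tablespace hemant ROW STORE COMPRESS ADVANCED
as select * from source_data where 1=2;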
.
.
.


Categories: DBA Blogs

Compression -- 4 : RMAN (BASIC) Compression

Sat, 2016-03-19 06:34
BASIC Compression in RMAN is free with the Enterprise Edition. Advanced Compression with a specified algorithm requires the Advanced Compression Option Licence.
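
For instance, a specific algorithm can be set as in this sketch, run in RMAN (LOW, MEDIUM and HIGH all require the Advanced Compression Option ; BASIC does not) :

CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';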

RMAN> backup as compressed backupset database root;

Starting backup at 19-MAR-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=48 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux_9fxmvhl3_.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_system_9fxmx6s1_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs1_9fxn0vgg_.dbf
input datafile file number=00006 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_users_9fxn0t8s_.dbf
channel ORA_DISK_1: starting piece 1 at 19-MAR-16
channel ORA_DISK_1: finished piece 1 at 19-MAR-16
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_03_19/o1_mf_nnndf_TAG20160319T081217_cgtjk2ms_.bkp tag=TAG20160319T081217 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:02:16
Finished backup at 19-MAR-16

Starting Control File and SPFILE Autobackup at 19-MAR-16
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/autobackup/2016_03_19/o1_mf_s_906884075_cgtjofh1_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 19-MAR-16

RMAN> list backup;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Full 385.78M DISK 00:02:09 19-MAR-16
BP Key: 1 Status: AVAILABLE Compressed: YES Tag: TAG20160319T081217
Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_03_19/o1_mf_nnndf_TAG20160319T081217_cgtjk2ms_.bkp
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 3034354 19-MAR-16 /u01/app/oracle/oradata/ORCL/datafile/o1_mf_system_9fxmx6s1_.dbf
3 Full 3034354 19-MAR-16 /u01/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux_9fxmvhl3_.dbf
4 Full 3034354 19-MAR-16 /u01/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs1_9fxn0vgg_.dbf
6 Full 3034354 19-MAR-16 /u01/app/oracle/oradata/ORCL/datafile/o1_mf_users_9fxn0t8s_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2 Full 17.20M DISK 00:00:03 19-MAR-16
BP Key: 2 Status: AVAILABLE Compressed: NO Tag: TAG20160319T081435
Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/autobackup/2016_03_19/o1_mf_s_906884075_cgtjofh1_.bkp
SPFILE Included: Modification time: 19-MAR-16
SPFILE db_unique_name: ORCL
Control File Included: Ckp SCN: 3034519 Ckp time: 19-MAR-16

RMAN>
CDB$ROOT@ORCL> l
1 select status, command_id, to_char(start_time,'DD-MON HH24:MI') Start_At, to_char(end_time,'DD-MON HH24:MI') End_At,
2 trunc(input_bytes/1048576) Input_MB, trunc(output_bytes/1048576) Output_MB, input_type
3 from v$rman_backup_job_details
4* order by start_time
CDB$ROOT@ORCL>
CDB$ROOT@ORCL> /

STATUS COMMAND_ID START_AT END_AT INPUT_MB OUTPUT_MB
------------------- ------------------------- ----------------- ------------- ---------- ----------
INPUT_TYPE
-------------
COMPLETED 2016-03-19T08:12:02 19-MAR 08:12 19-MAR 08:14 1807 402
DB FULL


CDB$ROOT@ORCL>

Compare the INPUT_MB and OUTPUT_MB to see the compression achieved.
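
A sketch of a direct ratio calculation (v$rman_backup_job_details also exposes a COMPRESSION_RATIO column) :

-- compression achieved per backup job
select trunc(input_bytes/1048576) input_mb,
       trunc(output_bytes/1048576) output_mb,
       round(input_bytes/output_bytes,2) achieved_ratio,
       round(compression_ratio,2) reported_ratio
from v$rman_backup_job_details
order by start_time;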

Default compression can also be configured with

RMAN> show compression algorithm;

RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default

RMAN>
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;

new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters are successfully stored

RMAN>
RMAN> backup datafile 1;

Starting backup at 19-MAR-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_system_9fxmx6s1_.dbf
channel ORA_DISK_1: starting piece 1 at 19-MAR-16
channel ORA_DISK_1: finished piece 1 at 19-MAR-16
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_03_19/o1_mf_nnndf_TAG20160319T082704_cgtkdrdh_.bkp tag=TAG20160319T082704 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
Finished backup at 19-MAR-16

Starting Control File and SPFILE Autobackup at 19-MAR-16
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/autobackup/2016_03_19/o1_mf_s_906884879_cgtkgk5d_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 19-MAR-16

RMAN> list backup of datafile 1;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Full 385.78M DISK 00:02:09 19-MAR-16
BP Key: 1 Status: AVAILABLE Compressed: YES Tag: TAG20160319T081217
Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_03_19/o1_mf_nnndf_TAG20160319T081217_cgtjk2ms_.bkp
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 3034354 19-MAR-16 /u01/app/oracle/oradata/ORCL/datafile/o1_mf_system_9fxmx6s1_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
3 Full 207.82M DISK 00:00:55 19-MAR-16
BP Key: 3 Status: AVAILABLE Compressed: YES Tag: TAG20160319T082704
Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_03_19/o1_mf_nnndf_TAG20160319T082704_cgtkdrdh_.bkp
List of Datafiles in backup set 3
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 3035302 19-MAR-16 /u01/app/oracle/oradata/ORCL/datafile/o1_mf_system_9fxmx6s1_.dbf

RMAN>
CDB$ROOT@ORCL> l
1 select status, command_id, to_char(start_time,'DD-MON HH24:MI') Start_At, to_char(end_time,'DD-MON HH24:MI') End_At,
2 trunc(input_bytes/1048576) Input_MB, trunc(output_bytes/1048576) Output_MB, input_type
3 from v$rman_backup_job_details
4* order by start_time
CDB$ROOT@ORCL> /

STATUS COMMAND_ID START_AT END_AT INPUT_MB OUTPUT_MB
------------------- ------------------------- ----------------- ----------------- --------- ---------
INPUT_TYPE
-------------
COMPLETED 2016-03-19T08:12:02 19-MAR 08:12 19-MAR 08:14 1807 402
DB FULL

COMPLETED 2016-03-19T08:18:29 19-MAR 08:27 19-MAR 08:28 803 225
DATAFILE FULL


CDB$ROOT@ORCL>
CDB$ROOT@ORCL> !du -sh /u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_03_19/o1_mf_nnndf_TAG20160319T082704_cgtkdrdh_.bkp
208M /u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_03_19/o1_mf_nnndf_TAG20160319T082704_cgtkdrdh_.bkp

CDB$ROOT@ORCL>


Note the slight discrepancy in the reported backuppiece size.  The LIST BACKUP shows it as 207.82MB and the OS level "du -sh" command shows it as 208MB, while the query on v$rman_backup_job_details, dividing output_bytes by 1048576, shows 225MB (truncated to units) -- most likely because the job-level figure also includes the roughly 17MB Control File and SPFILE Autobackup piece taken in the same job.
.
.
.

Categories: DBA Blogs

Compression -- 3 : Index (Key) Compression

Sun, 2016-03-13 03:34
Unlike Table Compression that uses deduplication of column values, Index Compression is based on the keys.  Key Compression is also called Prefix Compression.

This relies on eliminating repeated leading key values.  For example, if the leading column of a composite index has frequently repeated values then, because an Index is always an organised (sorted) structure, the repeated values appear "sequentially" in the index.  Key Compression can eliminate these repeated values.

Thus, it becomes obvious that Index Key Compression is usable for
a.  A Composite Index of 2 or more columns
b.  Repeated appearances of values in the *leading* key columns
c.  Compression defined for a maximum of n-1 columns of a Unique Index (where n is the number of columns in the index).  That is, the last column of a Unique Index cannot be compressed.
Note that a Non-Unique Index automatically has the ROWID appended to it, so Key Compression can be applied to all the columns defined.

Let's look at a few examples.

Starting with creating a fairly large table (that is a multiplied copy of DBA_OBJECTS)

PDB1@ORCL> create table target_data as select * from source_data where 1=2;

Table created.

PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

364496 rows created.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

364496 rows created.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

364496 rows created.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL>
PDB1@ORCL> desc target_data
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER VARCHAR2(128)
OBJECT_NAME VARCHAR2(128)
SUBOBJECT_NAME VARCHAR2(128)
OBJECT_ID NUMBER
DATA_OBJECT_ID NUMBER
OBJECT_TYPE VARCHAR2(23)
CREATED DATE
LAST_DDL_TIME DATE
TIMESTAMP VARCHAR2(19)
STATUS VARCHAR2(7)
TEMPORARY VARCHAR2(1)
GENERATED VARCHAR2(1)
SECONDARY VARCHAR2(1)
NAMESPACE NUMBER
EDITION_NAME VARCHAR2(128)
SHARING VARCHAR2(13)
EDITIONABLE VARCHAR2(1)
ORACLE_MAINTAINED VARCHAR2(1)

PDB1@ORCL>


What composite index is a good candidate for Key Compression ?
*Not* an Index that begins with OBJECT_ID as that is a Unique value.

Let's compare two indexes (compressed and non-compressed) on (OWNER, OBJECT_TYPE, OBJECT_NAME).

PDB1@ORCL> create index target_data_ndx_1_comp on
2 target_data (owner, object_type, object_name) compress 2;

Index created.

PDB1@ORCL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

PL/SQL procedure successfully completed.

PDB1@ORCL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1_COMP'
4 /

LEAF_BLOCKS
-----------
5629

PDB1@ORCL>


PDB1@ORCL> drop index target_data_ndx_1_comp
2 /

Index dropped.

PDB1@ORCL> create index target_data_ndx_2_nocomp on
2 target_data (owner, object_type, object_name) ;

Index created.

PDB1@ORCL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_2_NOCOMP');

PL/SQL procedure successfully completed.

PDB1@ORCL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_2_NOCOMP'
4 /

LEAF_BLOCKS
-----------
7608

PDB1@ORCL>


Note the "compress 2" specification for the first index.  That is an instruction to compress based on the leading 2 columns.
Thus, the compressed index is 5,629 blocks while the normal, non-compressed index is 7,608 blocks.  That is a saving of 26% in index size.

Why did I choose OWNER, OBJECT_TYPE as the leading columns ?  Because I expected a high level of repetition in the values of these columns.
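
That expectation can also be validated empirically, as in this sketch (note that VALIDATE STRUCTURE takes a lock on the table, so avoid running it on a busy production system) :

-- ask Oracle for the optimal prefix length and the expected saving
analyze index target_data_ndx_2_nocomp validate structure;
select opt_cmpr_count, opt_cmpr_pctsave from index_stats;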


Note : I have not explored Advanced Index Compression available in 12.1.0.2
.
.

Categories: DBA Blogs

COMPRESSION -- 2 : Compressed Table Partitions

Sun, 2016-03-06 09:38
A Partitioned Table can choose to have a mix of COMPRESS and NOCOMPRESS Partitions.

As in this case where the first 3 partitions are defined as COMPRESS and the last as NOCOMPRESS :  :

[oracle@localhost Hemant]$ sqlplus hemant/hemant@orcl

SQL*Plus: Release 12.1.0.2.0 Production on Sun Mar 6 23:20:19 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Sun Mar 06 2016 23:19:11 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> @create_SALESHIST
SQL> spool create_SALESHIST
SQL>
SQL> drop table SALESHIST;

Table dropped.

SQL>
SQL> alter session set "_partition_large_extents"=FALSE;

Session altered.

SQL>
SQL> create table SALESHIST
2 (
3 PROD_ID NUMBER NOT NULL ,
4 CUST_ID NUMBER NOT NULL ,
5 TIME_ID DATE NOT NULL ,
6 CHANNEL_ID NUMBER NOT NULL ,
7 PROMO_ID NUMBER NOT NULL ,
8 QUANTITY_SOLD NUMBER(10,2) NOT NULL ,
9 AMOUNT_SOLD NUMBER(10,2) NOT NULL
10 )
11 NOCOMPRESS LOGGING
12 TABLESPACE USERS
13 PARTITION BY RANGE (TIME_ID)
14 (
15 PARTITION SALES_1998
16 VALUES LESS THAN (TO_DATE('1999-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
17 'NLS_CALENDAR=GREGORIAN'))
18 SEGMENT CREATION IMMEDIATE
19 COMPRESS BASIC NOLOGGING
20 TABLESPACE SALES_1998 ,
21 PARTITION SALES_1999
22 VALUES LESS THAN (TO_DATE('2000-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
23 'NLS_CALENDAR=GREGORIAN'))
24 SEGMENT CREATION IMMEDIATE
25 COMPRESS BASIC NOLOGGING
26 TABLESPACE SALES_1999 ,
27 PARTITION SALES_2000
28 VALUES LESS THAN (TO_DATE('2001-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
29 'NLS_CALENDAR=GREGORIAN'))
30 SEGMENT CREATION IMMEDIATE
31 COMPRESS BASIC NOLOGGING
32 TABLESPACE SALES_2000 ,
33 PARTITION SALES_2001
34 VALUES LESS THAN (TO_DATE('2002-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
35 'NLS_CALENDAR=GREGORIAN'))
36 SEGMENT CREATION IMMEDIATE
37 NOCOMPRESS NOLOGGING
38 TABLESPACE SALES_2001 )
39 /

Table created.

SQL>
SQL> spool off
SQL>
SQL> col partition_name format a30
SQL> select partition_name, compression, compress_for
2 from user_tab_partitions
3 where table_name = 'SALESHIST'
4 order by partition_position;

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------------------------
SALES_1998 ENABLED BASIC
SALES_1999 ENABLED BASIC
SALES_2000 ENABLED BASIC
SALES_2001 DISABLED

SQL>
SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'SALESHIST';

COMPRESS COMPRESS_FOR
-------- ------------------------------


SQL>


The Compression attributes actually apply to Segments.  So the Partition Segments each carry the definition, but the Table itself, being segmentless, does not show one.

Note that I am still demonstrating BASIC Compression. So the compression is applied only on Direct Path INSERT.  As in :

SQL> insert /*+ APPEND */ into saleshist select * from sh.sales;

918843 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select partition_name, tablespace_name, bytes/1024
2 from user_segments
3 where segment_name = 'SALESHIST'
4 and segment_type = 'TABLE PARTITION'
5 order by 1;

PARTITION_NAME TABLESPACE_NAME BYTES/1024
------------------------------ ------------------------------ ----------
SALES_1998 SALES_1998 3072
SALES_1999 SALES_1999 4096
SALES_2000 SALES_2000 4096
SALES_2001 SALES_2001 11264

SQL>


This demonstrates that it is possible to
a) Have some Partitions defined as COMPRESSed and others as NOCOMPRESSed
b) Create different Partitions in different Tablespaces

Before the advent of the 12c ILM features (and still, without using them), this was and is a method to manage historical data : compressing it and moving or placing it (i.e. the respective tablespace datafile(s)) on low-cost storage as desired.
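
As a sketch (the file paths are hypothetical), an aged partition's tablespace can be frozen and its datafile relocated to cheaper storage :

-- freeze the oldest partition's tablespace
alter tablespace sales_1998 read only;
-- take it offline, copy the datafile to low-cost storage at the OS level,
-- then re-point Oracle at the new location
alter tablespace sales_1998 offline;
-- host cp /u01/oradata/sales_1998.dbf /archive/sales_1998.dbf
alter tablespace sales_1998 rename datafile '/u01/oradata/sales_1998.dbf' to '/archive/sales_1998.dbf';
alter tablespace sales_1998 online;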

Note : For the purposes of this demo, I used the parameter "_partition_large_extents"=FALSE.  Oracle's default initial extent size for Table Partitions since 11.2.0.2 has been 8MB, which would have been excessively large for this demo.  If you are using Hybrid Columnar Compression and/or Exadata, Oracle advises against changing this parameter.
.
.
.

Categories: DBA Blogs