DBA Blogs

Mind your rdbms/audit on 12c

Pythian Group - Tue, 2016-04-05 09:17

Recently we’ve run into an interesting question from one of our clients. They were seeing messages of the following form in syslog:


"Aug 11 11:56:02 ***** kernel: EXT3-fs warning (device ******): ext3_dx_add_entry: Directory index full!"

I hadn’t encountered this before, so I did a bit of research. My initial suspicion ended up being correct: it was due to too many files being created somewhere in that file system. I had a look around, and eventually checked out the ORACLE_HOME of the ASM / Grid Infrastructure software, which is running version 12.1.0.2 on that host.

I snooped around using du -sh to check which directories or sub-directories might be the culprit, and the disk usage utility came to a halt after the “racg” directory. Next in line would be “rdbms”. The light bulb now glowed somewhat brighter. Entering the rdbms/audit directory, I issued the command you commonly would to look at a directory’s contents: “ls”.

Five minutes later, there was still no output on my screen. Okay, we found the troublemaker. So we’re now faced with a directory holding potentially millions of files. We are all aware that a plain “rm *” can’t cope with a situation like this: the shell would churn for a couple of minutes expanding the wildcard, and then yell “argument list too long” at us. Alternatively, we could use find, combined with -exec (bad idea), -delete, or even pipe into rm using xargs. Looking around a bit with our good ol’ friend Google, I came across a very interesting blog post by Sarath Pillai.
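For reference, a find-based cleanup would look something like this (a sketch; the Grid Infrastructure home path is an assumption, and -delete requires GNU findutils):

# count the files first, then delete them; neither builds a huge argument list
find /u01/app/oracle/grid/rdbms/audit -name '*.aud' | wc -l
find /u01/app/oracle/grid/rdbms/audit -name '*.aud' -delete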

I took his Perl one-liner, adjusted it a wee bit since I was curious how many files we actually had in there, and ran it on a sandbox system against a directory with 88,000 files in it:


perl -e 'my $i=0;for(<*>){$i++;((stat)[9]<(unlink))} print "Files deleted: $i\n"'

It completed in 4.5 seconds. That’s pretty good. In Sarath’s tests he was able to delete half a million files in roughly a minute. Fair enough.

After getting the OK from the client, we ran it on the big beast. It took 10 minutes.


Files deleted: 9129797

9.1 million files. Now here comes the interesting bit. This system has been actively using 12.1.0.2 ASM since May 6th, 2015. That’s only 3 months. That translates to 3 million files per month. Is this really a desirable feature? Do we need to start running Hadoop just to be able to mine the data in there?

Looking at some of the files, it seems ASM is not only logging user interactions there, but also anything and everything done by any process that connects to ASM.

As I was writing this, I happened to take another peek at the directory.


[oracle@cc1v3 audit]$ ls -1 | wc -l
9134657

Remember those numbers from before? Three million a month? Double that.

I suspect this was because the directory index had been full, and Linux then re-populated the index with the next batch of files, until it ran full again.

A new syslog entry, created at just that time, seems to confirm the theory:

Aug 12 00:09:11 ***** kernel: EXT3-fs warning (device ******): ext3_dx_add_entry: Directory index full!

After running the Perl one-liner again, we deleted another vast number of files:


Files deleted: 9135386

It seems the root cause is the timestamp that Oracle 12.1 adds to the file names of the audit files it writes. The file names are much more unique, which allows Oracle to generate many more of them. Where in previous versions, with an adequately sized file system, you’d probably be okay for a year or more, on 12.1.0.2, on an active database (and our big beast is very active) you have to schedule a job to remove them, and ensure it runs frequently (think 18+ million files in 3 months to put “frequently” into perspective).
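A cleanup job along these lines can be scheduled from cron (a sketch; the audit directory path, file pattern and 7-day retention are assumptions to adapt):

# crontab entry: purge ASM audit files older than 7 days, nightly at 01:00
0 1 * * * /usr/bin/find /u01/app/oracle/grid/rdbms/audit -name '*.aud' -mtime +7 -delete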

Categories: DBA Blogs

Enabling Large Pages on Oracle Database 11g running on IBM AIX

Pythian Group - Mon, 2016-04-04 17:30

To implement Large Pages on AIX, you first need to choose a large page size at the OS level.
AIX supports multiple page sizes: 4KB, 64KB, 16MB, and 16GB.

In this example we will be using a large page size of 16MB.

Steps for implementation:

1- Based on MOS Doc ID 372157.1, you first need to enable Large Pages at the OS level:

# vmo -r  -o lgpg_size=16777216 -o lgpg_regions=<Total number of pages>
# vmo -o lru_file_repage=0
# vmo -p -o v_pinshm=1
# lsuser -a capabilities oracle
# chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
# bosboot -a

This requires a server reboot.

For complete instructions, please review MOS note ID 372157.1.
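As a worked example (a sketch; assuming an 8 GB SGA and the 16 MB page size, matching the calculation in step 3 below):

# 8589934592 / 16777216 = 512 pages, plus one extra page and some headroom
# vmo -r -o lgpg_size=16777216 -o lgpg_regions=520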

2- Setting parameters at instance level

On AIX databases you only need to set LOCK_SGA to TRUE:

alter system set lock_sga=TRUE scope=spfile;

Note: On AIX databases, the USE_LARGE_PAGES parameter has NO impact.
That parameter is only valid for databases running on Linux; its value, even if set to FALSE, is ignored on AIX.

By default, when Large Pages are available on AIX, they will be used by database instances regardless of the USE_LARGE_PAGES parameter value. You only need to set LOCK_SGA.

3- Restart the instance and confirm Large Pages are in use:

After setting LOCK_SGA, the instance must be restarted.
As explained above, when Large Pages are available at the OS level they will be used by the instance; the key point here is how to confirm whether Large Pages are actually in use.

So how do you check whether Large Pages are in use by the Oracle instance?

For Oracle 11g running on AIX, no informational message is written to the alert log like the one we see in the alert log of databases running on Linux.

So for your database instance running on AIX, do NOT expect lines like the following in the alert log:

****************** Large Pages Information *****************
Total Shared Global Region in Large Pages = xx MB (100%)
Large Pages used by this instance: xxx (xxx MB)
Large Pages unused system wide = x (xxx MB) (alloc incr 4096 KB)
Large Pages configured system wide = xxx (xxx MB)
Large Page size = 16 MB
***********************************************************

The only way to make sure Large Pages are being used by the instance is to check memory usage at the OS level.

Suppose SGA_TARGET in your instance is 8G.
The total number of Large Pages (with a size of 16M) will be 8G/16M + 1, which is: 8589934592 / 16777216 + 1 = 513.

Check the number of large 16M pages in use at OS level before starting your instance:

$ vmstat -P all

System configuration: mem=98304MB

pgsz            memory                           page
----- -------------------------- ------------------------------------
           siz      avm      fre    re    pi    po    fr     sr    cy
   4K  4420992  2926616   487687     0     0     5  1280   2756     0
  64K   582056   581916      754     0     0     0     0      0     0
  16M     2791       87     2704     0     0     0     0      0     0

In this example, the number of 16M pages in use before starting the instance is 87, out of a total of 2,791 available pages.

We start the instance with SGA size of 8G:

SQL> startup
ORACLE instance started.

Total System Global Area 8551575552 bytes
Fixed Size                  2238616 bytes
Variable Size            2348812136 bytes
Database Buffers         6190792704 bytes
Redo Buffers                9732096 bytes
Database mounted.
Database opened.
SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 8G

SQL> show parameter lock_sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     TRUE

Then we check Large Pages in use again:

$ vmstat -P all

System configuration: mem=98304MB

pgsz            memory                           page
----- -------------------------- ------------------------------------
           siz      avm      fre    re    pi    po    fr     sr    cy
   4K  4428160  2877041   420004     0     0     5  1279   2754     0
  64K   581608   530522    51695     0     0     0     0      0     0
  16M     2791      600     2191     0     0     0     0      0     0

As you can see, the total number of 16M pages in use is now 600, which is exactly 513 pages more than before instance startup.
This proves that 16M pages are being used by our instance.

You can also check memory usage of your instance by checking one of the instance processes like pmon:

$ ps -ef|grep pmon
  oracle 14024886 31392176   0 14:05:34  pts/0  0:00 grep pmon
  oracle 41681022        1   0   Mar 11      -  3:12 ora_pmon_KBS

Then check that the memory used by this process comes from 16M pages:

$ svmon -P 41681022

-------------------------------------------------------------------------------
     Pid Command          Inuse      Pin     Pgsp  Virtual 64-bit Mthrd  16MB
41681022 oracle         2180412  2109504     1778  2158599      Y     N     Y

     PageSize                Inuse        Pin       Pgsp    Virtual
     s    4 KB               31820          0       1650       9975
     m   64 KB                2959        516          8       2961
     L   16 MB                 513        513          0        513

I hope this will be useful for you, and good luck.

Categories: DBA Blogs

More Fun With Oracle Timestamp Math

Pythian Group - Mon, 2016-04-04 14:23
Timestamp Math

Several years ago I wrote an article on Oracle date math.
Amazingly, that article was still available online at the time of this writing.

Working With Oracle Dates

An update to that article is long overdue.
While date math with the DATE data type is fairly well known and straightforward, date math with Oracle TIMESTAMP data is less well known and somewhat more difficult.

Data Types and Functions

Let’s begin by enumerating the data types and functions that will be discussed.

Datetime and Interval Data Types

Documentation for Datetime and Interval Data Types

  • Timestamp
  • Timestamp with Time Zone
  • Interval Day to Second
  • Interval Year to Month
Datetime Literals

Documentation for Datetime Literals

  • Date
  • Timestamp
  • Timestamp with Time Zone
  • Timestamp with Local Time Zone
Interval Literals

Documentation for Interval Literals

  • Interval Day to Second
  • Interval Year to Month
Datetime/Interval Arithmetic

Documentation for Datetime/Interval Arithmetic

There is no link directly to that heading; just scroll down the page until you find it.

Timestamp Functions

Documentation for Datetime Functions

There are quite a few of these available. Most readers will already be familiar with many of them, so only some of the more interesting timestamp-related functions will be covered.

  • extract
  • to_dsinterval
  • to_yminterval
Timestamp Internals

It is always interesting to have some idea of how different bits of technology work. In Working With Oracle Dates I showed how Date values are stored in the database, as well as how a Date stored in the database differs somewhat from a date variable.

Let’s start by storing some data in a timestamp column and comparing how it differs from systimestamp.

Test table for Timestamp Math Blog:

col c1_dump format a70
col c1 format a35
col funcname format a15

set linesize 200 trimspool on
set pagesize 60

drop table timestamp_test purge;

create table timestamp_test (
c1 timestamp
)
/

insert into timestamp_test values(systimestamp );
insert into timestamp_test values(systimestamp - 366);
commit;

select
'timestamp' funcname, c1, dump(c1) c1_dump
from timestamp_test
union all
select
'systimestamp' funcname, systimestamp, dump(systimestamp) systimestamp_dump
from dual
/

FUNCNAME        C1                                  C1_DUMP
--------------- ----------------------------------- ----------------------------------------------------------------------
timestamp       26-MAR-16 03.09.27.649491 PM -07:00 Typ=180 Len=11: 120,116,3,26,16,10,28,38,182,114,56
timestamp       26-MAR-15 03.09.27.000000 PM -07:00 Typ=180 Len=7: 120,115,3,26,16,10,28
systimestamp    26-MAR-16 03.09.27.687416 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,9,27,0,192,34,249,40,252,0,5,0,0,0,0,0

3 rows selected.

One of the first things you might notice is that the value for Typ is 180 for TIMESTAMP columns, but for SYSTIMESTAMP Typ=188.
The difference is due to TIMESTAMP being an internal data type as stored in the database, while SYSTIMESTAMP is dependent on the compiler used to create the executables.

Another difference is the length; the first TIMESTAMP value has a length of 11, whereas the SYSTIMESTAMP value’s length is 20. And what about that second TIMESTAMP value? Why is its length only 7?

TIMESTAMP with length of 7

An example will show why the second row inserted into TIMESTAMP_TEST has a length of only 7.

  1* select dump(systimestamp) t1, dump(systimestamp-1) t2, dump(sysdate) t3 from dual
15:34:32 ora12c102rac01.jks.com - jkstill@js122a1 SQL- /

T1                                       T2                                       T3
---------------------------------------- ---------------------------------------- ----------------------------------------
Typ=188 Len=20: 224,7,3,22,22,34,35,0,16 Typ=13 Len=8: 224,7,3,21,18,34,35,0      Typ=13 Len=8: 224,7,3,22,18,34,35,0
0,181,162,17,252,0,5,0,0,0,0,0


1 row selected.

T2 was implicitly converted to the same data type as SYSDATE because standard date math was performed on it.

The same thing happened when the second row was inserted into TIMESTAMP_TEST.

Oracle implicitly converted the data to a DATE data type, and then implicitly converted it back to a TIMESTAMP; only the standard date information was available following the first implicit conversion.

You may have noticed that in this example the length of the data is 8, while that stored in the table was 7. This is due to the use of SYSDATE, which is an external data type, whereas DATE data stored in the database uses an internal data type that always has a length of 7.
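A quick way to see the difference (a sketch using to_dsinterval, which is covered later in this article) is to compare the two dumps side by side; the numeric subtraction yields Typ=13 as above, while the interval subtraction preserves Typ=188:

select dump(systimestamp - 1) implicit_date_conversion from dual;
select dump(systimestamp - to_dsinterval('P1D')) still_a_timestamp from dual;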

SYSTIMESTAMP Byte Values

Let’s see if we can determine how each byte is used in a SYSTIMESTAMP value

The following SQL uses the current time as a baseline, starts at a point 10 seconds earlier, and shows the timestamp values along with their internal representation.

 

col t1 format a35
col t2 format a38
col dump_t1 format a70
col dump_t2 format a70

set linesize 250 trimspool on

/*
 using to_dsinterval() allows performing timestamp math without implicit conversions

see https://en.wikipedia.org/wiki/ISO_8601
 for an explanation of the PTnS notation being used in to_dsinterval()

*/

alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

-- subtract level seconds (1 through 10) from the current timestamp
select
 --systimestamp t1,
 --dump(systimestamp) dump_t1,
 systimestamp - to_dsinterval('PT' || to_char(level) || 'S') t2,
 dump(systimestamp - to_dsinterval('PT' || to_char(level) || 'S')) dump_t2
from dual connect by level <= 10
order by level desc
/


T2                                     DUMP_T2
-------------------------------------- ----------------------------------------------------------------------
26-MAR-16 03.34.55.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,55,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.34.56.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,56,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.34.57.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,57,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.34.58.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,58,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.34.59.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,59,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.35.00.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,35,0,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.35.01.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,35,1,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.35.02.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,35,2,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.35.03.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,35,3,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.35.04.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,35,4,0,152,108,205,20,252,0,5,0,0,0,0,0

10 rows selected.

 

From the output we can see that seconds are numbered 0-59, and that the 7th byte in the internal format is where the second is stored. We can also see that the month is represented by the 3rd byte, and the day by the 4th byte.

One would then logically expect the 5th byte to show us the hour. Glancing at the actual time of 3:00 PM, it seems curious that the value we expect to be the hour is 19 rather than 15.

The server where these queries are being run has a time zone of EDT. Next I ran the same queries on a server with a TZ of PDT, and though the time in the timestamp appeared as 3 hours earlier, the value stored in the 5th byte was still 19. Oracle is storing the hour in UTC, then using the TZ from the server to derive the actual time.
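If that reading is right, the 5th byte should always match the UTC hour; a quick check (a sketch using the standard SYS_EXTRACT_UTC function) is:

-- the 5th byte of dump(systimestamp) should equal the UTC hour
select extract(hour from sys_extract_utc(systimestamp)) utc_hour,
       dump(systimestamp) dump_t1
from dual;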

Playing with Time Zones

We can modify the local session time zone to find out how Oracle is calculating the times.

The first attempt is made on the remote client where scripts for this article are developed. The TZ will be set for Ethiopia and then the time checked at the Linux command line and in Oracle.

 

# date
Sat Mar 26 13:16:12 PDT 2016

# TZ='Africa/Addis_Ababa'; export TZ

# date
Sat Mar 26 23:16:17 EAT 2016

# sqlplus jkstill/XXX@p1
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
SQL- !date
Sat Mar 26 23:16:40 EAT 2016

SQL-  l
 1 select
 2 systimestamp t1,
 3 dump(systimestamp) dump_t1
 4* from dual
23:16:50 ora12c102rac01.jks.com - jkstill@js122a1 
SQL- /
T1                                  DUMP_T1
----------------------------------- ----------------------------------------------------------------------
26-MAR-16 04.16.56.254769 PM -04:00 Typ=188 Len=20: 224,7,3,26,20,16,56,0,104,119,47,15,252,0,5,0,0,0,0,0

Setting the TZ on the client clearly has no effect on the time returned from Oracle. Now let’s try while logged on to the database server.

$ date
Sat Mar 26 16:20:23 EDT 2016

$ TZ='Africa/Addis_Ababa'; export TZ

$ date
Sat Mar 26 23:20:38 EAT 2016

$ sqlplus / as sysdba
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL- l
 1 select
 2 systimestamp t1,
 3 dump(systimestamp) dump_t1
 4* from dual
SQL- /

T1                                  DUMP_T1
----------------------------------- ----------------------------------------------------------------------
26-MAR-16 11.22.48.473298 PM +03:00 Typ=188 Len=20: 224,7,3,26,20,22,48,0,80,244,53,28,3,0,5,0,0,0,0,0

 

This experiment has demonstrated two things for us:

  1. Oracle is storing the hour as UTC time
  2. Setting the TZ on the client does not have any effect on the calculations of the time.
What About the Year?

Given the location of the month, one would expect to find the year in the byte just before the month byte. There is not just one byte before the month, though, but two. You will recall that SYSTIMESTAMP has a different internal representation than the TIMESTAMP data type. Oracle uses both of these bytes to store the year.

Working with this timestamp from an earlier example, we can use the information in Oracle Support Note 69028.1 to see how this works.

 

T2                                     DUMP_T2
-------------------------------------- ----------------------------------------------------------------------
26-MAR-16 03.34.55.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,55,0,152,108,205,20,252,0,5,0,0,0,0,0

 

For the timestamp of March 26, 2016, the first two bytes of the timestamp are used to represent the year.

The Formula for AD dates is Byte 1 + ( Byte 2 * 256 ). Using this formula the year 2016 can be arrived at:

224 + ( 7 * 256) = 2016
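The arithmetic is easy to verify from SQL (a trivial sketch of the formula applied to the year bytes above):

-- byte 1 + (byte 2 * 256) for the year bytes 224 and 7 shown above; returns 2016
select 224 + (7 * 256) decoded_year from dual;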

For the TIMESTAMP data type, the format is somewhat different for the year; actually it works the same way it does for the DATE data type, in excess 100 notation.

 

SQL- l
1* select c1, dump(c1) dump_c1 from timestamp_test where rownum < 2
SQL- /

C1 DUMP_C1
------------------------------ ---------------------------------------------------
26-MAR-16 03.09.27.649491 PM Typ=180 Len=11: 120,116,3,26,16,10,28,38,182,114,56

 

The first two bytes use the same excess-100 notation as the DATE type: the century byte holds (120 - 100) * 100 = 2000, and the year byte holds 116 - 100 = 16, giving 2016.

Decode All Timestamp Components

Now let’s decode all of the data in a TIMESTAMP. First we need some TIMESTAMP test data.

Creating the test data

The following SQL will provide some test data for following experiments.

We may not use all of the columns or rows, but they are available.

 

drop table timestamp_tz_test purge;

create table timestamp_tz_test (
 id integer,
 c1 timestamp,
 c2 timestamp with time zone,
 c3 timestamp with local time zone
)
/

-- create 10 rows, each one second apart

begin
for i in 1..10
loop
 insert into timestamp_tz_test values(i,systimestamp,systimestamp,systimestamp );
 dbms_lock.sleep(1);
 null;
end loop;
commit;
end;
/

 

We already know that TIMESTAMP data can store fractional seconds to a billionth of a second.

Should you want to prove that to yourself, the following bit of SQL can be used to insert TIMESTAMP data into a table, with each row being 1E-9 seconds later than the previous row. This will be left as an exercise for the reader.

 

create table t2 as
select level id,
 to_timestamp('2016-03-29 14:25:42.5' || lpad(to_char(level),8,'0'),'yyyy-mm-dd hh24.mi.ssxff') c1
from dual
connect by level <= 1000
/

col dump_t1 format a70
col c1 format a35
col id format 99999999

select id, c1, substr(dump(c1),instr(dump(c1),',',-1,4)+1) dump_t1
from t2
order by id
/

 

Oracle uses the last 4 bytes of a timestamp to store the fractional seconds.

The least significant byte carries its face value, and each more significant byte is weighted by a further power of 256.

The following SQL will make this clearer. Don’t spend too much time at first trying to understand the SQL, as it will become clear once you see the results.

SQL to decode 1 row of TIMESTAMP data.

 

col id format 99
col t1 format a35
col dumpdata format a50
col tz_type format a10
col ts_component format a40
col label format a6
col real_value format a50

set linesize 200 trimspool on

alter session set nls_timestamp_format = 'yyyy-mm-dd hh24.mi.ssxff';
alter session set nls_timestamp_tz_format = 'yyyy-mm-dd hh24.mi.ssxff tzr';


with rawdata as (
 select c2 t1, dump(c2) dump_t1
 from timestamp_tz_test
 where id = 1
),
datedump as (
 select t1,
 substr(dump_t1,instr(dump_t1,' ',1,2)+1) dumpdata
 from rawdata
),
-- regex from http://nuijten.blogspot.com/2009/07/splitting-comma-delimited-string-regexp.html
datebits as (
 select level id, regexp_substr (dumpdata, '[^,]+', 1, rownum) ts_component
 from datedump
 connect by level <= length (regexp_replace (dumpdata, '[^,]+')) + 1
),
labeldata as (
 select 'TS,DU,CC,YY,MM,DD,HH,MI,SS,P1,P2,P3,P4' rawlabel from dual
),
labels as (
 select level-2 id, regexp_substr (rawlabel, '[^,]+', 1, rownum) label
 from labeldata
 connect by level <= length (regexp_replace (rawlabel, '[^,]+')) + 1
),
data as (
 select db.id, db.ts_component
 from datebits db
 union
 select 0, dumpdata
 from datedump dd
 union select -1, to_char(t1) from rawdata
)
select d.id, l.label, d.ts_component,
 case l.label
 when 'DU' then d.ts_component
 when 'CC' then 'Excess 100 - Real Value: ' || to_char(to_number((d.ts_component - 100)*100 ))
 when 'YY' then 'Excess 100 - Real Value: ' || to_char(to_number(d.ts_component - 100 ))
 when 'MM' then 'Real Value: ' || d.ts_component
 when 'DD' then 'Real Value: ' || d.ts_component
 when 'HH' then 'Excess 1 - Real Value: ' || to_char(to_number(d.ts_component)-1)
 when 'MI' then 'Excess 1 - Real Value: ' || to_char(to_number(d.ts_component)-1)
 when 'SS' then 'Excess 1 - Real Value: ' || to_char(to_number(d.ts_component)-1)
 when 'P1' then 'Fractional Second P1 : ' || to_char((to_number(d.ts_component) * POWER(256,3) ) / power(10,9))
 when 'P2' then 'Fractional Second P2 : ' || to_char((to_number(d.ts_component) * POWER(256,2) ) / power(10,9))
 when 'P3' then 'Fractional Second P3 : ' || to_char((to_number(d.ts_component) * 256 ) / power(10,9))
 when 'P4' then 'Fractional Second P4 : ' || to_char(to_number(d.ts_component) / power(10,9))
 end real_value
from data d
join labels l on l.id = d.id
order by 1
/

When the values for the Pn fractional second columns are added up, they equal the fractional-second value shown in the timestamp.

 

 ID LABEL  TS_COMPONENT                             REAL_VALUE
--- ------ ---------------------------------------- --------------------------------------------------
 -1 TS     2016-03-31 09.14.29.488265 -07:00
  0 DU     120,116,3,31,17,15,30,29,26,85,40,13,60  120,116,3,31,17,15,30,29,26,85,40,13,60
  1 CC     120                                      Excess 100 - Real Value: 2000
  2 YY     116                                      Excess 100 - Real Value: 16
  3 MM     3                                        Real Value: 3
  4 DD     31                                       Real Value: 31
  5 HH     17                                       Excess 1 - Real Value: 16
  6 MI     15                                       Excess 1 - Real Value: 14
  7 SS     30                                       Excess 1 - Real Value: 29
  8 P1     29                                       Fractional Second P1 : .486539264
  9 P2     26                                       Fractional Second P2 : .001703936
 10 P3     85                                       Fractional Second P3 : .00002176
 11 P4     40                                       Fractional Second P4 : .00000004

13 rows selected.
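The byte weighting can be double-checked with a quick query (a sketch using the four fractional-second bytes from the dump above):

-- 29*256^3 + 26*256^2 + 85*256 + 40 = 488265000 nanoseconds, i.e. .488265 seconds
select (29 * power(256,3) + 26 * power(256,2) + 85 * 256 + 40) / power(10,9) fractional_seconds
from dual;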

Time zones are recorded in an additional two bytes in the TIMESTAMP WITH TIME ZONE data type.

Decoding those two bytes is left as an exercise for the reader.

Timestamp Arithmetic

Now that we have had some fun exploring and understanding how Oracle stores TIMESTAMP data, it is time to see how calculations can be performed on timestamps.

Note: See this ISO 8601 Article to understand the notation being used in to_dsinterval().

Interval Day to Second

It is a common occurrence to add or subtract time to or from Oracle Dates.

How that is done with the Oracle DATE data type is fairly well known.

  • Add 1 Day
    • DATE + 1
  • Add 1 Hour
    • DATE + (1/24)
  • Add 1 Minute
    • DATE + ( 1 / 1440)
  • Add 1 Second
    • DATE + (1/86400)

Following is a brief refresher on that topic:

 

alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

select sysdate today, sysdate -1 yesterday from dual;

select sysdate now, sysdate - (30/86400) "30_Seconds_Ago" from dual;

select sysdate now, sysdate + ( 1/24 ) + ( 15/1440 ) + ( 42/86400) "1:15:42_Later" from dual;

SQL- @date-calc

Session altered.

TODAY YESTERDAY
------------------- -------------------
2016-03-30 13:39:06 2016-03-29 13:39:06
NOW 30_Seconds_Ago
------------------- -------------------
2016-03-30 13:39:06 2016-03-30 13:38:36

NOW 1:15:42_Later
------------------- -------------------
2016-03-30 13:39:06 2016-03-30 14:54:48

 

While this same method will work with timestamps, the results may not be what you expect. As noted earlier, Oracle will perform an implicit conversion to a DATE data type, resulting in truncation of some timestamp data. The next example makes it clear that implicit conversions have converted the TIMESTAMP to a DATE type.

alter session set nls_timestamp_format = 'yyyy-mm-dd hh24.mi.ssxff';
alter session set nls_date_format = 'DD-MON-YY';

select systimestamp today, systimestamp -1 yesterday from dual;

select systimestamp now, systimestamp - (30/86400) "30_Seconds_Ago" from dual;

select systimestamp now, systimestamp + ( 1/24 ) + ( 15/1440 ) + ( 42/86400) "1:15:42_Later" from dual;

SQL- @timestamp-calc-incorrect

TODAY                                                                       YESTERDAY
--------------------------------------------------------------------------- ---------
2016-03-31 11.35.29.591223 -04:00                                           30-MAR-16

NOW                                                                         30_Second
--------------------------------------------------------------------------- ---------
2016-03-31 11.35.29.592304 -04:00                                           31-MAR-16

NOW                                                                         1:15:42_L
--------------------------------------------------------------------------- ---------
2016-03-31 11.35.29.592996 -04:00                                           31-MAR-16

 

Oracle has supplied functions to properly perform calculations on timestamps. The previous example will work properly when to_dsinterval is used, as seen in the next example.

 

col c30 head '30_Seconds_Ago' format a38
col clater head '1:15:42_Later' format a38
col now format a35
col today format a35
col yesterday format a38

alter session set nls_timestamp_format = 'yyyy-mm-dd hh24.mi.ssxff';
alter session set nls_timestamp_tz_format = 'yyyy-mm-dd hh24.mi.ssxff tzr';


-- alternate methods to subtract 1 day
select systimestamp today, systimestamp - to_dsinterval('P1D') yesterday from dual;
select systimestamp today, systimestamp - to_dsinterval('1 00:00:00') yesterday from dual;

-- alternate methods to subtract 30 seconds
select systimestamp now, systimestamp - to_dsinterval('PT30S') c30 from dual;
select systimestamp now, systimestamp - to_dsinterval('0 00:00:30') c30 from dual;

-- alternate methods to add 1 hour, 15 minutes and 42 seconds
select systimestamp now, systimestamp + to_dsinterval('PT1H15M42S') clater from dual;
select systimestamp now, systimestamp + to_dsinterval('0 01:15:42') clater from dual;

TODAY                               YESTERDAY
----------------------------------- --------------------------------------
2016-03-30 18.10.41.613813 -04:00 2016-03-29 18.10.41.613813000 -04:00

TODAY                               YESTERDAY
----------------------------------- --------------------------------------
2016-03-30 18.10.41.614480 -04:00 2016-03-29 18.10.41.614480000 -04:00

NOW                                 30_Seconds_Ago
----------------------------------- --------------------------------------
2016-03-30 18.10.41.615267 -04:00 2016-03-30 18.10.11.615267000 -04:00

NOW                                 30_Seconds_Ago
----------------------------------- --------------------------------------
2016-03-30 18.10.41.615820 -04:00 2016-03-30 18.10.11.615820000 -04:00

NOW                                 1:15:42_Later
----------------------------------- --------------------------------------
2016-03-30 18.10.41.616538 -04:00 2016-03-30 19.26.23.616538000 -04:00

NOW                                 1:15:42_Later
----------------------------------- --------------------------------------
2016-03-30 18.10.41.617161 -04:00 2016-03-30 19.26.23.617161000 -04:00

 

Extract Values from Timestamps

The values for years, months, days, hours, minutes and seconds can all be extracted from a timestamp via the extract function. The following code demonstrates a few uses of this, along with examples of retrieving intervals from two dates.

The values in parentheses for the day() and year() intervals specify the numeric precision to be returned.

 

def nls_tf='yyyy-mm-dd hh24.mi.ssxff'

alter session set nls_timestamp_format = '&nls_tf';

col d1_day format 999999
col full_interval format a30
col year_month_interval format a10

with dates as (
   select
      to_timestamp_tz('2014-06-19 14:24:29.373872', '&nls_tf') d1
      , to_timestamp_tz('2016-03-31 09:42:16.8734921', '&nls_tf') d2
   from dual
)
select
   extract(day from d1) d1_day
   , ( d2 - d1) day(4) to second full_interval
   , ( d2 - d1) year(3) to month year_month_interval
   , extract( day from d2 - d1) days_diff
   , extract( hour from d2 - d1) hours_diff
   , extract( minute from d2 - d1) minutes_diff
   , extract( second from d2 - d1) seconds_diff
from dates
/


 D1_DAY FULL_INTERVAL                  YEAR_MONTH  DAYS_DIFF HOURS_DIFF MINUTES_DIFF SECONDS_DIFF
------- ------------------------------ ---------- ---------- ---------- ------------ ------------
     19 +0650 19:17:47.499620          +001-09           650         19           17   47.4996201

Building on that, the following example demonstrates how the interval value that represents the difference between dates d1 and d2 can be added back to d1 to yield a date with the same value as d2.

 

def nls_tf='yyyy-mm-dd hh24.mi.ssxff'

alter session set nls_timestamp_format = '&nls_tf';

col d1 format a30
col d2 format a30
col full_interval format a30
col calc_date format a30

with dates as (
   select
      to_timestamp('2014-06-19 14:24:29.373872', '&nls_tf') d1
      , to_timestamp('2016-03-31 09:42:16.873492', '&nls_tf') d2
   from dual
)
select
   d1,d2
   , ( d2 - d1) day(4) to second  full_interval
   , d1 + ( d2 - d1) day(4) to second calc_date
from dates
/


D1                             D2                             FULL_INTERVAL                  CALC_DATE
------------------------------ ------------------------------ ------------------------------ ------------------------------
2014-06-19 14.24.29.373872000  2016-03-31 09.42.16.873492000  +0650 19:17:47.499620          2016-03-31 09.42.16.873492000

 

PL/SQL Interval Data Types

 

The ISO 8601 Article previously mentioned will be useful for understanding how time durations may be specified with interval functions.

The following combination of SQL and PL/SQL is used to convert the difference between two timestamps into seconds. The code is incomplete in the sense that it assumes the largest component of the INTERVAL is hours. In the use case for this code that is true; however, there could also be days, months and years for larger values of the INTERVAL.

The following code is sampled from the script ash-waits-use.sql and demonstrates the use of the INTERVAL DAY TO SECOND data type in PL/SQL.

 

var v_wall_seconds number
col wall_seconds new_value wall_seconds noprint

declare
	ash_interval interval day to second;
begin

	select max(sample_time) - min(sample_time) into ash_interval from v$active_session_history;


	select
		max(sample_time) - min(sample_time) into ash_interval
	from v$active_session_history
	where sample_time 
	between
		decode('&&snap_begin_time',
			'BEGIN',
			to_timestamp('1900-01-01 00:01','yyyy-mm-dd hh24:mi'),
			to_timestamp('&&snap_begin_time','yyyy-mm-dd hh24:mi')
		)
		AND
		decode('&&snap_end_time',
			'END',
			to_timestamp('4000-12-31 23:59','yyyy-mm-dd hh24:mi'),
			to_timestamp('&&snap_end_time','yyyy-mm-dd hh24:mi')
		);

	:v_wall_seconds := 
		(extract(hour from ash_interval) * 3600 )
		+ (extract(minute from ash_interval) * 60 )
		+ extract(second from ash_interval) ;
end;
/


select round(:v_wall_seconds,0) wall_seconds from dual;

 

Similarly, the to_yminterval function is used to perform timestamp calculations with years and months.


col clater head 'LATER' format a38
col now format a35
col today format a35
col lastyear format a38
col nextyear format a38

alter session set nls_timestamp_format = 'yyyy-mm-dd hh24.mi.ssxff';
alter session set nls_timestamp_tz_format = 'yyyy-mm-dd hh24.mi.ssxff tzr';

-- alternate methods to add 1 year
select systimestamp today, systimestamp + to_yminterval('P1Y') nextyear from dual;
select systimestamp today, systimestamp + to_yminterval('01-00') nextyear from dual;


-- alternate methods to subtract 2 months
select systimestamp now, systimestamp - to_yminterval('P2M') lastyear from dual;
select systimestamp now, systimestamp - to_yminterval('00-02') lastyear from dual;

-- alternate methods to add 2 years, 4 months, 2 days, 1 hour, 15 minutes and 42 seconds
select systimestamp now, systimestamp + to_yminterval('P2Y4M')  + to_dsinterval('P2DT1H15M42S') clater from dual;
select systimestamp now, systimestamp + to_yminterval('02-04')  + to_dsinterval('2 01:15:42') clater from dual;

TODAY                               NEXTYEAR
----------------------------------- --------------------------------------
2016-03-31 09.06.22.060051 -07:00   2017-03-31 09.06.22.060051000 -07:00

TODAY                               NEXTYEAR
----------------------------------- --------------------------------------
2016-03-31 09.06.22.061786 -07:00   2017-03-31 09.06.22.061786000 -07:00

NOW                                 LASTYEAR
----------------------------------- --------------------------------------
2016-03-31 09.06.22.063641 -07:00   2016-01-31 09.06.22.063641000 -07:00

NOW                                 LASTYEAR
----------------------------------- --------------------------------------
2016-03-31 09.06.22.064974 -07:00   2016-01-31 09.06.22.064974000 -07:00

NOW                                 LATER
----------------------------------- --------------------------------------
2016-03-31 09.06.22.066259 -07:00   2018-08-02 10.22.04.066259000 -07:00

NOW                                 LATER
----------------------------------- --------------------------------------
2016-03-31 09.06.22.067600 -07:00   2018-08-02 10.22.04.067600000 -07:00

While date math with the DATE data type is somewhat arcane, it is not too complex once you understand how it works.

When Oracle introduced the TIMESTAMP data type, that all changed. Timestamps are much more robust than dates, and also more complex.

Timestamps bring a whole new dimension to working with dates and times; this brief introduction to working with timestamp data will help demystify the process of doing math with timestamps.

Categories: DBA Blogs

Veridata and boot.properties file

DBASolved - Mon, 2016-04-04 11:30

I’m spending some time building out an Oracle WebLogic/Veridata system at home, mostly for testing and general understanding purposes. One of the things I’ve noticed is that the WebLogic server requires a username and password before it will start. After providing them, the terminal window has to stay open, because closing it will kill the running WebLogic server. This leads to two interesting questions with simple resolutions.

1. How can I automatically pass the username and password to WebLogic?
2. How do I keep WebLogic running after closing my terminal window?

The answers to the questions are quite simple. Let’s take a look at the first question.

How can I automatically pass the username and password to WebLogic?

This is done by using a “boot.properties” file. This file needs to be placed in the security directory under the server that you want to log in automatically. In my case, I had to create the security directory and the boot.properties file under the AdminServer directory. The steps below are what I used:

$ cd /opt/app/oracle/product/12.2.0.1/middleware/user_projects/domains/base_domain/
$ cd ./servers
$ cd ./AdminServer
$ mkdir security
$ touch boot.properties
$ vi boot.properties

 

Once the boot.properties file is open for editing, simply put in the username and password you want the WebLogic server to use. In this case, I’m using a standard password setup.
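The file itself needs only two lines (a sketch; the values shown are placeholders for your own WebLogic admin credentials):

username=weblogic
password=YourAdminPassword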

After creating the boot.properties file, you can go back and start the WebLogic server; you should not be prompted for a userid and password during startup. Additionally, if you look at the boot.properties file afterwards, the username and password will have been encrypted.

This same approach can be used with the Veridata components of the configuration. You just have to remember to create a security directory under the Veridata server directory, as you did for the WebLogic server.

Now for the second question:

How do I keep the WebLogic server running after closing my terminal window?

With the boot.properties file in place, you can start the WebLogic/Veridata server(s) using “nohup”, which keeps the server processes running in the background after the terminal is closed.
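A minimal sketch (assuming the standard startWebLogic.sh script in the domain directory shown earlier):

$ cd /opt/app/oracle/product/12.2.0.1/middleware/user_projects/domains/base_domain
$ nohup ./startWebLogic.sh > weblogic_start.log 2>&1 &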

Using these two options, you can very easily automate the starting/restarting of the WebLogic/Veridata server(s).

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Golden Gate, WebLogic
Categories: DBA Blogs


Migrate a SQL Server environment with complex replication without reinitializing or rebuilding replication

Pythian Group - Mon, 2016-04-04 11:14

When you have a SQL Server environment with a very complex replication setup in place, and you need to migrate/move (without upgrading) some or all of the servers involved in the replication topology to new servers/Virtual Machines or to a new Data Center/Cloud, this blog post is for you!

Let’s assume you also have Transactional and/or Merge publications and subscriptions in place, and you need to move the publisher(s) and/or distributor(s) to a new environment. You also have one or more of the following restrictions:

  • You are not sure whether the schema at the subscribers is identical to the publisher (e.g.: different indexes, different columns, etc.)
  • You cannot afford the downtime to reinitialize the subscriber(s)
  • There are too many subscribers to reinitialize, and you cannot afford the downtime if anything goes wrong.

Here are the general steps for this migration:
Prior the migration date:

  • The new instance must have the same SQL Server version, edition, and patch level as the old instance. The Windows version and edition can be different, but you need to ensure the version of Windows supports the version of SQL Server.
  • The directory structure for the SQL Server files should be identical on the new server as on the old server, with the same permissions:
    • Same path for SQL Server binaries
    • Same path and database files names in both servers for system databases
    • Same directories where user database files and T-logs reside
    • Same path for the replication directories (when applies)
  • Copy over any instance-level objects (logins, linked servers, and jobs) to the new instance; leave jobs disabled if applicable, or stop SQL Server Agent on the new server

On migration date:

  • Disable any jobs, backups and maintenance that should run during the migration window on old server
  • Stop all database activity on old instance or disable logins
  • Restart old instance and verify there is no activity
  • Synchronize all replication agents that are related to the server being migrated
  • Stop and disable replication agents related to the server being migrated
  • Stop both instances
  • Copy over all system database files from old to new server
  • Copy over all user database files from old server to new one
    • Alternatively, back up all user databases on the old server before stopping the service and copy the backup files to the new server
  • Shutdown old server
  • Rename new server to the name of old server and change the IP of new server to old server’s IP
  • Start the new server
  • Verify that the name of the new instance matches the old server’s name and that it is local
  • If you backed up the user databases previously, you need to restore them to the same locations and file names as on the old server, WITH RECOVERY and KEEP_REPLICATION (see the sketch after this list)
  • Verify that all user databases are online and publications + subscribers are there
  • Start all replication agents related to the migrated server and verify replication is working properly
  • Verify that applications are able to connect to the new instance (no need to modify instance name as it is the same as before and same IP)
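For the restore step above, a minimal T-SQL sketch (the database name and backup path are placeholders; the data and log files must restore to the same paths as on the old server):

-- restore to the same location and file names, keeping replication settings
RESTORE DATABASE PublishedDB
FROM DISK = N'D:\Backups\PublishedDB.bak'
WITH RECOVERY, KEEP_REPLICATION;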

In any case, it is strongly recommended to test the migration prior to the real cutover, even if the test environment is not identical to Production, just to get a feel for it. Ensure that you include most of the replication scenarios you have in Production during your test phase.

The more scripts you have handy for the cutover date, the less downtime you may have.

It is extremely important to also have a good and tested rollback plan.

In future blog posts I will discuss more complex replication scenarios to be migrated, along with rollback plans.

If you would like to make suggestions for future blogs, please feel free to add a comment and I will try to include your request in future posts.

Categories: DBA Blogs

Why You Should Consider Moving Your Enterprise Application to the Oracle Cloud

Pythian Group - Mon, 2016-04-04 09:32

 

If you’ve decided to migrate your Oracle enterprise applications to the public cloud, it’s a good idea to consider Oracle Cloud alongside alternatives such as Amazon Web Services (AWS) and Microsoft Azure.

Oracle has made big strides in the cloud lately with platform-as-a-service (PaaS) offerings for its middleware and database software, culminating in the release of its first infrastructure-as-a-service (IaaS) offering in late 2015.

Oracle has a clear advantage over the competition when it comes to running its own applications in the cloud: it has full control over product licensing and can optimize its cloud platform for lift-and-shift migrations. This gives you a low-risk strategy for modernizing your IT portfolio.

 

What to expect from Oracle Cloud IaaS

Because Oracle’s IaaS offering is quite new, it has yet to match the flexibility and feature set of Azure and AWS. For example, enterprise VPN connectivity between cloud and on-premises infrastructure is still very much a work in progress. Unlike AWS, however, Oracle provides a free software appliance for accessing cloud storage on-premises. In addition to offering an hourly metered service, Oracle also provides unmetered compute capacity with a monthly subscription. Some customers prefer this option because it allows them to more easily control their spending through a predictable monthly fee rather than a pure pay-as-you-go model.

At the same time, Oracle Cloud IaaS has a limited selection of instance shapes, there is no SSD storage yet or guaranteed input/output performance levels, and transferring data is more challenging for large-volume migrations.

 

What to expect from Oracle Cloud PaaS

Oracle’s PaaS offerings are quickly becoming among the most comprehensive cloud-based services for Oracle Database. They include:

 

Oracle Database Schema Service

This is the entry-level unmetered offering, available starting at $175 a month for a 5GB database schema limit. Tenants share databases but are isolated in their own schemas. This means you have no control over database parameters, only the schema objects created. This service is currently available only with Oracle Database 11g Release 2 (i.e., it is not yet included in the latest release of Oracle Database 12c).

 

Oracle Exadata Cloud Service

This is a hosted service with monthly subscriptions starting at $70,000 for a quarter rack with 28 OCPUs enabled and 42TB of usable storage provisioned. You have full root OS access and SYSDBA database access, so you have total flexibility in managing your environment. However, this means Oracle manages only the bare minimum—the external networking and physical hardware—so you may end up expending the same effort as you would managing Exadata on-premises.

 

Oracle Database Virtual Image Service

This is a Linux VM with pre-installed Oracle Database software. The license is included in the rate. It’s available metered (priced per OCPU per hour of runtime) and unmetered (priced per OCPU allocated per month). As you’ll need to manage everything up from the VM level, including OS management and full DBA responsibilities, the metered service is a particularly good option for running production environments that require full control over the database deployment.

 

Oracle Database-as-a-Service (DBaaS)

This is an extension of Virtual Image Service and includes additional automation for database provisioning during service creation, backup, recovery, and patching. While you are still responsible for the complete management of the environment, the embedded automation and tooling can simplify some DBA tasks.

I should point out that, with the exception of Oracle Database Schema Service, these are not “true” PaaS offerings; they function more like IaaS-style services but with database software licenses included. But this is on the way, as Oracle recently announced plans for a fully managed DBaaS offering similar to the one available through AWS.

 

While Oracle’s cloud options are still quite new and require additional features for broad enterprise adoption, if this option sparks your interest, now is the time to take the first steps. If you want to learn more about the migration path to Oracle Cloud, check out our white paper, Migrating Oracle Databases to Cloud.


Categories: DBA Blogs

FBDA -- 3 : Support for TRUNCATEs

Hemant K Chitale - Mon, 2016-04-04 09:13
One of the points in the previous post was how ADD and DROP Column commands are supported in FBDA.

Let's look at support for TRUNCATEs.

(Just to prove that I am not using the Undo tablespace for the AS OF query, I drop the Undo tablespace) :

SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +DATA/NONCDB/PARAMETERFILE/spf
ile.267.896483727
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> create undo tablespace undotbs2;

Tablespace created.

SQL> alter system set undo_tablespace='UNDOTBS2';

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> drop tablespace undotbs1 including contents and datafiles;

Tablespace dropped.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS2
SQL>


I now TRUNCATE my test table and then run an AS OF query against it.

SQL> connect hemant/hemant
Connected.
SQL> select count(*) from test_fbda;

COUNT(*)
----------
1000

SQL> truncate table test_fbda;

Table truncated.

SQL> select count(*) from test_fbda;

COUNT(*)
----------
0

SQL> select count(*) from test_fbda as of timestamp (sysdate-1);

COUNT(*)
----------
1000

SQL>


The AS OF query was satisfied by the FBDA History table  (and not from Undo).

However, because of yesterday's test of ADD and DROP columns (see yesterday's post), the table structure doesn't match AS OF yesterday.  So, I work around the difference by naming the columns.

SQL> insert into test_fbda select * from test_fbda as of timestamp (sysdate-1);
insert into test_fbda select * from test_fbda as of timestamp (sysdate-1)
*
ERROR at line 1:
ORA-00913: too many values


SQL>
SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> insert into test_fbda
2 select id_column, data_column, date_inserted
3 from test_fbda as of timestamp (sysdate-1);

1000 rows created.

SQL> commit;

Commit complete.

SQL>


Thus, when my column list matches the table structure, I can use the data from the previous version of the table.

Note how I did NOT use the SYS_FBA_HIST_93250 table as I’d shown in my previous posts. The History table is NOT to be accessed directly, but I demonstrated queries against it to show the underlying architecture of FBDA.
.
.
.



Categories: DBA Blogs

Oracle Cloud Machine - Move the Cloud to your Data Center

While public cloud computing would make a significant difference in your business, handing the governance and control to someone else isn't always simply an option. The cloud is generally perceived...

We share our skills to maximize your revenue!
Categories: DBA Blogs


Partner Webcast – Oracle Mobile Strategy and Mobility Offerings Overview

Mobile is on the mind of every business, as Mobile Is the New First Screen. Enabling existing business applications on handhelds can be very challenging, but even building new business applications...

We share our skills to maximize your revenue!
Categories: DBA Blogs

FBDA -- 2 : FBDA Archive Table Structure

Hemant K Chitale - Sun, 2016-04-03 10:10
Following up on my earlier post, I look at the FBDA Archive Tables.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 3 23:26:27 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Sat Apr 02 2016 23:32:30 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA
SYS_FBA_DDL_COLMAP_93250
SYS_FBA_HIST_93250
SYS_FBA_TCRV_93250

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL>
SQL> desc sys_fba_hist_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> desc sys_fba_ddl_colmap_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
COLUMN_NAME VARCHAR2(255)
TYPE VARCHAR2(255)
HISTORICAL_COLUMN_NAME VARCHAR2(255)

SQL> desc sys_fba_tcrv_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OP VARCHAR2(1)

SQL>


The HIST table is the History table for my active table.  It adds columns that track the Rowid, the Start SCN and End SCN for a range of rows copied into the History Table, the Transaction Identifier, the Operation, and then the actual columns of the active table.
The DDL_COLMAP table seems to track Column Mappings.  See the example below.
The TCRV table seems to be tracking Transactions?

Let's try some DDL to ADD and DROP columns to the active table.

SQL> alter table test_fbda add (new_col_1 varchar2(5));

Table altered.

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
NEW_COL_1 VARCHAR2(5)

SQL> desc sys_fba_93250
ERROR:
ORA-04043: object sys_fba_93250 does not exist


SQL> desc sys_fba_hist_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
NEW_COL_1 VARCHAR2(5)

SQL>
SQL> select * from sys_fba_ddl_colmap_93250
2 /

STARTSCN ENDSCN XID O
---------- ---------- ---------------- -
COLUMN_NAME
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
HISTORICAL_COLUMN_NAME
--------------------------------------------------------------------------------
1697151
ID_COLUMN
NUMBER
ID_COLUMN

1697151
DATA_COLUMN
VARCHAR2(15)
DATA_COLUMN

1697151
DATE_INSERTED
DATE
DATE_INSERTED

1728713
NEW_COL_1
VARCHAR2(5)
NEW_COL_1


SQL>


The new column added to the active table is now also reflected in the History Table.  The DDL_COLMAP table shows the effective start of each column (notice the STARTSCN mapped to each COLUMN_NAME).

Let's set some values in new column and see if they appear in the History Table.

SQL> update test_fbda set new_col_1 = 'New'
2 where id_column < 6;

5 rows updated.

SQL> commit;

Commit complete.

SQL> select id_column, new_col_1, scn_to_timestamp(startscn), scn_to_timestamp(endscn)
2 from sys_fba_hist_93250
3 where id_column < 6
4 order by 1,3;

ID_COLUMN NEW_C
---------- -----
SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
1
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

2
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

3
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

4
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

5
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM


SQL>


The rows copied into the History Table are the *prior* image rows (copied from the Undo Area).  The STARTSCN and ENDSCN are of *yesterday* (02-April).

Let me DROP the new column.

SQL> alter table test_fbda drop (new_col_1);

Table altered.

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> desc sys_fba_hist_93250;
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
D_1729869_NEW_COL_1 VARCHAR2(5)

SQL>
SQL> select * from sys_fba_ddl_colmap_93250;

STARTSCN ENDSCN XID O
---------- ---------- ---------------- -
COLUMN_NAME
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
HISTORICAL_COLUMN_NAME
--------------------------------------------------------------------------------
1697151
ID_COLUMN
NUMBER
ID_COLUMN

1697151
DATA_COLUMN
VARCHAR2(15)
DATA_COLUMN

1697151
DATE_INSERTED
DATE
DATE_INSERTED

1728713 1729869
D_1729869_NEW_COL_1
VARCHAR2(5)
NEW_COL_1


SQL>


The dropped column is no longer in the active table and has been renamed in the History table.  (The data in the column has to be preserved, but the column is renamed.)  Notice how the DDL_COLMAP table now shows an ENDSCN for this column, with the new (renamed) column name as it appears in the History table.  The new column name seems to include an SCN (the ENDSCN?).

Let's confirm what data is now present in the History table  (remember : Our earlier query showed the pre-update image for this column).

SQL> select id_column, D_1729869_NEW_COL_1, scn_to_timestamp(startscn), scn_to_timestamp(endscn)
2 from sys_fba_hist_93250
3 where (id_column < 6 OR D_1729869_NEW_COL_1 is not null)
4 order by 1,3;

ID_COLUMN D_172
---------- -----
SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
1
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

1
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

1 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

2
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

2
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

2 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

3
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

3
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

3 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

4
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

4
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

4 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

5
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

5
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

5 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM


15 rows selected.

SQL>
SQL> select scn_to_timestamp(1729869) from dual;

SCN_TO_TIMESTAMP(1729869)
---------------------------------------------------------------------------
03-APR-16 11.45.27.000000000 PM

SQL>


Why do we now have 3 rows in the History table for each row in the Active Table?  Take ID_COLUMN=1.  The first row -- for the time range 02-Apr 11:32pm to 02-Apr 11:46pm -- is as of yesterday, the same row we saw in the History table after the update in the active table.  The second row -- for the time range 02-Apr 11:46pm to 03-Apr 11:41pm -- preserves the row to support AS OF queries up to the time of the UPDATE.  The third row -- for the time range 03-Apr 11:41pm to 03-Apr 11:45pm -- presents the UPDATEd value ('New') in the column up to the last transaction updating it before the column was dropped at 03-Apr 11:45:27pm.

Thus, Oracle maintains multiple versions of the same row, including versions for DROPped columns, in the History Table.

Note :  The History Table is not supposed to be directly queried in the manner I have shown here.  The proper query against the active table would be an AS OF query which is automatically rewritten / redirected to "hit" the History table when necessary.
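
For instance, the supported access path is simply an AS OF query against the active table itself; a minimal sketch (assuming the target time still falls within the 3-day retention):

SQL> select id_column, data_column
  2  from test_fbda as of timestamp (systimestamp - interval '1' day)
  3  where id_column < 6;

Oracle decides internally whether Undo or the History table satisfies such a query; the SYS_FBA_HIST_93250 name never needs to appear in application SQL.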

What about the third table -- the TCRV table?

SQL> l
1 select scn_to_timestamp(startscn), op , count(*)
2 from sys_fba_tcrv_93250
3 group by scn_to_timestamp(startscn), op
4* order by 2,1
SQL> /

SCN_TO_TIMESTAMP(STARTSCN) O
--------------------------------------------------------------------------- -
COUNT(*)
----------
03-APR-16 11.45.24.000000000 PM U
1000


SQL>
SQL> select count(distinct(rid)) from sys_fba_tcrv_93250;

COUNT(DISTINCT(RID))
--------------------
1000

SQL>



It shows 1000 rows as having been UPDATEd.  (Does OP='U' mean 'UPDATE'?)  We do know that ADD and DROP column are changes to the table.  But are they UPDATEs?

Next post : Continuing with DML operations (more rows, some updates).  We'll see if we can decipher anything from the TCRV table as well.  (Update : the next post changed to showing support for TRUNCATEs.)
.
.
.



Categories: DBA Blogs

FBDA -- 1 : Testing Flashback Data Archive in 12c (NonCDB)

Hemant K Chitale - Sat, 2016-04-02 09:53
Note : At the bottom of this post, you'll find links to more (subsequent) posts on this topic.

Some testing I'd done with Flashback Data Archive (henceforth called FBDA in this and subsequent posts, if any) in 11.2.0.4 left me with uncertainty about the automatic purging of data beyond the Retention Period.  I might return to testing 11.2.0.4, but here I shall begin testing in 12.1.0.2  (NonCDB).

Setting up FBDA :

[oracle@ora12102 ~]$ sqlplus system/oracle

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 2 23:23:53 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Sat Apr 02 2016 23:20:47 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create tablespace fbda ;

Tablespace created.

SQL> create flashback archive fbda tablespace fbda retention 3 day;

Flashback archive created.

SQL> create tablespace hemant;

Tablespace created.

SQL> create user hemant identified by hemant
2 default tablespace hemant;

User created.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant create session to hemant;

Grant succeeded.

SQL> alter user hemant quota unlimited on hemant;

User altered.

SQL> alter user hemant quota unlimited on fbda;

User altered.

SQL> grant flashback archive administer to hemant;

Grant succeeded.

SQL> grant flashback archive on fbda to hemant;

Grant succeeded.

SQL>
SQL> connect / as sysdba
Connected.
SQL> grant execute on dbms_flashback_archive to hemant;

Grant succeeded.

SQL>
SQL> connect hemant/hemant
Connected.
SQL> create table test_fbda (id_column number, data_column varchar2(15), date_inserted date) tablespace hemant;

Table created.

SQL> alter table test_fbda flashback archive fbda;

Table altered.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL>


Note the Flashback Archive history table corresponding to TEST_FBDA doesn't get created immediately.

SQL> connect hemant/hemant
Connected.
SQL> insert into test_fbda
2 select rownum , to_char(rownum), trunc(sysdate)
3 from dual connect by level < 1001;

1000 rows created.

SQL> commit;

Commit complete.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL> select count(*) from test_fbda;

COUNT(*)
----------
1000

SQL> select flashback_archive_name, retention_in_days, status
2 from user_flashback_archive;

FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
RETENTION_IN_DAYS STATUS
----------------- -------
FBDA
3


SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables;

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA
FBDA
SYS_FBA_HIST_93250 ENABLED


SQL>
SQL> !sleep 300

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA
SYS_FBA_DDL_COLMAP_93250
SYS_FBA_HIST_93250
SYS_FBA_TCRV_93250

SQL>
SQL> select object_id
2 from user_objects
3 where object_name = 'TEST_FBDA'
4 and object_type = 'TABLE'
5
SQL> /

OBJECT_ID
----------
93250

SQL>


So, it took some time for the flashback archive history table (identified on the basis of the OBJECT_ID) to appear.  The background fbda process seems to run (wake up) every 5 minutes, although it may wake up more frequently if there is more activity in the database.

SQL> select trunc(date_inserted), count(*)
2 from test_fbda
3 group by trunc(date_inserted)
4 /

TRUNC(DAT COUNT(*)
--------- ----------
02-APR-16 1000

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

no rows selected

SQL> update test_fbda
2 set data_column = data_column
3 where rownum < 451;

450 rows updated.

SQL> commit;

Commit complete.

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

no rows selected

SQL>
SQL> !sleep 180

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

TRUNC(DAT COUNT(*)
--------- ----------
02-APR-16 450

SQL>
SQL> select scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted
4 order by 1;

SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
DATE_INSE COUNT(*)
--------- ----------
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM
02-APR-16 450


SQL>


Notice that not all 1000 rows got copied to the FBDA History table.  Only the 450 rows that I updated were copied in.  They are tracked by SCN and Timestamp.  (The "DATE_INSERTED" column is my own date column; Oracle wouldn't be using that column to track DML dates for rows, as the values in that column are controlled by me -- the application or developer -- not Oracle.)

Note :  The History Table is not supposed to be directly queried in the manner I have shown here.

Tomorrow :  More Rows, and some DDLs as well.

Post 2 here.  It covers some of the architectural components and support for ADD / DROP column DDL.

Post 3 here.  It shows support for TRUNCATEs.

Post 4 here.  On Partitions and Indexes.

Post 5 here.  On (Auto)Purging.

Post 6 here.  On Bug Notes
.
.
.


Categories: DBA Blogs

sql for first day of month and last day of month

Learn DB Concepts with me... - Fri, 2016-04-01 19:00

select SYSDATE ,
last_day(sysdate) as LAST_DATE_CURR_MNTH,
ADD_MONTHS(last_day(sysdate),-1) as PREVIOUS_MON_LAST_DATE,
last_day(sysdate)+1 as NEXT_MON_FIRST_DATE,
ADD_MONTHS(last_day(sysdate),+1) as NEXT_MON_LAST_DATE,
ADD_MONTHS(last_day(sysdate),+5) as LAST_DATE_OF_5TH_MON,
ADD_MONTHS(last_day(sysdate),+5) +1 as FIRST_DATE_IN_6TH_MON_AFTR_NOW  
from dual;

"SYSDATE"    "LAST_DATE_CURR_MNTH"    "PREVIOUS_MON_LAST_DATE"    "NEXT_MON_FIRST_DATE"    "NEXT_MON_LAST_DATE"    "LAST_DATE_OF_5TH_MON"   
-----------  ---------------------   ------------------------    ---------------------   --------------------    -----------------------
"FIRST_DATE_IN_6TH_MON_AFTR_NOW"
-----------------------

01-APR-16        30-APR-16                31-MAR-16                    01-MAY-16                31-MAY-16            30-SEP-16   
-----------------------
01-OCT-16
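
The query above covers the last-day variants; for the first day of a month, TRUNC with the 'MM' format is the usual complement (a small sketch in the same style):

select SYSDATE,
trunc(sysdate,'MM') as FIRST_DATE_CURR_MNTH,
trunc(add_months(sysdate,-1),'MM') as PREV_MON_FIRST_DATE
from dual;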
Categories: DBA Blogs

Best practice for setting up MySQL replication filters

Pythian Group - Fri, 2016-04-01 13:23

It is not uncommon to need to filter out some DBs or tables when setting up replication. It is important to understand how MySQL evaluates and processes the replication filtering rules, to avoid conflicts or confusion when setting them up. The purpose of this blog is to illustrate the rules and provide some suggestions for best practice.

MySQL provides 3 levels of filters for setting up replication: binary log, DB and table. The binlog filters apply on the master and control how changes are logged. Since MySQL replication is based on the binlog, they are the first-level filters and have the highest priority. The DB-level and Table-level filters apply on the slaves; since each table belongs to a schema, the DB-level filters have higher priority than the Table-level ones. Within the Table-level filters, MySQL evaluates the options in the order: --replicate-do-table, --replicate-ignore-table, --replicate-wild-do-table, --replicate-wild-ignore-table.
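
To make the levels concrete, here is a minimal my.cnf sketch; the schema and table names are hypothetical, purely for illustration:

# master my.cnf -- binlog-level filter, evaluated first (highest priority);
# per suggestion I) below, this is usually best left unset
#binlog-do-db = m_test

# slave my.cnf -- DB-level filter, checked before any table-level rule
replicate-do-db = m_test

# slave my.cnf -- table-level filters, evaluated in exactly this order
replicate-do-table          = m_test.t1
replicate-ignore-table      = m_test.t_skip
replicate-wild-do-table     = m_test.t%
replicate-wild-ignore-table = m_test.tmp%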

Based on that, we have the following suggestions for setting up MySQL replication filters as best practice:

I) Do not set up any binlog-level filters unless you really need to and can afford losing the chance of having an extra full copy of data changes for the master.

II) In DB-level filters, use either one or none of the two options: --replicate-do-db or --replicate-ignore-db. Never use both at the same time.

III) While using binlog_format='statement' OR 'mixed' (in mixed mode, if a transaction is deterministic then it will be stored in statement format) with --replicate-do-db or --replicate-ignore-db set on slaves, make sure you never make changes on tables across the default database on the master; otherwise you might lose the changes on the slave due to the default database not matching.

IV) In Table-level filters, use only one of the 4 options, or use the following combination of two: --replicate-ignore-table and --replicate-wild-do-table, to avoid conflicts and confusion.

For MariaDB replication filters within a Galera cluster, caution is needed. As a general rule, except for InnoDB DML updates, the following replication filters are not honored in a Galera cluster: binlog-do-db, binlog-ignore-db, replicate-wild-do-db, replicate-wild-ignore-db. However, the replicate-do-db and replicate-ignore-db filters are honored for DDL and DML for both the InnoDB & MyISAM engines, and as they might create discrepancies, replication may abort (see MDEV-421, MDEV-6229, and https://mariadb.com/kb/en/mariadb/mariadb-galera-cluster-known-limitations/). For slaves replicating from the cluster, the rules are similar to the normal replication settings above.

Here are the details/reasons:

1) Binlog-level filters

A) How MySQL processes the Binlog-level filters

There are 2 options for setting binlog filters on the master: --binlog-do-db and --binlog-ignore-db. MySQL will check --binlog-do-db first; if it is set, MySQL will apply it and ignore --binlog-ignore-db. If --binlog-do-db is NOT set, then MySQL will check --binlog-ignore-db. If both of them are empty, it will log changes for all DBs.

See the examples below. In scenario 1) no binlog-level filters are set, so all changes were logged. In scenario 2) --binlog-do-db and --binlog-ignore-db are both set to m_test; changes on the DB m_test were logged and changes on the DB test were NOT logged. In scenario 3) only --binlog-ignore-db is set to m_test, so changes on the DB m_test were NOT logged and changes on the DB test were logged.

scenario 1) --binlog-do-db and --binlog-ignore-db are NOT set:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000003 |      120 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> show binlog events in "vm-01-bin.000003" from 120;
Empty set (0.00 sec)

mysql> insert into t1(id,insert_time) values(10,now());
Query OK, 1 row affected (0.05 sec)

mysql> show binlog events in "vm-01-bin.000003" from 120;
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                           |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
| vm-01-bin.000003 | 120 | Query      |         1 |         211 | BEGIN                                                          |
| vm-01-bin.000003 | 211 | Query      |         1 |         344 | use `m_test`; insert into t1(id,insert_time) values(10,now())  |
| vm-01-bin.000003 | 344 | Xid        |         1 |         375 | COMMIT /* xid=17 */                                            |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
3 rows in set (0.00 sec)

scenario 2) --binlog-do-db=m_test and --binlog-ignore-db=m_test:

-- insert into tables of DB m_test was logged
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000004 |      656 | m_test       | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> use m_test

mysql> insert into t1(insert_time) values(now());
Query OK, 1 row affected (0.02 sec)

mysql> show binlog events in "vm-01-bin.000004" from 656;
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                     |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------+
| vm-01-bin.000004 | 656 | Query      |         1 |         747 | BEGIN                                                    |
| vm-01-bin.000004 | 747 | Intvar     |         1 |         779 | INSERT_ID=13                                             |
| vm-01-bin.000004 | 779 | Query      |         1 |         906 | use `m_test`; insert into t1(insert_time) values(now())  |
| vm-01-bin.000004 | 906 | Xid        |         1 |         937 | COMMIT /* xid=26 */                                      |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------+
4 rows in set (0.00 sec)

-- insert into tables of DB test was NOT logged
mysql> use test;

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000004 |      937 | m_test       | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+

mysql> insert into t1(`a`) values('ab');
Query OK, 1 row affected (0.03 sec)

mysql> show binlog events in "vm-01-bin.000004" from 937;
Empty set (0.00 sec)

scenario 3) --binlog-do-db=null, --binlog-ignore-db=m_test:

mysql> use m_test
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000005 |      120 |              | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+
mysql> insert into t1(insert_time) values(now());
Query OK, 1 row affected (0.01 sec)

mysql> show binlog events in "vm-01-bin.000005" from 120;
Empty set (0.00 sec)

mysql> use test
mysql> insert into t1(`a`) values('ba');
Query OK, 1 row affected (0.03 sec)

mysql> show binlog events in "vm-01-bin.000005" from 120;
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                         |
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
| vm-01-bin.000005 | 120 | Query      |         1 |         199 | BEGIN                                        |
| vm-01-bin.000005 | 199 | Query      |         1 |         305 | use `test`; insert into t1(`a`) values('ba') |
| vm-01-bin.000005 | 305 | Xid        |         1 |         336 | COMMIT /* xid=22 */                          |
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
3 rows in set (0.00 sec)

 

B) Best practice for setting up the Binlog-level filters

So, for the Binlog-level filter, use either one (and ONLY one) or none of the 2 options: --binlog-do-db to make MySQL log changes only for the DBs in the list, OR --binlog-ignore-db to make MySQL log changes only for the DBs NOT in the list. Or leave both of them empty to log changes for all the DBs.

However, we usually recommend NOT setting up any binlog-level filters. The reason is that logging changes for all DBs and setting up filters only on the slaves achieves the same purpose, while leaving us an extra full copy of data changes for the master, in case we need it for recovery.

 

2) DB-level filters

A) How MySQL processes the DB-level filters

There are 2 options for setting DB-level filters: --replicate-do-db or --replicate-ignore-db. MySQL processes these two filters in a similar way to the Binlog-level filters; the difference is that they apply ONLY on the slaves, and so affect how the slaves replicate from their master. MySQL will check --replicate-do-db first; if it is set, it will replicate the DBs in the list and ignore --replicate-ignore-db. If --replicate-do-db is NOT set, then MySQL will check --replicate-ignore-db and replicate all the DBs except the ones in that list. If both of them are empty, it will replicate all the DBs. You can find the decision process in the chart at http://dev.mysql.com/doc/refman/5.7/en/replication-rules-db-options.html

There is a trick with DB-level filters, though, if binlog_format is set to 'statement' or 'mixed'. (binlog_format=mixed applies here as well, because in mixed-mode replication a deterministic transaction is resolved to statement format, which is equivalent to statement mode.) Since "with statement-based replication, the default database is checked for a match" (http://dev.mysql.com/doc/refman/5.7/en/replication-rules-db-options.html), if you set up --replicate-do-db and you update a table outside the default database on the master, the update statement will not be replicated when the default database you are running the command from is not in --replicate-do-db. For example, say there are 2 DBs on the master, you set binlog_format='statement' OR 'mixed', and you set --replicate-do-db=DB1 on the slave. If you execute: use DB2; update DB1.t1 ...; this update will not be executed on the slave. To make the update statement replicate to the slave, you need to do: use DB1; update t1 ...
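
In SQL terms the gotcha looks like this (a sketch reusing the placeholder names DB1, DB2 and t1 from the paragraph above; the column name and values are made up):

-- NOT replicated: the slave filter is --replicate-do-db=DB1, but the
-- default database at execution time is DB2
use DB2;
update DB1.t1 set data_column = 'x' where id = 1;

-- replicated: the default database now matches the filter
use DB1;
update t1 set data_column = 'x' where id = 1;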

For example: with binlog_format=statement or binlog_format=mixed, we insert into m_test.t1 in two ways -- one using the default DB m_test, the other using the default DB test -- and the changes are all logged on the master. But on the slave, after it caught up, only the insert run with default DB m_test was replicated; the insert run with default DB test was NOT. As shown below:

Scenario 1) binlog_format=statement

In master: insert into m_test.t1 in two ways -- one using the default DB m_test, the other using the default DB test. The changes are all logged:

mysql> use m_test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> delete from t1;
Query OK, 16 rows affected (0.02 sec)

mysql> select * from m_test.t1;
Empty set (0.00 sec)

mysql> use m_test
Database changed
mysql> insert into m_test.t1(insert_time) values(now());
Query OK, 1 row affected (0.04 sec)

mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> insert into m_test.t1(insert_time) values(now());
Query OK, 1 row affected (0.03 sec)

mysql> show binlog events in "vm-01-bin.000006" from 654;
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
| Log_name         | Pos  | Event_type | Server_id | End_log_pos | Info                                                           |
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
| vm-01-bin.000006 |  654 | Xid        |         1 |         685 | COMMIT /* xid=39 */                                            |
| vm-01-bin.000006 |  685 | Query      |         1 |         768 | BEGIN                                                          |
| vm-01-bin.000006 |  768 | Query      |         1 |         860 | use `m_test`; delete from t1                                   |
| vm-01-bin.000006 |  860 | Xid        |         1 |         891 | COMMIT /* xid=48 */                                            |
| vm-01-bin.000006 |  891 | Query      |         1 |         982 | BEGIN                                                          |
| vm-01-bin.000006 |  982 | Intvar     |         1 |        1014 | INSERT_ID=17                                                   |
| vm-01-bin.000006 | 1014 | Query      |         1 |        1148 | use `m_test`; insert into m_test.t1(insert_time) values(now()) |
| vm-01-bin.000006 | 1148 | Xid        |         1 |        1179 | COMMIT /* xid=52 */                                            |
| vm-01-bin.000006 | 1179 | Query      |         1 |        1268 | BEGIN                                                          |
| vm-01-bin.000006 | 1268 | Intvar     |         1 |        1300 | INSERT_ID=18                                                   |
| vm-01-bin.000006 | 1300 | Query      |         1 |        1432 | use `test`; insert into m_test.t1(insert_time) values(now())   |
| vm-01-bin.000006 | 1432 | Xid        |         1 |        1463 | COMMIT /* xid=60 */                                            |
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
12 rows in set (0.00 sec)

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 18 | 2016-03-20 15:00:01 |
+----+---------------------+
2 rows in set (0.00 sec)

 

In slave: after it caught up, only the first insert (default DB m_test) was replicated; the insert with default DB test was NOT replicated:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.2.6
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: vm-01-bin.000006
          Read_Master_Log_Pos: 1463
               Relay_Log_File: ewang-vm-03-relay-bin.000017
                Relay_Log_Pos: 1626
        Relay_Master_Log_File: vm-01-bin.000006
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: m_test
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1463
              Relay_Log_Space: 1805
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: a22b3fb2-5e70-11e5-b55a-0800279d00c5
             Master_Info_File: /mysql/data/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
1 row in set (0.00 sec)

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
+----+---------------------+
1 row in set (0.00 sec)

 

Scenario 2) binlog_format=mixed

In master:

mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | MIXED |
+---------------+-------+
1 row in set (0.00 sec)

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000007 |      120 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> use m_test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> insert into m_test.t1(insert_time) values(now());
Query OK, 1 row affected (0.04 sec)

mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> insert into m_test.t1(insert_time) values(now());
Query OK, 1 row affected (0.04 sec)

mysql> show binlog events in "vm-01-bin.000007" from 120;
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                           |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
| vm-01-bin.000007 | 120 | Query      |         1 |         211 | BEGIN                                                          |
| vm-01-bin.000007 | 211 | Intvar     |         1 |         243 | INSERT_ID=19                                                   |
| vm-01-bin.000007 | 243 | Query      |         1 |         377 | use `m_test`; insert into m_test.t1(insert_time) values(now()) |
| vm-01-bin.000007 | 377 | Xid        |         1 |         408 | COMMIT /* xid=45 */                                            |
| vm-01-bin.000007 | 408 | Query      |         1 |         497 | BEGIN                                                          |
| vm-01-bin.000007 | 497 | Intvar     |         1 |         529 | INSERT_ID=20                                                   |
| vm-01-bin.000007 | 529 | Query      |         1 |         661 | use `test`; insert into m_test.t1(insert_time) values(now())   |
| vm-01-bin.000007 | 661 | Xid        |         1 |         692 | COMMIT /* xid=53 */                                            |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
8 rows in set (0.00 sec)

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 18 | 2016-03-20 15:00:01 |
| 19 | 2016-03-20 15:09:14 |
| 20 | 2016-03-20 15:09:25 |
+----+---------------------+
4 rows in set (0.00 sec)

 

In slave:

mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | MIXED |
+---------------+-------+
1 row in set (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.2.6
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: vm-01-bin.000007
          Read_Master_Log_Pos: 692
               Relay_Log_File: ewang-vm-03-relay-bin.000023
                Relay_Log_Pos: 855
        Relay_Master_Log_File: vm-01-bin.000007
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: m_test
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 692
              Relay_Log_Space: 1034
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: a22b3fb2-5e70-11e5-b55a-0800279d00c5
             Master_Info_File: /mysql/data/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
1 row in set (0.00 sec)

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 19 | 2016-03-20 15:09:14 |
+----+---------------------+
2 rows in set (0.00 sec)

 

B) Best practice for setting up the DB-level filters

Use either one or none of the two options: --replicate-do-db or --replicate-ignore-db. Never use both at the same time.

If you use binlog_format='statement' OR 'mixed' and set up --replicate-do-db or --replicate-ignore-db on slaves, make sure you never make changes on tables across the default database; otherwise data discrepancies are to be expected on the slaves.

 

3) Table-level filters

There are 4 options for setting Table-level filters: --replicate-do-table, --replicate-ignore-table, --replicate-wild-do-table and --replicate-wild-ignore-table. MySQL evaluates the options in that order. You can find the decision process in the chart at http://dev.mysql.com/doc/refman/5.6/en/replication-rules-table-options.html

 

The above chart shows us that MySQL will first check --replicate-do-table; the tables listed here will be replicated and so won't be ignored by the following options like --replicate-ignore-table or --replicate-wild-ignore-table. Then MySQL will check --replicate-ignore-table; the tables listed here will be ignored even if they show up in the following option --replicate-wild-do-table. The lowest priority is --replicate-wild-ignore-table.

B) Best practice for setting up the Table-level filters

Due to the priorities of the 4 Table-level options, to avoid confusion and conflicts we suggest using only one of the 4 options, or using the following two options together: --replicate-ignore-table and --replicate-wild-do-table, so that it is clear that the tables in --replicate-ignore-table will be ignored and the tables matching --replicate-wild-do-table will be replicated. A sketch of that combination follows.
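
A minimal my.cnf sketch of the recommended combination (the schema and table names are hypothetical):

# slave my.cnf -- everything in m_test replicates,
# except the explicitly ignored table
replicate-ignore-table  = m_test.t_scratch
replicate-wild-do-table = m_test.%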

 

Categories: DBA Blogs

Deploying your Oracle MAF Apps on Windows Platform

As you may already know, Oracle Mobile Application Framework (MAF) 2.3 has been released, and one of its notable features is support for the Universal Windows Platform (UWP). This means that starting...

We share our skills to maximize your revenue!
Categories: DBA Blogs

What Are Your Options For Migrating Enterprise Applications to the Cloud?

Pythian Group - Fri, 2016-04-01 08:16

Migrating your enterprise applications from on-premises infrastructure to the public cloud is attractive for a number of reasons. It eliminates the costs and complexities of provisioning hardware and managing servers, storage devices, and network infrastructure; it gives you more compute capacity per dollar without upfront capital investment; and you gain opportunities for innovation through easier access to new technologies, such as advanced analytical capabilities.

So how do you get there?

You have a few options. At one end of the spectrum, you could simply wait and rationalize, making continuous incremental changes to gain efficiencies. This is obviously a “slow burn” approach. In the middle is a “lift-and-shift” from your current environment into the public cloud. And at the far extreme, you could plunge right in and re-architect your applications—a costly and probably highly complex task.

 

In fact, a true migration “strategy” will involve elements of each of these. For example, you could perform short-term optimizations and migrations on a subset of applications that are ready for the cloud, while transforming the rest of your application stack over the longer term.

 

What to expect from the major public cloud platforms

There are three leading public cloud platforms: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). As Google doesn’t seem to be driving customers to lift-and-shift their applications to GCP, I’m going to focus on AWS and Azure as potential cloud destinations and, for specificity, take Oracle enterprise databases as the use case.

 

Amazon Web Services

You have two options for migrating Oracle databases to the AWS cloud: infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS).

 

Deploying Oracle applications in AWS IaaS is much like deploying them on your in-house infrastructure. You don’t get flexible licensing options, but you do have the ability to easily allocate more or less capacity as needed for CPU, memory, and storage. However, because AWS IaaS is virtualized infrastructure, you may experience slower performance due to suboptimal CPU core allocation or processor caches. You’ll also have less flexibility with instance sizes, network topology, storage performance tiers, and the like.

 

AWS Relational Database Service (RDS) for Oracle is a managed PaaS offering where, in addition to giving you the benefits of IaaS, Amazon takes on major DBA and system administrator tasks including provisioning, upgrades, backups, and multi-availability zone replication. This significantly simplifies your operations—but also results in less control over areas such as configuration, patching, and maintenance windows. AWS RDS for Oracle can also be used with a pay-as-you-go licensing model included in the hourly rate.

 

Microsoft Azure

Azure does not have a managed offering for Oracle databases, so the only way to run Oracle Database on Azure is through its IaaS platform. The benefits are very similar to AWS IaaS, but Azure offers additional licensing options (with Windows-based license-included images) and its instances are billed by the minute rather than by the hour. What’s important to keep in mind is that Azure is not as broadly adopted as AWS and offers less flexibility for storage performance tiers and instance sizes. Oracle Database software running on Windows is also not as common as running on Linux.

 

For more in-depth technical details on these options, I encourage you to read our white paper, Migrating Oracle Databases to Cloud. My next blog in this series will look at one other option not discussed here: migrating to Oracle Cloud.


Categories: DBA Blogs

CHANGE STANDBY DATABASE PROTECTION MODE

Learn DB Concepts with me... - Fri, 2016-04-01 08:00
SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

SQL> show parameter log_archive_dest_2

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2             string     SERVICE=ORCLSTB1 NOAFFIRM ASYN
                         C VALID_FOR=(ONLINE_LOGFILES,P
                         RIMARY_ROLE) DB_UNIQUE_NAME=OR
                         CLSTB1
log_archive_dest_20             string
log_archive_dest_21             string
log_archive_dest_22             string
log_archive_dest_23             string
log_archive_dest_24             string
log_archive_dest_25             string
log_archive_dest_26             string

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_27             string
log_archive_dest_28             string
log_archive_dest_29             string
SQL> show parameter db_unique_name

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                 string     ORCL
SQL> show parameter log_archive_config

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_config             string     dg_config=(ORCL,ORCLSTB1,ORCLS
                         TB2)
SQL> alter system set log_archive_dest_2='SERVICE=ORCLSTB1 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLSTB1';

System altered.

SQL> show parameter log_archive_dest_2

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2             string     SERVICE=ORCLSTB1 NOAFFIRM ASYN
                         C VALID_FOR=(ONLINE_LOGFILES,P
                         RIMARY_ROLE) DB_UNIQUE_NAME=OR
                         CLSTB1
log_archive_dest_20             string
log_archive_dest_21             string
log_archive_dest_22             string
log_archive_dest_23             string
log_archive_dest_24             string
log_archive_dest_25             string
log_archive_dest_26             string

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_27             string
log_archive_dest_28             string
log_archive_dest_29             string

SQL> alter system set log_archive_dest_2='SERVICE=ORCLSTB1 NOAFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLSTB1';

System altered.


SQL> alter database set standby database to maximize availability;

Database altered.

NOTE: You don't need to shut down your instance when you are changing the protection mode from MAXIMUM PERFORMANCE to MAXIMUM AVAILABILITY. But you do need to if you are going to MAXIMUM PROTECTION.
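
For reference, a sketch of what the MAXIMUM PROTECTION change would look like (not run here; it assumes a SYNC destination is already in place and requires a restart):

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database set standby database to maximize protection;
SQL> alter database open;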

SQL> alter system switch logfile;

System altered.

SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM AVAILABILITY

SQL> archive log list;
Database log mode           Archive Mode
Automatic archival           Enabled
Archive destination           /u01/app/oracle/oraarch/
Oldest online log sequence     239
Next log sequence to archive   241
Current log sequence           241
SQL> select group#,bytes/1024/1024 from v$standby_log;

    GROUP# BYTES/1024/1024
---------- ---------------
     4        52
     5        52
     6        52
     7        52


SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
     1          240

SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM AVAILABILITY



SQL> alter system switch logfile;

System altered.
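
For completeness, the reverse change (back to MAXIMUM PERFORMANCE) follows the same pattern; a sketch without the output, reusing the exact destination string from above:

SQL> alter system set log_archive_dest_2='SERVICE=ORCLSTB1 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLSTB1';

SQL> alter database set standby database to maximize performance;

SQL> select protection_mode from v$database;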

Categories: DBA Blogs