
DBA Blogs

Oracle University Instructors on the Cruise Ship

The Oracle Instructor - Wed, 2015-03-04 14:14

Oracle User Group Norway Annual Conference

I'm really looking forward to speaking at the Oracle User Group Norway Spring Seminar 2015, together with my dear colleague Joel Goodman! For sure it's one of the highlights of this year in terms of Oracle Events.

Joel will present about Oracle Automatic Parallel Execution on MAR-12, 6pm, and about Oracle 12c Automatic Data Optimization and Heat Map on MAR-13, 9:30am.

Yours sincerely will talk about The Data Guard Broker – Why it is recommended on MAR-12, 6pm, and about The Recovery Area – Why it is recommended on MAR-13, 8:30am.

Joel Goodman & Uwe Hesse

The OUGN board has again gathered an amazing lineup of top-notch speakers for this event, so I will gladly take the opportunity to improve my knowledge :-)


Tagged: #ougn2015
Categories: DBA Blogs

Oracle APEX_WEB_SERVICE REST API call

Kubilay Çilkara - Wed, 2015-03-04 12:15
In this post I will try to show you how I used Oracle Apex and the APEX_WEB_SERVICE PL/SQL package to quickly send a request to a public Internet API and how I handled the response. The code below was written during a 'Hackday' and hasn't been extensively tested.

My use case is integrating Oracle Apex with the public Mendeley REST API for Mendeley Catalog Search.

The idea was to build an application in Oracle Apex to query the Mendeley REST API Catalog with a keyword. The Mendeley REST API returns a JSON response, so I used PL/JSON to parse it. I hear that in Oracle 12c JSON is going to be a native data type. My Oracle Apex host is running Oracle 11g, so I used PL/JSON for ease.

To cut it short, here is how the Mendeley Catalog Search application on Oracle Apex looks. (Click the image to go to the app, or visit http://apex2.enciva.co.uk/apex/f?p=864:2)

To integrate with Mendeley REST API from Oracle Apex, I used one PL/SQL function and one procedure.

I used the function to obtain the Mendeley REST API Client Credentials Authorisation flow token, and the procedure to make the API request to Mendeley Catalog Search and to handle the response.

Here is the MENDELEY_CALL PL/SQL function I created:

This function returns the Client Credentials Authorisation Flow token from the Mendeley REST API:

create or replace function mendeley_call (p_id in varchar2)
  return varchar2
is
  v_token         varchar2(1000);
  token           varchar2(1000);
  jtoken          json;
  v_grant_type    varchar2(400) := 'client_credentials';
  v_client_id     varchar2(500) := p_id;
  v_client_secret varchar2(500) := '<put_your_mendeley_client_secret_here>';
  v_scope         varchar2(300) := 'all';
begin
  /*----------Setting Headers----------------------------------------*/
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded; charset=utf-8';
  /*-----------------------------------------------------------------*/

  -- request the token via the Client Credentials flow
  token := apex_web_service.make_rest_request
    (
      p_url         => 'https://api.mendeley.com/oauth/token'
    , p_http_method => 'POST'
    , p_parm_name   => apex_util.string_to_table('grant_type:client_id:client_secret:scope')
    , p_parm_value  => apex_util.string_to_table(v_grant_type||':'||v_client_id||':'||v_client_secret||':'||v_scope)
    , p_wallet_path => 'file:/home/app/oracle/product/11.2.0/dbhome_1/owm/wallets/oracle'
    , p_wallet_pwd  => '<put_your_oracle_wallet_password_here>'
    );
  -- debug
  -- dbms_output.put_line(token);

  -- parse the JSON response and extract the access token
  jtoken  := json(token);
  v_token := json_ext.get_string(jtoken,'access_token');
  -- debug
  -- dbms_output.put_line(v_token);
  return v_token;
exception
  when others then
    raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
end;
/
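
A quick way to smoke-test the function from SQL*Plus, as a sketch (the client id below is a placeholder, not a real credential):

set serveroutput on
declare
  l_token varchar2(1000);
begin
  -- '<put_your_mendeley_client_id_here>' is a placeholder you must replace
  l_token := mendeley_call('<put_your_mendeley_client_id_here>');
  dbms_output.put_line('token starts with: '||substr(l_token,1,16)||'...');
end;
/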


Here is the anonymous PL/SQL block which I put into a PL/SQL region on the Oracle Apex page:

This block incorporates the function above, makes the request, and handles the response from the Mendeley REST API.

Note how it calls the function MENDELEY_CALL (above) to load the variable v_token.

DECLARE
  v_token           VARCHAR2(599) := mendeley_call('<put_your_mendeley_client_id_here>');
  v_search          VARCHAR2(500);
  mendeley_document NCLOB;
  v_status          VARCHAR2(100);
  obj               json_list;
  v_id              VARCHAR2(100);
  v_title           NVARCHAR2(1000);
  v_abstract        NCLOB; -- varchar2(32000);
  v_link            VARCHAR2(1000);
  v_source          VARCHAR2(500);
  v_type            VARCHAR2(100);
  v_pct_hit         VARCHAR2(10);
  v_rows            NUMBER(10);
  v_batch_id        NUMBER(10);
BEGIN
  -- Oracle Wallet
  utl_http.set_wallet('file:/home/app/oracle/product/11.2.0/dbhome_1/owm/wallets/oracle',
                      'my_secret_password');
  -- Set UTF-8 as the body charset: the following line is necessary if you
  -- need to use languages other than Latin with the APEX_WEB_SERVICE package
  utl_http.set_body_charset('UTF-8');
  -- Build the request headers; note the Authorization header goes into
  -- index 2 so it does not overwrite the Content-Type header in index 1
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/jsonrequest';
  apex_web_service.g_request_headers(2).name  := 'Authorization';
  apex_web_service.g_request_headers(2).value := 'Bearer '||v_token;

  -- Make the request and load the response into a CLOB
  mendeley_document := apex_web_service.make_rest_request
      (
        p_url         => 'https://api.mendeley.com:443/search/catalog'
      , p_http_method => 'GET'
      , p_parm_name   => apex_util.string_to_table('title:limit')
      , p_parm_value  => apex_util.string_to_table('Mendeley:10')
      );
  -- Load the response into a JSON_LIST PL/JSON object
  obj := json_list(mendeley_document);
  -- Start extracting values from the JSON and write some output
  -- Traverse the JSON_LIST and extract the elements you like
  FOR i IN 1..obj.count
  LOOP
    v_id       := json_ext.get_string(json(obj.get(i)),'id');
    v_title    := json_ext.get_string(json(obj.get(i)),'title');
    v_abstract := json_ext.get_string(json(obj.get(i)),'abstract');
    v_link     := json_ext.get_string(json(obj.get(i)),'link');
    v_source   := json_ext.get_string(json(obj.get(i)),'source');
    v_type     := json_ext.get_string(json(obj.get(i)),'type');
    -- write the extracted data
    dbms_output.put_line(v_title||' ==> '||v_abstract);
  END LOOP;
END;
/

This shows how easy it is, in this case using one function and one procedure, to make a REST API request to an external web service from Oracle Apex.
Categories: DBA Blogs

Parallel Execution -- 2 PX Servers

Hemant K Chitale - Tue, 2015-03-03 09:51
I've posted a couple of examples (here and here) of Parallel Execution servers for Parallel Query.

How do we identify usage of Parallel Execution ?

I will update this post (and, possibly, subsequent post(s)) with a few methods.

The first one (as I've shown in my previous posts) is to look at the column PX_SERVERS_EXECUTIONS in either V$SQLSTATS or V$SQL.  This can identify the number of PX Servers used for an SQL (Query or DML).  However, there is a caveat when the SQL undergoes multiple executions -- the PX_SERVERS_EXECUTIONS statistic may be cumulative (i.e. additive) across all the executions of the SQL.  UPDATE 13-Mar-15 : See the new post here.
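
As a sketch of that first method (column names as of 11.2), something like this shows the cumulative PX figure next to the execution count, so multi-execution SQL is at least visible:

-- cumulative PX server usage alongside the number of executions
select sql_id,
       executions,
       px_servers_executions
from   v$sqlstats
where  px_servers_executions > 0
order  by px_servers_executions desc;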

Another method is to look at the V$PX_PROCESS and V$PX_SESSION views.

Let me demonstrate this second method using the same SQL query from my previous blog post.

[oracle@localhost ~]$ sqlplus hemant/hemant

SQL*Plus: Release 11.2.0.2.0 Production on Tue Mar 3 23:34:37 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

HEMANT>select distinct sid from v$mystat;

SID
----------
197

HEMANT>select count(*) from v$px_process;

COUNT(*)
----------
0

HEMANT>select count(*) from v$px_session;

COUNT(*)
----------
0

HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 4
resource_manager_cpu_allocation integer 4
HEMANT>set serveroutput off
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select count(*) from v$px_process;

COUNT(*)
----------
16

SYS>select qcsid, req_degree, degree, count(*)
2 from v$px_session
3 group by qcsid, req_degree, degree
4 /

QCSID REQ_DEGREE DEGREE COUNT(*)
---------- ---------- ---------- ----------
197 1
197 16 16 16

SYS>

The query by the SYS user is from a different session while the "select /*+ PARALLEL */ count(*) from Large_Table;" is being executed by HEMANT.  This query is on V$PX_SESSION and returns rows only while the Parallel Query sessions are active -- i.e. while HEMANT's parallel count(*) query is running.  (If I query V$PX_SESSION after the parallel count(*) completes, I won't get the information.)

The above output demonstrates
(a) that there were no PX servers before I began the parallel count(*) query and there were 16 at the end -- 16 PX servers had been started and had not yet shut down by the time I queried V$PX_PROCESS (they will shut down after a while, see the ** note below).
(b) that my parallel count(*) query (executed by SID 197, which is the Query Co-ordinator -- represented by QCSID) DID request and use 16 PX server sessions (as evidenced in the output from the query on V$PX_SESSION).  Thus, what I claimed on the basis of PX_SERVERS_EXECUTIONS in my previous post is correct.

** Note : A few minutes later, I can see that the PX Servers have shut down.

HEMANT>select count(*) from v$px_process
2 /

COUNT(*)
----------
0

HEMANT>


Later, I will demonstrate how to join V$PX_PROCESS and V$PX_SESSION.
I will also demonstrate how you manage the number of PX Servers.
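Until then, a minimal sketch of such a join (on SID and SERIAL#; column names as of 11.2) to map each PX server process to its Query Co-ordinator:

-- which PX servers are serving which Query Co-ordinator (QCSID) right now
select p.server_name, p.status, s.qcsid, s.server_set, s.req_degree, s.degree
from   v$px_process p, v$px_session s
where  p.sid     = s.sid
and    p.serial# = s.serial#
order  by s.qcsid, p.server_name;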

.
.
.

Categories: DBA Blogs

Different plan_hash_value same plan

Bobby Durrett's DBA Blog - Mon, 2015-03-02 15:38

I mentioned this same effect in an earlier post about SQL profiles: link

I get a different plan_hash_value for a query each time I run an explain plan or run the query.  I see this in queries whose plan includes a system generated temporary segment, like this:

|   1 |  TEMP TABLE TRANSFORMATION   |                             |
...
|  72 |    TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D668C_764DD84C |

For some reason the system generated temporary table name gets included in the plan_hash_value calculation.  This makes plan_hash_value a less than perfect way to compare two plans to see if they are the same.
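
If you want a feel for how many of your own cursors are exposed to this, a sketch like the following lists statements whose plans reference a system generated temp table (the SYS_TEMP% pattern is an assumption based on how the generated names look above):

-- candidates for unstable plan_hash_value values
select distinct sql_id, plan_hash_value
from   v$sql_plan
where  object_name like 'SYS_TEMP%'
order  by sql_id;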

Last week I was using my testselect package to test the effect of applying a patch to fix bug 20061582.  I used testselect to grab 1160 select statements from production and got their plans with and without the patch applied on a development database.  I didn't expect many, if any, plans to change based on what the patch does.  Surprisingly, 115 out of the 1160 select statements had a changed plan, but all the ones I looked at had the system generated temporary table names in their plan.

Now, I am going to take the queries that have different plans with and without the patch and execute them both ways.  I have a feeling that the plan differences are mainly due to system generated temp table names and their execution times will be the same with and without the patch.

I’ve run across other limitations of plan hash value as I mentioned in an earlier post: link

I’m still using plan_hash_value to compare plans but I have a list of things in my head that reminds me of cases where plan_hash_value fails to accurately compare two plans.

– Bobby

P.S. After posting this I realized that I didn’t know how many of the 115 select statements with plans that differed with and without the patch had system generated temp tables.  Now I know.  114 of the 115 have the string “TEMP TABLE TRANSFORMATION” in their plans.  So, really, there is only one select statement for which the patch may have actually changed its plan.

P.P.S. I reapplied the patch and verified that the one sql_id didn’t really change plans with the patch.  So, that means all the plan changes were due to the system generated name.  Also, all the executions times were the same except for one query that took 50 seconds to parse without the patch and 0 with the patch.  So, one of the queries with the system generated temp table name happened to benefit from the patch.  Very cool!

P.P.P.S This was all done on an 11.2.0.4 Exadata system.

Categories: DBA Blogs

Parallel Execution -- 1b The PARALLEL Hint and AutoDoP (contd)

Hemant K Chitale - Mon, 2015-03-02 09:38
Continuing the previous thread: having restarted the database again, with the same CPU_COUNT and missing I/O Calibration statistics ....

The question this time is : What if the table level DoP is specifically 1 ?

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Mon Mar 2 23:22:28 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 4
SYS>show parameter parallel

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 135
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 64
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
SYS>
SYS>select * from dba_rsrc_io_calibrate;

no rows selected

SYS>
SYS>connect hemant/hemant
Connected.
HEMANT>set serveroutput off
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>alter table large_table parallel 1;

Table altered.

HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2622 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
8

HEMANT>

Aaha ! Again ! The same SQL statement, the same SQL_ID, the same Execution Plan (Plan Hash Value), and Oracle chose to use 8 PX Servers for the query !  Again, ignoring the table level DoP (of 1).

So, once again, we see that Oracle actually computes a DoP that looks like it is CPU_COUNT x PARALLEL_THREADS_PER_CPU. Let's verify this.

HEMANT>connect / as sysdba
Connected.
SYS>alter system set parallel_threads_per_cpu=4;

System altered.

SYS>alter system flush shared_pool;

System altered.

SYS>connect hemant/hemant
Connected.
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>set serveroutput off
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8b0ybuspqu0mm, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from Large_Table

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 1311 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 1311 (1)| 00:00:16 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 1311 (1)| 00:00:16 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='8b0ybuspqu0mm';

PX_SERVERS_EXECUTIONS
---------------------
16

HEMANT>

YES SIR ! Oracle chose to use 16 PX Servers this time. So that does look like CPU_COUNT x PARALLEL_THREADS_PER_CPU.  Have you also noticed the COST ?  The COST has also dropped to half.  So, the COST is also computed based on the number of PX Servers that it expects to be able to grab and use.  (Notice that 2622 x 8 = 1311 x 16 = 20976, so in these two runs the cost scales inversely with the computed DoP.)

.
.
.


Categories: DBA Blogs

Webcast - Oracle Database 12c High Availability New Features

Organizations today are dependent on IT to run efficient operations, quickly analyze information and compete more effectively. Consequently, it is essential that their IT infrastructure and databases...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Even More Oracle Database Health Checks with ORAchk 12.1.0.2.1 and 12.1.0.2.3 (Beta)

As we have discussed before, it can be a challenge to quantify how well your database is meeting operational expectations and identify areas to improve performance. Database health checks are...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Log Buffer #412, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-02-27 10:58

This Log Buffer Edition makes its way through the realms of Oracle, SQL Server and MySQL and brings you some of the blog posts.

Oracle:

Introducing Oracle Big Data Discovery Part 3: Data Exploration and Visualization

FULL and NO_INDEX Hints

Base64 Encode / Decode with Python (or WebLogic Scripting Tool) by Frank Munz

Why I’m Excited About Oracle Integration Cloud Service – New Video

Reminder: Upgrade Database 12.1.0.1 to 12.1.0.2 by July 2015

SQL Server:

An article about how we underestimate the power of joins and degrade our query performance by not using proper joins

Most large organizations have implemented one or more big data applications. As more data accumulates, internal users and analysts execute more reports and forecasts, which leads to additional queries and analysis, and more reporting.

How do you develop and deploy your database?

A database must be able to maintain and enforce the business rules and relationships in data in order to maintain the data model.

Error handling with try-catch-finally in PowerShell for SQL Server

MySQL:

MySQL Enterprise Monitor 3.0.20 has been released

MySQL Cluster 7.4 is GA!

Connector/Python 2.1.1 Alpha released with C Extension

Worrying about the ‘InnoDB: detected cycle in LRU for buffer pool (…)’ message?

MySQL Cluster 7.4 GA: 200 Million QPS, Active-Active Geographic Replication and more

Categories: DBA Blogs

Parallel Execution -- 1 The PARALLEL Hint and AutoDoP

Hemant K Chitale - Thu, 2015-02-26 09:29
The behaviour of the PARALLEL Hint has changed subtly but significantly in 11.2.  From the documentation :
Beginning with Oracle Database 11g Release 2 (11.2.0.1), the PARALLEL and NO_PARALLEL hints are statement-level hints and supersede the earlier object-level hints .......... If you omit integer, then the database computes the degree of parallelism.

Further down, the documentation states :
This hint overrides the value of the PARALLEL_DEGREE_POLICY initialization parameter. 
and 
PARALLEL: The statement always is run parallel, and the database computes the degree of parallelism, which can be 2 or greater.

It is important to note that the Statement Level PARALLEL Hint *overrides* the PARALLEL_DEGREE_POLICY.
So, even if PARALLEL_DEGREE_POLICY is set to MANUAL, implying that automatic degree of parallelism is disabled, the PARALLEL Hint, itself, allows Oracle to auto-compute a DoP.
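
For reference, here is a sketch of the hint variants being discussed (LARGE_TABLE stands in for any table):

select /*+ PARALLEL */ count(*) from large_table;        -- statement level, DoP computed by the database
select /*+ PARALLEL(8) */ count(*) from large_table;     -- statement level, fixed DoP of 8
select /*+ PARALLEL(t 4) */ count(*) from large_table t; -- older object-level form, DoP 4 for the named table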

What further complicates understanding of the behaviour is a reading of Oracle Support Note 1269321.1 that implies that if I/O calibration statistics are missing, Oracle does not use the automatic degree of parallelism feature.
So, one would assume that if I/O Calibration is not done, with the "automatic degree of parallelism" feature not being used, the PARALLEL Hint would not compute any Auto DoP !

Let's run a simple test case :

HEMANT>select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

HEMANT>
HEMANT>show parameter parallel;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 40
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 16
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
HEMANT>
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 10488 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 10488 (1)| 00:02:06 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 10488 (1)| 00:02:06 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
2

HEMANT>
HEMANT>select * from dba_rsrc_io_calibrate;

no rows selected

HEMANT>

Apparently, I/O Calibration statistics are not present ("are missing"). And yet, Oracle chose to use 2 PX Servers (not 4, not 1) for the query.  Isn't this confusing ?  Is AutoDoP used or is it not used ?
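
As an aside: if you did want the calibration statistics to be present, a minimal sketch of gathering them with DBMS_RESOURCE_MANAGER.CALIBRATE_IO (run it at low load; the input values below are illustrative only) would be:

set serveroutput on
declare
  l_max_iops   pls_integer;
  l_max_mbps   pls_integer;
  l_actual_lat pls_integer;
begin
  dbms_resource_manager.calibrate_io(
    num_physical_disks => 1,   -- illustrative value
    max_latency        => 20,  -- illustrative value, in milliseconds
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_lat);
  dbms_output.put_line('max_iops='||l_max_iops||' max_mbps='||l_max_mbps);
end;
/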

Let's make a change somewhere (else ?)

HEMANT>show parameter cpu_count

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 1

HEMANT>

Let's try on a server with more CPUs.  I reconfigure the same VM to run with 4 "CPUs" (cores) and restart the VM and database instance.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Thu Feb 26 23:18:47 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 4
SYS>show parameter parallel

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 135
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 64
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
SYS>

We can now see that CPU_COUNT, PARALLEL_MAX_SERVERS and PARALLEL_SERVERS_TARGET have all gone up.

SYS>connect hemant/hemant
Connected.
HEMANT>set serveroutput off
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2622 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>
HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
8

HEMANT>

Aaha ! The same SQL statement, the same SQL_ID, the same Execution Plan (Plan Hash Value) and Oracle chose to use 8 (not 1, not 4, not 2) PX servers for the query !

So, there IS some way that AutoDoP is being used even when I/O calibration statistics are missing. Is this AutoDoP simply a function of CPU_COUNT x PARALLEL_THREADS_PER_CPU ?

UPDATE :  Herlindo QV also confirmed to me on Twitter : "it seems that way and in RAC the formula looks like (CPU_COUNT x PARALLEL_THREADS_PER_CPU) x RAC nodes"  and, later, when I ask him to check with PARALLEL_FORCE_LOCAL=TRUE : "right, restricts parallel to just one instance. The tricky part is how to choose the best DOP for each unique scenario"
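A quick sanity check of that formula on a single instance, as a sketch (it ignores PARALLEL_DEGREE_LIMIT and RAC):

-- expected "PARALLEL hint" DoP = CPU_COUNT x PARALLEL_THREADS_PER_CPU
select (select to_number(value) from v$parameter where name = 'cpu_count')
     * (select to_number(value) from v$parameter where name = 'parallel_threads_per_cpu') as expected_dop
from   dual;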
.
.
.

Categories: DBA Blogs

Virtual CPUs with Google Compute Engine

Pythian Group - Wed, 2015-02-25 12:12

Continuing on my series of virtual CPU behavior in Amazon Web Services, Amazon Web Service HVM instances, and Microsoft Azure, I’m taking a look at Google Compute Engine (GCE). GCE is a relative newcomer to the public cloud world, having become generally available in December 2013. It does have some interesting features, including transparent maintenance through live migration, and automatic sustained-use discounts without upfront commitments.

Unlike Amazon or Microsoft, Google is very upfront about their vCPU definition.

For the n1 series of machine types, a virtual CPU is implemented as a single hyperthread on a 2.6GHz Intel Sandy Bridge Xeon or Intel Ivy Bridge Xeon (or newer) processor. This means that the n1-standard-2 machine type will see a whole physical core.

I still believe calling such a hyperthread a “virtual CPU” is misleading. When creating a virtual machine in a non-cloud VM platform, 1 virtual CPU = 1 physical core. Plain and simple. But when using a cloud platform, I need 2 virtual CPUs to get that same physical core.


Anyways, off to run some CPU tests. n1-standard-4 is a close match to the m3.xlarge instances previously tested, so I’ll try that.

Getting set up on Google Compute Engine

I already signed up for Google Compute Engine’s free trial, created a project I’m calling marc-cpu-test, and installed the gcloud compute command-line tools.

[marc@quartz ~]$ gcloud auth login --no-launch-browser
Go to the following link in your browser:

https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%(redacted)&access_type=offline

Enter verification code: (redacted)
Saved Application Default Credentials.

You are now logged in as [fielding@pythian.com].
Your current project is [None].  You can change this setting by running:
  $ gcloud config set project PROJECT
[marc@quartz ~]$ gcloud config set project marc-cputest
[marc@quartz ~]$ gcloud config set compute/zone us-central1-a
[marc@quartz ~]$ gcloud compute instances create cpu-test-n4 --image centos-6 --machine-type "n1-standard-4" --zone us-central1-a
Created [https://www.googleapis.com/compute/v1/projects/marc-cputest/zones/us-central1-a/instances/cpu-test-n4].
NAME        ZONE          MACHINE_TYPE  INTERNAL_IP    EXTERNAL_IP   STATUS
cpu-test-n4 us-central1-a n1-standard-4 10.240.222.194 104.154.75.96 RUNNING
[marc@quartz ~]$ gcloud compute ssh cpu-test-n4
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/marc/.ssh/google_compute_engine.
Your public key has been saved in /home/marc/.ssh/google_compute_engine.pub.
The key fingerprint is:
(redacted)
Updated [https://www.googleapis.com/compute/v1/projects/marc-cputest].
Warning: Permanently added '104.154.75.96' (RSA) to the list of known hosts.
Warning: Permanently added '104.154.75.96' (RSA) to the list of known hosts.

OK, instance all set and connected. As a CentOS 6 image it doesn’t allow SSH root logins by default, so attempting to set up a gcloud environment as a root user will get you “permission denied” errors on SSH. Serves me right for trying to run these tools as root in the first place :-).

Looking around

Checking what they got us:

[marc@cpu-test-n4 ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
processor       : 1
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
processor       : 2
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2
processor       : 3
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2

Google has redacted the exact CPU model numbers, but has clearly marked this as a 2-core system with core IDs 0 and 1.

The single-CPU case
[marc@cpu-test-n4 ~]$ taskset -pc 0 $$
pid 1558's current affinity list: 0-3
pid 1558's new affinity list: 0
[marc@cpu-test-n4 ~]$  dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 14.3427 s, 151 MB/s
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 29.3081 s, 74.1 MB/s
2170552320 bytes (2.2 GB) copied, 29.3065 s, 74.1 MB/s

We get a nice boost in raw CPU numbers as compared to the 120 MB/s I saw in AWS. With two processes sharing this CPU, we see a tiny bit less than half the throughput.

Sharing the cores (or trying to)
[marc@cpu-test-n4 ~]$ taskset -pc 0,1 $$
pid 1558's current affinity list: 0
pid 1558's new affinity list: 0,1
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
[1] 1803
[2] 1805
[marc@cpu-test-n4 ~]$ 2170552320 bytes (2.2 GB) copied, 14.6959 s, 148 MB/s
2170552320 bytes (2.2 GB) copied, 14.7139 s, 148 MB/s

This is interesting; we see almost full-speed throughput in spite of processors 0 and 1 supposedly sharing a core. With processors 0 and 2 the situation is the same.

[marc@cpu-test-n4 ~]$ taskset -pc 0,2 $$
pid 1558's current affinity list: 0,1
pid 1558's new affinity list: 0,2
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
[1] 1830
[2] 1833
[marc@cpu-test-n4 ~]$ 2170552320 bytes (2.2 GB) copied, 14.6683 s, 148 MB/s
2170552320 bytes (2.2 GB) copied, 14.6692 s, 148 MB/s

Is the CPU scheduler ignoring my taskset commands? I ran mpstat with 2-second samples during the test to see actual CPU usage:

[marc@cpu-test-n4 ~]$ mpstat -P ALL 2
...
06:08:44 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
06:08:46 PM  all   46.31    0.00    3.75    0.00    0.00    0.00    0.00    0.00   49.94
06:08:46 PM    0   93.50    0.00    6.50    0.00    0.00    0.00    0.00    0.00    0.00
06:08:46 PM    1   92.00    0.00    8.00    0.00    0.00    0.00    0.00    0.00    0.00
06:08:46 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
06:08:46 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
...
06:08:52 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
06:08:54 PM  all   46.75    0.00    3.25    0.00    0.00    0.00    0.00    0.00   50.00
06:08:54 PM    0   93.47    0.00    6.53    0.00    0.00    0.00    0.00    0.00    0.00
06:08:54 PM    1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
06:08:54 PM    2   93.50    0.00    6.50    0.00    0.00    0.00    0.00    0.00    0.00
06:08:54 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

So the taskset commands are working: when we ask for CPUs 0 and 1, we are getting them, but throughput shows that cores aren’t being shared. It means that the CPUs in the virtual machine are not statically bound to hardware threads as seen under AWS. I’d call it a win, as it gives more consistent performance even if the guest operating system is forced to make poor CPU scheduling decisions, as in this case.

[marc@cpu-test-n4 ~]$ taskset -pc 0-3 $$
pid 1558's current affinity list: 0,2
pid 1558's new affinity list: 0-3
[marc@cpu-test-n4 ~]$ for i in {1..4}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 22.9823 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 22.9914 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 22.9915 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 23.1333 s, 93.8 MB/s

This is more the throughput we would expect with two shared cores.

Lessons learned

Over the course of these tests, I’ve discovered a few things:

  • Although, like its competitors, Google maps virtual CPUs to hyperthreads of a shared core, it is very upfront about this behavior.
  • Actual throughput for a simple gzip workload is excellent.
  • Google Compute Engine has an abstraction layer in front of CPUs that dynamically schedules tasks between CPU threads, in addition to the regular scheduler in the virtual machine. In my testing, it allocates tasks efficiently across CPU cores, even when the OS scheduler is configured suboptimally.
Categories: DBA Blogs

Exadata X5 – A Practical Point of View of the New Hardware and Licensing

Pythian Group - Wed, 2015-02-25 12:10

Oracle recently announced its latest iteration of Exadata – X5-2. It includes a refresh of the hardware to the most recent Xeon® E5-2699 v3 CPUs. These new CPUs boost the total core count in a full rack to 288. This is higher than the current 8-socket “big machine” version X4-8, which has only 240 cores.

But the most exciting part is the all flash version of Exadata. In the previous generation – X4 – Oracle had to switch from 15K drives to 10K drives in order to boost capacity from 600 GB to 1200 GB per hard drive to keep disk space higher than flash cache size. At the time of the X4 announcement, we were already wondering why Oracle was still offering high-speed disks and not switching to all flash, and now we know why: because that type of high-performance flash wasn’t quite ready.

Maintaining high IO rates over long periods of time required some changes to the ILOM in order to manage cooling fan speed based on many individual temperature sensors inside the flash cards (details). Removing the SAS controller and using the new NVMe connectivity resulted in much higher bandwidth per drive – 3.2 GBytes/sec vs. the old 1.2 GBytes/sec SAS.

With temperature and bandwidth sorted out, we now have a super-high performance option (EF – Extreme Flash) for Exadata which delivers a stunning 263 GB/sec uncompressed scan speed in a full rack. The difference in performance between the High Capacity and the High Performance EF flash option is now much bigger. The high-performance option in Exadata X5 is now viable. In Exadata X4 it made so little difference that it was pointless.

x4 vs x5

The one thing I wonder with the X5 announcement is why the X5-2 storage server still uses the very old and quite outdated 8 core CPUs. I’ve seen many cases where a Smart Scan on an HCC table is CPU bound on the storage server even when reading from spinning disk. I am going to guess that there’s some old CPU inventory to clean up. But that may not end up being such a problem (see “all columnar” flash cache feature).

But above all, the most important change was the incremental licensing option. With 36 cores per server, even the 1/8th rack configuration ran into multiple millions of dollars in licenses, and in many cases was too much for the problem at hand.

The new smallest configuration is:

  • 1/8th rack, with 2 compute nodes
  • 8 cores enabled per compute node (16 total)
  • 256 GB RAM per node (upgradable to 768 GB per node)
  • 3 storage servers with only half the cores, disks and flash enabled

Then you can license additional cores as you need them, 2 cores at a time, similar to how the ODA licensing option works. You cannot reduce licensed cores.

The licensing rule changes go even further. Now you can mix & match compute and storage servers to create even more extreme options. Some non-standard examples:

  • Extreme Memory – more compute nodes with max RAM, reduced licensed cores
  • Extreme Storage – replace compute node with storage nodes, reduced licensed cores

x5 custom

In conclusion, the Oracle Exadata X5 configuration options and the changes they bring to licensing allow an architect to craft a system that will meet any need and allow for easy, small-step increments in the future, potentially without any hardware changes.

There are many more exciting changes in Oracle 12c, Exadata X5 and the new storage server software which I may cover in the future as I explore them in detail.

Categories: DBA Blogs

Log Buffer #411, A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2015-02-25 12:00

This Log Buffer Edition brings you some blog posts from Oracle, SQL Server and MySQL.

Oracle:

Suppose you have a global zone with multiple zpools that you would like to convert into a native zone.

The digital revolution is creating abundance in almost every industry—turning spare bedrooms into hotel rooms, low-occupancy commuter vehicles into taxi services, and free time into freelance time

Every time I attend a conference, the Twitter traffic about said conference is obviously higher.  It starts a couple weeks or even months before, builds steadily as the conference approaches, and then hits a crescendo during the conference.

Calling All WebLogic Users: Please Help Us Improve WebLogic Documentation!

Top Two Cloud Security Concerns: Data Breaches and Data Loss

SQL Server:

This article describes a way to identify the user who truncated the table & how you can recover the data.

When SQL Server 2014 was released, it included Hekaton, Microsoft’s much talked about memory-optimized engine that brings In-Memory OLTP into play.

Learn how you can easily spread your backup across multiple files.

Daniel Calbimonte has written a code comparison for MariaDB vs. SQL Server as it pertains to how to comment, how to create functions and procedures with parameters, how to store query results in a text file, how to show the top n rows in a query, how to use loops, and more.

The article shows a simple way we managed to schedule index rebuilds and reorgs for an SQL instance with 106 databases used by one application, using a scheduled job.

MySQL:

How to setup a PXC cluster with GTIDs (and have async slaves replicating from it!)

vCloud Air and business-critical MySQL

MySQL Dumping and Reloading the InnoDB Buffer Pool

How to benchmark MongoDB

MySQL Server on SUSE 12

Categories: DBA Blogs

SQL Server 2014 Cumulative Update 6

Pythian Group - Wed, 2015-02-25 11:59

Hello everyone,

Just a quick note to let you know that this week, while most of North America was enjoying a break, Microsoft released the 6th cumulative update for SQL Server 2014. This update contains fixes for 64 different issues, distributed as follows:

SQL 2014 Cumulative Update 6

As the name implies, this is a cumulative update, which means it is not necessary to install the previous 5 in case you don’t have them. Please remember to test any update thoroughly before applying it to production.

The cumulative update and the full release notes can be found here: https://support.microsoft.com/kb/3031047/en-us?wa=wsignin1.0


Categories: DBA Blogs

Partner Webcast – Oracle Business Process Management 12c : The Game Changer for your Business

The Oracle Business Process Management Suite 12c (BPM) is one of the most complete BPM suites in the market and among the most feature-rich BPM suite offerings. There have been a wide variety of changes...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Speaking at Delphix User Group Webex March 11

Bobby Durrett's DBA Blog - Tue, 2015-02-24 17:42

On March 11 at 10 am California time I will be speaking in a Delphix User Group Webex session.

Here is the sign up url: WebEx sign up.

Adam Leventhal, the Delphix CTO, will also be on the call previewing the new Delphix 4.2 features.

I will describe our experience with Delphix and the lessons we have learned.  It is a technical talk, so it should have enough detail to be of value to a technical audience.  Hopefully I have put enough effort into the talk to make it useful to other people who have Delphix or are considering getting it.

There will be time for questions and answers in addition to our talks.

I really enjoy doing user group presentations.  I had submitted an abstract for this talk to the Collaborate 2015 Oracle user group conference but it was not accepted, so I won’t get a chance to give it there.  But this WebEx event gives me a chance to present the same material, so I’m happy to have this opportunity.

If you have an interest in hearing about Delphix, join the call.  It is free and there will be some good technical content.

– Bobby

P.S. If this talk interests you I also have some earlier blog posts that relate to some of the material I will be covering:

Delphix first month

Delphix direct i/o and direct path reads

Delphix table recovery

Also, I plan to post the slides after the talk.

Categories: DBA Blogs

Oracle REST data services on Oracle Database Cloud Application Express

For those familiar with Oracle Application Express (aka APEX) - Oracle’s web-based application development tool - you probably know that Oracle Application Express Listener is now known...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Final version of RMOUG 2015 presentation and script files are here ...

Grumpy old DBA - Sat, 2015-02-21 16:46
Wow, I had a great time presenting at RMOUG 2015.  The room was absolutely packed and I felt the presentation went pretty well (so let's see what the evaluation numbers show).

Here is a link (Final version presentation and supporting files) to a zip file containing the final version of the presentation as well as the script and SQL file shown at the end of the presentation.  Please let me know if anyone has any problems using this, and sorry, I thought I would get this updated and available Thursday.

Thanks so much to RMOUG!

John
Categories: DBA Blogs

Data Guard Logical Standby – what does it mean?

The Oracle Instructor - Fri, 2015-02-20 11:34

With Data Guard, you have the choice between Physical and Logical Standby databases. Let’s see the differences! My demo starts with a Physical Standby that is then converted into a Logical Standby (hence the name of the database):

[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
    prima - Primary database
    logst - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

For now, logst is still a Physical Standby. It is called that way because the datafiles of prima and logst are physically identical. I can even restore them from one side to the other:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 11:43:07 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > alter database datafile 4 offline;

Database altered.

Now I copy the datafile from the standby server uhesse2 to the primary server uhesse1 – there are different ways to do that, but scp is one:

SYS@logst > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/logst/users01.dbf

SYS@logst > exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
[oracle@uhesse2 ~]$ scp /u01/app/oracle/oradata/logst/users01.dbf uhesse1:/u01/app/oracle/oradata/prima/users01.dbf
The authenticity of host 'uhesse1 (192.168.56.10)' can't be established.
RSA key fingerprint is e9:e7:5b:8b:b2:33:42:26:89:03:54:0c:16:0d:98:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'uhesse1,192.168.56.10' (RSA) to the list of known hosts.
oracle@uhesse1's password: 
users01.dbf                                                                                               100% 5128KB   5.0MB/s   00:00    
[oracle@uhesse2 ~]$ 

When I try to online the datafile again on prima, it is as if I had restored it from a backup:

SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > recover datafile 4;
Media recovery complete.
SYS@prima > alter database datafile 4 online;

Database altered.

The datafiles and also the archived logfiles are physically identical on both sites here; only the controlfiles are different. v$database (like v$datafile, by the way) derives its content from the controlfile:

SYS@prima > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
PRIMA						   2012613220 PRIMARY

SYS@prima > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
PRIMA						   2012613220 PHYSICAL STANDBY

Now I will convert it into Logical Standby:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 17:29:16 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > exec dbms_logstdby.build

PL/SQL procedure successfully completed.

SYS@prima > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > alter database recover to logical standby logst;

Database altered.

SYS@logst > shutdown immediate
ORA-01507: database not mounted


ORACLE instance shut down.
SYS@logst > startup mount
ORACLE instance started.

Total System Global Area  521936896 bytes
Fixed Size		    2214936 bytes
Variable Size		  314573800 bytes
Database Buffers	  201326592 bytes
Redo Buffers		    3821568 bytes
Database mounted.
SYS@logst > alter database open resetlogs;

Database altered.

SYS@logst > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
LOGST						   3156487356 LOGICAL STANDBY

SYS@logst > exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> remove database logst;
Removed database "logst" from the configuration
DGMGRL> add database logst as connect identifier is logst;
Database "logst" added
DGMGRL> enable database logst;
Enabled.
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
    prima - Primary database
    logst - Logical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

One significant change is that the DBID and the name are now different from the primary database, as you can see above. And the datafiles are no longer physically identical:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 17:38:56 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > alter database datafile 4 offline;

Database altered.

SYS@prima > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > host cp /u01/app/oracle/oradata/prima/users01.dbf /u01/app/oracle/oradata/prima/users01.old

I copy the original file because I know that the restore from logst will not work. It is just to show my point:

[oracle@uhesse2 ~]$ scp /u01/app/oracle/oradata/logst/users01.dbf uhesse1:/u01/app/oracle/oradata/prima/users01.dbf
oracle@uhesse1's password: 
users01.dbf                                                                                   100% 5128KB   5.0MB/s   00:00    
SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01122: database file 4 failed verification check
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'
ORA-01206: file is not part of this database - wrong database id

Exactly. logst is now an autonomous database that just happens to be doing (nearly) the same DML as prima. It is no longer the same as prima, Oracle-block-wise. The rowids from prima have no meaning on logst any more:

DGMGRL> edit database logst set state=apply-on;
Succeeded.
SYS@prima > insert into scott.dept values (50,'TEST','TEST');
insert into scott.dept values (50,'TEST','TEST')
                  *
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > host cp /u01/app/oracle/oradata/prima/users01.old /u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > recover datafile 4;
Media recovery complete.
SYS@prima > alter database datafile 4 online;

Database altered.

SYS@prima > insert into scott.dept values (50,'TEST','TEST');

1 row created.

SYS@prima > commit;
Commit complete.

SYS@prima > select rowid,dept.* from scott.dept where deptno=50;

ROWID		       DEPTNO DNAME	     LOC
------------------ ---------- -------------- -------------
AAADS8AAEAAAACNAAE	   50 TEST	     TEST

This rowid is what we normally record in the redo entries and it would be sufficient to retrieve that row on the primary and also on a physical standby where we do “Redo Apply” (another term for “recover database”). But that rowid is different on logst:

SYS@logst > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > select rowid,dept.* from scott.dept where deptno=50;

ROWID		       DEPTNO DNAME	     LOC
------------------ ---------- -------------- -------------
AAADS8AAEAAAACOAAA	   50 TEST	     TEST
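
You can decompose the two rowids to see which components differ, as a sketch using the standard DBMS_ROWID package (run it on each database):

-- break a rowid into data object id, file, block and row number
select dbms_rowid.rowid_object(rowid)       data_object_id,
       dbms_rowid.rowid_relative_fno(rowid) relative_fno,
       dbms_rowid.rowid_block_number(rowid) block_no,
       dbms_rowid.rowid_row_number(rowid)   row_no
from   scott.dept
where  deptno = 50;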

That is why we need to put additional information – supplemental log data – into the redo entries on the primary. It will help the SQL Apply mechanism to retrieve the row there:

[Figure: Logical Standby Architecture]

As shown in the picture, the supplemental log data contains, in addition to the rowid, at least the primary/unique key of the modified row. In the absence of primary/unique keys, every column of a modified row is written into the redo logs. That may impact the performance of the primary database. Another serious drawback of Logical Standby is that not every datatype and not every operation on the primary is supported by the SQL Apply mechanism. The number of unsupported datatypes decreases version by version, though.
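When preparing a primary for SQL Apply, you would typically check and enable supplemental logging and review the objects that SQL Apply cannot maintain. A sketch of the relevant statements on the primary:

SYS@prima > select supplemental_log_data_pk, supplemental_log_data_ui from v$database;
SYS@prima > alter database add supplemental log data (primary key, unique index) columns;
SYS@prima > select distinct owner, table_name from dba_logstdby_unsupported order by 1,2;

The last query lists the tables that SQL Apply will skip because of unsupported datatypes.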

The demo and the sketch above are from my presentation about Transient Logical Standby at the Oracle University Expert Summit 2015 in Dubai – really an amazing location! Hope you find it useful :-)


Tagged: Data Guard
Categories: DBA Blogs

Database Flashback -- 5

Hemant K Chitale - Fri, 2015-02-20 10:15
Continuing my series on Database Flashback.

Here I demonstrate that Flashback Logs alone are not sufficient.  The process of FLASHBACK DATABASE does need *some* redo entries from Archive/Online Redo Logs.
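To follow along, the database must run in ArchiveLog mode with Flashback Database enabled. A minimal check (it should return ARCHIVELOG and YES):

SYS>select log_mode, flashback_on from v$database;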



[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Fri Feb 20 23:18:37 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
20-FEB-15

SYS>show parameter db_flashback_retention_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14573526 17-FEB-15             1440       78561280                415383552

SYS>

My database seems to have adequate Flashback capability in the Flashback Logs.  Let me run some transactions to generate Redo and Flashback Logs.

SYS>connect hemant/hemant
Connected.
HEMANT>drop table obj_list;
drop table obj_list
*
ERROR at line 1:
ORA-00942: table or view does not exist


HEMANT>create table obj_list tablespace users
2 as select * from dba_objects where 1=2;

Table created.

HEMANT>select tablespace_name
2 from user_tables
3 where table_name = 'OBJ_LIST';

TABLESPACE_NAME
------------------------------
USERS

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects;

225138 rows created.

HEMANT>rollback;

Rollback complete.

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects;

225135 rows created.

HEMANT>delete obj_list;

225135 rows deleted.

HEMANT>
HEMANT>select count(*)
2 from v$archived_log
3 where first_time >
4 (select startup_time
5 from v$instance)
6 /

COUNT(*)
----------
2

HEMANT>
HEMANT>alter system switch logfile;

System altered.

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 union all
8 select * from dba_objects;

300180 rows created.

HEMANT>insert into obj_list
2 select * from obj_list;

300180 rows created.

HEMANT>select count(*) from obj_list;

COUNT(*)
----------
600360

HEMANT>commit;

Commit complete.

HEMANT>select count(*) from v$archived_log
2 where first_time >
3 (select startup_time from v$instance)
4 /

COUNT(*)
----------
4

HEMANT>

Now, let's suppose that a scheduled (periodic) archivelog backup job kicks in, backs up the archivelogs and then deletes them.

HEMANT>exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:30:36 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> backup as compressed backupset archivelog all delete input;

Starting backup at 20-FEB-15
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=58 device type=DISK
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=106 STAMP=872033779
channel ORA_DISK_1: starting piece 1 at 20-FEB-15
channel ORA_DISK_1: finished piece 1 at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnshxs_.bkp tag=TAG20150220T233055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_18/o1_mf_1_3_bg9dcl3v_.arc RECID=106 STAMP=872033779
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=112 STAMP=872205844
input archived log thread=1 sequence=4 RECID=113 STAMP=872205859
input archived log thread=1 sequence=5 RECID=114 STAMP=872205867
input archived log thread=1 sequence=6 RECID=115 STAMP=872206048
input archived log thread=1 sequence=7 RECID=116 STAMP=872206098
input archived log thread=1 sequence=8 RECID=117 STAMP=872206254
channel ORA_DISK_1: starting piece 1 at 20-FEB-15
channel ORA_DISK_1: finished piece 1 at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_3_bggndmz4_.arc RECID=112 STAMP=872205844
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_4_bggnf0qy_.arc RECID=113 STAMP=872205859
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_5_bggnf8ty_.arc RECID=114 STAMP=872205867
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_6_bggnlzvr_.arc RECID=115 STAMP=872206048
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_7_bggnnk6c_.arc RECID=116 STAMP=872206098
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_8_bggnsg5k_.arc RECID=117 STAMP=872206254
Finished backup at 20-FEB-15

Starting Control File and SPFILE Autobackup at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/autobackup/2015_02_20/o1_mf_s_872206274_bggnt34k_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 20-FEB-15

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$

Now, suppose that I need to Flashback the database.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Fri Feb 20 23:32:36 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14577571 19-FEB-15             1440      135348224               3784998912

SYS>select to_char(oldest_flashback_time,'DD-MON HH24:MI:SS')
2 from V$flashback_database_log;

TO_CHAR(OLDEST_FLASHBACK
------------------------
19-FEB 00:04:02

SYS>

Notice how the OLDEST_FLASHBACK_TIME has changed from 17-Feb to the midnight of 18/19-Feb ! Apparently, my FRA cannot hold very many Flashback Logs.
As I have mentioned in two earlier posts, the scope of the actual ability to Flashback may vary.
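To see what is actually consuming the FRA, a per-file-type breakdown helps. A sketch using v$recovery_area_usage (called v$flash_recovery_area_usage in releases before 11gR2):

SYS>select file_type, percent_space_used, percent_space_reclaimable
  2  from v$recovery_area_usage;

A high PERCENT_SPACE_USED for the FLASHBACK LOG file type, with little reclaimable space, would explain why the oldest Flashback Logs get aged out.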

Can I flashback to SCN 14577572 ?  Let me give it a try.

SYS>shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>flashback database to SCN 14577572;
flashback database to SCN 14577572
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577571 to SCN 14577572
ORA-38761: redo log sequence 1 in thread 1, incarnation 5 could not be accessed


SYS>

Aaha ! Apparently, Oracle needs to read some redo from Archive Log(s) !  So, having Flashback Logs alone is *not* sufficient.  I know that I need the database to be running in ArchiveLog mode.  But I should also know that if I want to Flashback to a particular Time or SCN, I will need the corresponding ArchiveLog as well !  (Imagine trying to Flashback to 3 days ago, having purged all ArchiveLogs simply because I do daily Full Backups and have a Retention of 2 days only !)
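How do I know which ArchiveLogs to restore ? Every archived log covers a range of SCNs, recorded in v$archived_log, so a query like this sketch identifies the sequence that contains my target SCN:

SYS>select sequence#, first_change#, next_change#
  2  from v$archived_log
  3  where 14577572 between first_change# and next_change# - 1;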

So, I must take the necessary action now.

SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:43:43 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 1 until sequence 2;

Starting restore at 20-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=1
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001619_bg9gpq87_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001619_bg9gpq87_.bkp tag=TAG20150219T001619
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=2
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001930_bg9gwl9w_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001930_bg9gwl9w_.bkp tag=TAG20150219T001930
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 20-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14577572
SYS>/

Flashback complete.

SYS>alter database open read only;

Database altered.

SYS>

What if I want to Flashback to another, later time ?

SYS>select sequence#, first_change#, to_char(first_time,'DD-MON HH24:MI:SS')
2 from v$archived_log
3 where first_time > trunc(sysdate)
4 order by 1;

 SEQUENCE# FIRST_CHANGE# TO_CHAR(FIRST_TIME,'DD-M
---------- ------------- ------------------------
         4      14580736 20-FEB 23:24:03
         5      14581651 20-FEB 23:24:16
         6      14583536 20-FEB 23:24:23
         7      14584203 20-FEB 23:27:27
         8      14584351 20-FEB 23:28:17

SYS>
SYS>shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>flashback database to SCN 14584205;
flashback database to SCN 14584205
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577573 to SCN 14584205
ORA-38761: redo log sequence 5 in thread 1, incarnation 5 could not be accessed


SYS>
SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:53:48 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 5 until sequence 6;

Starting restore at 20-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=5
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=6
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 20-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ cd /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20
sh-3.2$ ls -l
total 63344
-rw-rw---- 1 oracle oracle 494592 Feb 20 23:44 o1_mf_1_1_bggol5t8_.arc
-rw-rw---- 1 oracle oracle 18432 Feb 20 23:44 o1_mf_1_2_bggol6wm_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 20 23:54 o1_mf_1_5_bggp4zbp_.arc
-rw-rw---- 1 oracle oracle 13635072 Feb 20 23:54 o1_mf_1_6_bggp4zh3_.arc
sh-3.2$

I have Sequences 1 and 2 that were restored for the first flashback and Sequences 5 and 6 that have been restored now. Do I need Sequences 3 and 4 ? Do I need Sequence 7 (that contains the Redo beyond SCN 14584203) ?

Let's see.
sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14584205
SYS>/
flashback database to SCN 14584205
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577573 to SCN 14584205
ORA-38761: redo log sequence 3 in thread 1, incarnation 5 could not be accessed


SYS>
Apparently, I also need Sequence 3 at least, and (maybe ?) Sequence 4 ! Why ? Because my database is currently at an SCN lower than the one I want to Flashback to, and the corresponding redo is required. (If I had *not* done the first Flashback to the lower SCN, I wouldn't need these ArchiveLogs !)

SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Feb 21 00:01:34 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 3 until sequence 4;

Starting restore at 21-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=17 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=3
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=4
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 21-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ cd /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20
sh-3.2$ ls -l
total 63344
-rw-rw---- 1 oracle oracle 494592 Feb 20 23:44 o1_mf_1_1_bggol5t8_.arc
-rw-rw---- 1 oracle oracle 18432 Feb 20 23:44 o1_mf_1_2_bggol6wm_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 20 23:54 o1_mf_1_5_bggp4zbp_.arc
-rw-rw---- 1 oracle oracle 13635072 Feb 20 23:54 o1_mf_1_6_bggp4zh3_.arc
sh-3.2$ cd ../*21
sh-3.2$ ls -l
total 80272
-rw-rw---- 1 oracle oracle 31472640 Feb 21 00:01 o1_mf_1_3_bggpmf06_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 21 00:01 o1_mf_1_4_bggpmdxk_.arc
sh-3.2$

Here is something important (nothing to do with Flashback Database).  The server clock went past midnight into 21-Feb, so the restored files went into the 2015_02_21 directory and not into 2015_02_20 !

sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14584205
SYS>/

Flashback complete.

SYS>

Remember my question about whether I would need Sequence 7 ?  Let's see what the alert.log shows about the Flashback Database actions.

Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_2_bggol6wm_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_21/o1_mf_1_3_bggpmf06_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_21/o1_mf_1_4_bggpmdxk_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_5_bggp4zbp_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_6_bggp4zh3_.arc
Recovery of Online Redo Log: Thread 1 Group 1 Seq 7 Reading mem 0
Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo01.log
Incomplete Recovery applied until change 14584206 time 02/20/2015 23:28:03
Flashback Media Recovery Complete
Completed: flashback database to SCN 14584205

Another learning point !  Sequence 7 was applied from the Online Redo Log file.

What we have learned :
1.  If we Flashback the database to a particular SCN / Time / Restore Point, Oracle does need the Redo from the Archive / Online Redo Log file that was active at that time.  It still needs some Redo to make the database consistent (e.g. to apply Undo).

2. If we Flashback the database to SCN 101 and then (without OPEN RESETLOGS) choose to Flashback to a subsequent SCN 201, we again need ArchiveLogs !

3. Flashback from the SQL command-line is intelligent enough to use the Online Redo Log, but (unlike, say, RMAN's RECOVER DATABASE) it will not automatically restore ArchiveLogs as they are required ! See the sketch below.
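To illustrate point 3 : had I issued the Flashback from RMAN instead of SQL*Plus, RMAN would have restored the required ArchiveLogs from the backupsets automatically. A sketch of the equivalent RMAN command:

RMAN> flashback database to scn 14584205;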




Categories: DBA Blogs

Oracle MAF - assigning value to a backing bean property from javascript

Communicating between layers in MAF is quite easy with the provided APIs. For calling a Java method from javascript we have the invokeMethod js function, which is quite straightforward: ...

We share our skills to maximize your revenue!
Categories: DBA Blogs