Feed aggregator

What is an LP? (EP vs LP)

VitalSoftTech - Thu, 2019-10-24 09:45

If you’re an aspiring musician or independent artist looking to promote your work in the music industry, you’ve probably wondered at one point or another: what is an LP? You might have heard of EPs as well, bringing up an even bigger question in your head: EP vs. LP? Which format for music promotion is […]

The post What is an LP? (EP vs LP) appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Autonomous Data Warehouse Speeds Up Decision Making at Telecom Fiji

Oracle Press Releases - Thu, 2019-10-24 09:00
Blog
Oracle Autonomous Data Warehouse Speeds Up Decision Making at Telecom Fiji

By Peter Schutt, Senior Director, Oracle—Oct 24, 2019

Telecom Fiji, one of the largest communications providers in the remote South Pacific Fiji archipelago, needed to accelerate its digital transformation to meet the increasing demand for connectivity from local residents, businesses and tourists.

To speed up decision making to enhance customer service, Telecom Fiji is leveraging AI-powered algorithms from Oracle’s cloud-based Autonomous Data Warehouse and Analytics to integrate and sift through increasing amounts of data from multiple sources to produce actionable insights in minutes.

In the past, the IT staff spent weeks on manually intensive processes of aggregating and correlating the data, which raised the risk of errors being introduced into the results. Reports were also often late. The database administrators wrote SQL queries to generate raw data and then fed those results back to an analytics team for correlation in spreadsheets.

The autonomous database provisions easily and quickly in minutes to improve time to market, self-patches to eliminate downtime for maintenance, and auto-scales capacity on demand for flexibility to maximize performance and minimize costs. Telecom Fiji would have had to spend hundreds of thousands of dollars to get equally powerful, or less powerful, equipment on premises in its data center to power the same kind of data warehouse.

The front-end Oracle Analytics platform is used to create dashboards to visualize and detail sales and marketing performance, product performance, service usage trends, service-delivery performance, and other key indicators.

“We can ask our IT staff to come up with ideas or innovative solutions on how they can use the analytics. Then they will be performing more of an advisory role to management,” said Shalvin Narayan, Head of IT, Telecom Fiji. “It did surpass my expectations because the amount of functionality and performance it was providing, with respect to the amount of investment was marvelous.”

Watch the Telecom Fiji Video

In this video, Shalvin Narayan, Head of IT for Telecom Fiji, shares how the company is using Autonomous Database to produce real-time actionable insights in minutes rather than weeks.

Read More Stories from Oracle Cloud

Telecom Fiji is one of thousands of customers on their journey to the cloud. Read about others in Stories from Oracle Cloud: Business Successes

Subquery

Dominic Brooks - Thu, 2019-10-24 07:11


with x as (select sysdate from dual)
select * from utter_bollocks.x;

Oracle Named a Leader in Translytical Data Platforms by Major Research Firm

Oracle Press Releases - Thu, 2019-10-24 07:00
Press Release
Oracle Named a Leader in Translytical Data Platforms by Major Research Firm
According to the report, customers like Oracle’s general data security capabilities, technical support and capability to support many workloads

Redwood Shores, Calif.—Oct 24, 2019

Today, Forrester Research named Oracle a leader in translytical data platforms, a database category which can handle a wide range of transactional and analytic workloads. “The Forrester Wave™: Translytical Data Platforms, Q4 2019” report[1] cites that, “unlike other vendors, Oracle uses a dual-format database (row and columns for the same table) to deliver optimal translytical performance,” and that “customers like Oracle’s capability to support many workloads including OLTP, IoT, microservices, multimodel, data science, AI/ML, spatial, graph, and analytics.”

“Oracle is pleased to be acknowledged as a leader by Forrester for its translytical database capabilities,” said Juan Loaiza, Executive Vice President, Mission-Critical Database Technology, Oracle. “Oracle is setting the bar for the industry by offering a single converged database that brings together both transaction processing and analytic capabilities to provide our customers a mission-critical translytical data platform. Our converged database also provides built-in machine learning, spatial, graph, and JSON capabilities enabling our customers to easily run many kinds of workloads against the same data.”

The report states that Translytical is a hot, emerging market that delivers a unified data platform to support all kinds of workloads. The sweet spot is the ability to perform all of these workloads within a single database, leveraging innovation in in-memory, multimodel, distributed, and cloud architectures. Translytical databases can support various use cases including real-time insights, machine learning, streaming analytics, extreme transactional processing, and operational reporting.

Forrester reports on Oracle’s translytical platform for Oracle environments. The report cites Oracle Database In-Memory, an option that extends Oracle Database to support analytics in the same database as the one running transactions. Existing Oracle applications do not require any changes in order to leverage Oracle Database In-Memory. Unlike other vendors, Oracle uses a dual-format database (row and columnar representations for the same table) to deliver optimal translytical performance. In addition, Oracle leverages the Oracle Exadata appliance that supports a large-scale flash cache to perform fast in-memory and in-flash columnar operations. Customers like Oracle’s capability to support many workloads including OLTP, IoT, microservices, multimodel, data science, AI/ML, spatial, graph, and analytics.

According to the report, customers like the platform’s ease of use, ease of expanding existing Oracle applications to take advantage of translytics, general data security capabilities, and technical support, as well as the Cloud at Customer offering.

Download a complimentary copy of “The Forrester Wave™: Translytical Data Platforms, Q4 2019.”

[1] Source: “The Forrester Wave™: Translytical Data Platforms Q4 2019,” 23 October 2019.

Contact Info
Victoria Brown
Oracle
+1.650.850.2009
victoria.brown@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle's products may change and remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Victoria Brown

  • +1.650.850.2009

SEO for Single Page (SPA) Oracle JET Application

Andrejus Baranovski - Thu, 2019-10-24 02:49
This is a quick tip post related to Oracle JET.

SEO (search engine optimization) is a must for a public website or public-facing app. Search engines index static HTML pages well. Oracle JET, on the contrary, follows the SPA (single page application) approach with a dynamic UI. This makes it harder for search engines to build an index for such an app. The same applies to any other SPA implementation with Angular, React, Vue.js, etc.

If you want to build SEO support for Oracle JET, solutions similar to those for Angular, React or Vue.js SPAs can be applied.

One of the simplest and most reliable solutions is to generate static HTML pages for the Oracle JET SPA app. You can follow this article, which explains how the same can be done for Angular - SEO for Single Page Applications.

Clustering_Factor

Jonathan Lewis - Wed, 2019-10-23 15:56

A few days ago I published a little note about a script I wrote some time ago to estimate the clustering_factor of an index before it had been built. At the time I pointed out that one of its limitations was that it would not handle cases where you were planning to set the table_cached_blocks preference, but a couple of days later I decided that I’d write another version of the code that would cater for the new feature – and that’s how I made an embarrassing discovery.

Having reviewed a number of notes I’ve published about the table_cached_blocks preference and its impact on the clustering_factor, I’ve realised that what I’ve written has always been open to two interpretations – the one that I had in mind as I was writing, and the correct one. I made this discovery because I had written a simple SQL statement – using the match_recognize() mechanism – to do what I considered to be the appropriate calculation. After testing the query with a few sets of sample data that produced the correct results I emailed Stew Ashton (my “go-to” person for match_recognize() questions) asking if he would do a sanity check on the code because it was rather slow and I wondered if there was a better way of writing it.

His reply was roughly:

“I’ve read the notes you and Richard Foote have written about the clustering_factor and table_cached_blocks, and this isn’t doing what your description says it should.”

Then he explained what he had inferred from what I had written … and it made more sense than what I had been thinking when I wrote it. He also supplied some code to implement his interpretation – so I designed a couple of data models that would produce the wrong prediction for whichever piece of code implemented the wrong interpretation. His code gave the right answers, mine didn’t.

So here’s the difference in interpretation – the wrong one first – using 16 as a discussion value for the table_cached_blocks:

  • WRONG interpretation: As you walk through the index entries in order, remember the last 16 rowids you’ve seen (that’s the rowids for the table rows that the index entries point to). If the current rowid has a block id component that doesn’t match the block id from one of the remembered 16 rowids then increment the counter for the clustering_factor.
    • The simplicity of this algorithm means you can fix a “circular” array of 16 entries and keep walking around the circle, overwriting the oldest entry each time you read a new one. It’s a pity that it’s the wrong idea because there’s a simple (though massively CPU-intensive) match_recognize() strategy for implementing it – and if you were using an internal library mechanism during a proper gather_index_stats() it could be incredibly efficient.
  • RIGHT interpretation: set up an array for 16 block ids, each with an associated “row-number”. Walk through the index in order, giving each entry a row-number as you go. Extract the block id from the current entry and search through the array for a matching block id. If you find a match then update its entry with the current row-number (so you can remember how recently you saw the block id); if you don’t find a match then replace the entry that has the smallest (i.e. furthest into the past) row-number with the current block id and row-number, and increment the counter for the clustering_factor.

The first piece of code that Stew Ashton sent me was an anonymous PL/SQL block that included some hard-coded fragments and embedded SQL to use a test table and index that I had defined, but he then sent a second piece of code that creates a generic function that uses dynamic SQL to construct a query against a table and an index definition that you want to test. The latter is the code I’ve published (with permission) below:


create or replace function predict_clustering_factor(
/*
Function to predict the clustering factor of an index,
taking into account the intended value of
the TABLE_CACHED_BLOCKS parameter of DBMS_STATS.SET_TABLE_PREFS.

Input is the table name, the list of column names
and the intended value of TABLE_CACHED_BLOCKS.

The function collects the last N block ids (not the last N entries).
When there is no more room, it increments the clustering factor
and replaces the least recently used block id with the current one.

Note: here a "block id" is a rowid with the row_number portion set to 0.
It is effectively a "truncated" rowid.
*/
  p_table_name in varchar2,
  p_column_list in varchar2,
  p_table_cached_blocks in number
) return number authid current_user is

  rc sys_refcursor;
  type tt_rids is table of rowid;
  lt_rids tt_rids;
  
  type t_block_list is record(
    rid rowid,
    last_hit number
  );

  type tt_block_list is table of t_block_list;
  lt_block_list tt_block_list := new tt_block_list();

  l_rid rowid;
  l_clustering_factor number := 0;
  b_block_found boolean;
  l_rn number := 0;
  l_oldest_hit number;
  i_oldest_hit binary_integer := 0;
  
  function truncated_rid(p_rid in rowid) return rowid is
    rowid_type number;
    object_number NUMBER;
    relative_fno NUMBER;
    block_number NUMBER;
    row_number NUMBER;
    rid rowid;

  begin

    DBMS_ROWID.ROWID_INFO (
      p_rid,
      rowid_type,
      object_number,
      relative_fno,
      block_number,
      row_number
    );

    rid := DBMS_ROWID.ROWID_CREATE (
      rowid_type,
      object_number,
      relative_fno,
      block_number,
      0
    );

    return rid;

  end truncated_rid;
  
begin
  if p_table_cached_blocks != trunc(p_table_cached_blocks)
  or p_table_cached_blocks not between 1 and 255 then
    raise_application_error(
      -20001, 
      'input parameter p_table_cached_blocks must be an integer between 1 and 255'
    );
  end if;

  open rc for 'select rowid from '||p_table_name||' order by '||p_column_list||', rowid';
  loop
    fetch rc bulk collect into lt_rids limit 1000;

    for irid in 1..lt_rids.count loop
      l_rn := l_rn + 1;
      l_rid := truncated_rid(lt_rids(irid));
      b_block_found := false;
      l_oldest_hit := l_rn;

      if l_rn = 1 then
        l_clustering_factor := l_clustering_factor + 1;
        lt_block_list.extend;
        lt_block_list(1).rid := l_rid;
        lt_block_list(1).last_hit := l_rn;

      else

        for i in 1..lt_block_list.count loop
          if l_oldest_hit > lt_block_list(i).last_hit then
            l_oldest_hit := lt_block_list(i).last_hit;
            i_oldest_hit := i;
          end if;
          if lt_block_list(i).rid = l_rid then
            b_block_found := true;
            lt_block_list(i).last_hit := l_rn;
            exit;
          end if;
        end loop;

        if not b_block_found then
          l_clustering_factor := l_clustering_factor + 1;
          if lt_block_list.count < p_table_cached_blocks then
            lt_block_list.extend;
            lt_block_list(lt_block_list.count).rid := l_rid;
            lt_block_list(lt_block_list.count).last_hit := l_rn; 
          else         
            lt_block_list(i_oldest_hit).rid := l_rid;
            lt_block_list(i_oldest_hit).last_hit := l_rn;
          end if;
        end if;

      end if;

    end loop;
    exit when rc%notfound;
  end loop;

  close rc;
  return l_clustering_factor;

exception when others then
  if rc%isopen then
    close rc;
  end if;
  raise;

end predict_clustering_factor;
/

After executing the above to create the function, here’s an example of usage:

rem
rem     Script:         clustering_factor_est_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2019
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        cast(rownum as varchar2(10))            v1,
        trunc(dbms_random.value(0,10000))       rand,
        rpad('x',100,'x')                       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
/

-- -------------------------------------------------------------------

SQL> execute dbms_output.put_line('Predicted cf for t1(rand, id): ' || predict_clustering_factor('t1','rand, id',16))
Predicted cf for t1(rand, id): 997218
Elapsed: 00:00:07.54

SQL> execute dbms_output.put_line('Predicted cf for t1(rand, id): ' || predict_clustering_factor('t1','rand, id',255))
Predicted cf for t1(rand, id): 985607
Elapsed: 00:00:50.61

You’ll notice that the larger the setting for the “table_cached_blocks” parameter, the more time it takes to predict the clustering_factor – and it was all CPU time in my example. This isn’t surprising given the need to search through an array holding the previous history. In this example the table t1 holds 1,000,000 rows, and the number and scatter of distinct values is so arranged that the code will hardly ever find a cached block id – essentially it’s the sort of index that isn’t going to cause much confusion to the optimizer and isn’t likely to need special attention to make the optimizer use it when it should and ignore it when it’s inappropriate.

Finally a cut-n-paste to show the accuracy of the two predictions:

SQL> create index t1_i on t1(rand, id);
Elapsed: 00:00:02.96

SQL> execute dbms_stats.set_table_prefs(null,'t1','table_cached_blocks',16)
Elapsed: 00:00:00.01

SQL> execute dbms_stats.gather_index_stats(null,'t1_i')
Elapsed: 00:00:09.55

SQL> select clustering_factor from user_indexes where index_name = 'T1_I';

CLUSTERING_FACTOR
-----------------
           997218

Elapsed: 00:00:00.11

SQL> execute dbms_stats.set_table_prefs(null,'t1','table_cached_blocks',255)
Elapsed: 00:00:00.01

SQL> execute dbms_stats.gather_index_stats(null,'t1_i')
Elapsed: 00:00:07.80

SQL> select clustering_factor from user_indexes where index_name = 'T1_I';

CLUSTERING_FACTOR
-----------------
           985607

Elapsed: 00:00:00.00

Both match perfectly – but you might notice that creating the index and gathering the stats was much faster than predicting the clustering factor for the case where we set table_cached_blocks = 255.

(If you’re wondering, my “simple but irrelevant” match_recognize() query took 370 CPU seconds to complete for table_cached_blocks = 200 – and a limit on match_recognize() meant that 200 was the maximum value I was allowed to use – so now you know why I emailed Stew Ashton. And just for lagniappe, he also told me about a simple workaround for the 200 limit.)

Multiple SQL statements in a single Execute Immediate

Tom Kyte - Wed, 2019-10-23 15:46
What is the exact syntax to be able to execute multiple SQL statements from within a single execute immediate statement?
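A minimal sketch of the usual answer (not the AskTom reply itself; the table names t, t1 and t2 are illustrative): EXECUTE IMMEDIATE runs exactly one SQL statement or one PL/SQL block, so multiple DML statements can be wrapped in an anonymous block, while DDL needs one call per statement.

begin
  -- several DML statements bundled into one anonymous block,
  -- passed as a single unit to a single execute immediate call
  execute immediate '
    begin
      insert into t(id) values (1);
      delete from t where id = 0;
    end;';

  -- DDL cannot appear inside a PL/SQL block, so issue separate calls
  execute immediate 'create table t1 (id number)';
  execute immediate 'create table t2 (id number)';
end;
/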
Categories: DBA Blogs

Can I get a single table where each column consists of a query (each query returning three rows with ids)?

Tom Kyte - Wed, 2019-10-23 15:46
Hi, I have 27 different queries, all of which yield 3 rows max. Something like this: SELECT CUSTOMER_NUMBER FROM customer WHERE name = 'SomeName' order by CUSTOMER_NUMBER fetch first 3 rows only; and SELECT CUSTOM...
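The teaser is truncated, but a minimal sketch of one common approach is to rank each result set with rownum and line the ranks up with full outer joins ('OtherName' and the reduction to two queries are assumptions for illustration):

with q1 as (
  select customer_number c1, rownum rn
  from (select customer_number from customer
        where name = 'SomeName'
        order by customer_number
        fetch first 3 rows only)
),
q2 as (
  select customer_number c2, rownum rn
  from (select customer_number from customer
        where name = 'OtherName'
        order by customer_number
        fetch first 3 rows only)
)
-- one row per rank, one column per query; extend the pattern for all 27 queries
select q1.c1, q2.c2
from q1 full outer join q2 on q1.rn = q2.rn;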
Categories: DBA Blogs

Database Event Error Tracking

Tom Kyte - Wed, 2019-10-23 15:46
Hi Guys, I have a schema with multiple schema objects like Procedures, Functions, Triggers and Packages. While testing via the application, if any DB error occurs we need to check the log to identify errors. Is there any way to create a single trig...
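The question is cut off, but the classic building block for this is an AFTER SERVERERROR database trigger; a hedged sketch (the table and trigger names are illustrative, and creating it requires the ADMINISTER DATABASE TRIGGER privilege):

create table error_log (
  logged_at timestamp,
  db_user   varchar2(128),
  err_stack varchar2(4000)
);

create or replace trigger trg_log_server_errors
after servererror on database
declare
  pragma autonomous_transaction;  -- keep the logging independent of the failing transaction
begin
  insert into error_log
  values (systimestamp,
          sys_context('userenv', 'session_user'),
          dbms_utility.format_error_stack);
  commit;
end;
/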
Categories: DBA Blogs

XML Aggregation

Tom Kyte - Wed, 2019-10-23 15:46
Consider: with data as (select 'MH' initials, to_date('01092019','ddmmyyyy') cal_date, 23 quantity from dual union all select 'MH' initials, to_date('02092019','ddmmyyyy') cal_date, 18 quantity fro...
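The sample data and the desired output are truncated in the feed, so the shape below is an assumption; a minimal XMLELEMENT/XMLAGG sketch over the visible rows:

with data as (
  select 'MH' initials, to_date('01092019','ddmmyyyy') cal_date, 23 quantity from dual
  union all
  select 'MH' initials, to_date('02092019','ddmmyyyy') cal_date, 18 quantity from dual
)
select xmlelement("person",
         xmlattributes(initials as "initials"),
         xmlagg(
           xmlelement("day",
             xmlattributes(to_char(cal_date,'YYYY-MM-DD') as "date"),
             quantity)
           order by cal_date)) xml_doc
from data
group by initials;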
Categories: DBA Blogs

Remove a hint with DBMS_ADVANCED_REWRITE and DBMS_SQL_TRANSLATOR failed

Tom Kyte - Wed, 2019-10-23 15:46
Hello Masters, I am testing the two packages DBMS_ADVANCED_REWRITE and DBMS_SQL_TRANSLATOR and they work fine: for example I can remove an ORDER BY from a SELECT. But there is always an exception; I have problems with hints: I want to remove...
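The question itself is truncated; for the case the poster says does work (removing an ORDER BY), a hedged sketch with DBMS_ADVANCED_REWRITE could look like this (emp and the equivalence name are illustrative; the package needs explicit grants, and the session to be rewritten must run with query_rewrite_integrity = trusted):

begin
  sys.dbms_advanced_rewrite.declare_rewrite_equivalence(
    name             => 'drop_order_by_demo',
    source_stmt      => 'select ename from emp order by ename',
    destination_stmt => 'select ename from emp',
    validate         => false,
    rewrite_mode     => 'text_match');
end;
/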
Categories: DBA Blogs

Constraining JSON Data

Tom Kyte - Wed, 2019-10-23 15:46
Hello AskTom-Team, is there a possibility to constrain and check JSON data? For example: insert into test(questionnaire_id, var_val, year) '{ "f1": "2571", "f11": "38124", "f31": "332.64", "f4...
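The insert in the question is truncated, but the standard way to constrain a column to valid JSON (12.1+) is an IS JSON check constraint; a hedged sketch, with column types assumed from the visible fragment:

create table test (
  questionnaire_id number,
  var_val          varchar2(4000)
                   constraint test_var_val_json check (var_val is json),
  year             number
);

insert into test values (1, '{ "f1": "2571" }', 2019);  -- accepted
insert into test values (2, 'not json', 2019);          -- rejected: ORA-02290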
Categories: DBA Blogs

Oracle Health Sciences and Phlexglobal Collaborate to Reduce Operational Bottlenecks in Clinical Research

Oracle Press Releases - Wed, 2019-10-23 07:00
Press Release
Oracle Health Sciences and Phlexglobal Collaborate to Reduce Operational Bottlenecks in Clinical Research
Integration aids timeliness and quality in regulatory compliance, enhancing audit readiness

Redwood Shores, Calif.—Oct 23, 2019

Oracle Health Sciences, a leader in eClinical technology, and Phlexglobal, pioneers in the provision of Trial Master File (TMF) technology and services for the global life sciences industry, have announced enhanced integrations to accelerate the speed and accuracy of regulatory compliance and inspection readiness in clinical trials.

A TMF is a requirement that all sponsors and contract research organizations (CROs) must meet to assure that the rights, safety and well-being of trial subjects are protected, that their trials are being conducted in accordance with Good Clinical Practice (GCP) principles, and that the clinical trial data is credible to ensure audit readiness.

Phlexglobal’s PhlexEview TMF management system uses automation to streamline processes, has an open architecture to ensure easy data integration, and provides advanced functionality designed to better facilitate TMF Management.

The Oracle Health Sciences eClinical platform, integrated with the TMF, provides a comprehensive, proven and best-in-class clinical trial ecosystem for life science organizations. Oracle Health Sciences provides key integration components (i.e., data staging, business rule compliance, target data staging, and API processing) to ensure seamless and successful integrations with its applications.

Standards are Critical

Other eClinical solutions only support clinical trial functions in silos. The ability to bring together the collective institutional memory and the facilitation between systems and team members is critical, and addresses an unmet need. The challenges in integration of eClinical systems stem from the lack of standards.

Phlexglobal’s Chief Strategy Officer Karen Roy is highly active in the TMF Reference Model, an industry-led initiative to define these standards—the TMF Reference Model is now widely adopted as the TMF structure of choice. In addition, the group has developed an exchange mechanism for vendor interoperability. This critical initiative establishes a common language (both business and technical exchange mechanism) enabling different organizational departments and companies to automate the exchange of information, which satisfies end-to-end requirements and validation.

The adoption of industry standards is critical to plug-and-play coexistence in the eClinical ecosystem, and what is required is a deep understanding of the right type of integration at the right time for the different integration points. The TMF Reference Model Exchange mechanism facilitates these integrations by providing a common language amongst all eClinical vendors.

Eliminating inefficiencies and bottlenecks associated with disparate data silos is critical to the goal of reducing cycle times in clinical trials. Today, regulatory authorities are placing more emphasis on the contemporariness and timeliness of the TMF content they receive from sponsors and CROs. To achieve this, it is critical to ensure and document content quality, as specified in the ALCOA-C guidance, prior to the content being transferred to the TMF. Together, Oracle Health Sciences and Phlexglobal address this challenge.

“Our integration with Oracle Health Sciences’ industry-leading applications will save organizations significant resources, as documents only have to be checked once early in the process, which will ensure that any issues can be found and corrected in a timely manner enhancing overall quality,” said John McNeill, CEO of Phlexglobal. “Our collaboration represents a leap forward in improving clinical trials and controlling runaway costs and timelines. Our life science customers will be able to leverage industry KPIs to support business process optimization and business intelligence in clinical study startup, strengthening our regulatory compliance and global trial oversight capabilities.”

“Adoption of industry standards is critical to plug-and-play coexistence in the eClinical ecosystem,” said James Streeter, global vice president life sciences product strategy, Oracle Health Sciences. “What’s required is a deep understanding of the right type of integration at the right time for the different integration points. The TMF Reference Model Exchange Mechanism facilitates these integrations by providing a common language amongst all eClinical vendors. We strongly support this initiative.”

Phlexglobal is a Silver level member of Oracle PartnerNetwork (OPN).

Phlexglobal and Oracle representatives will be available to discuss this collaboration at:

Contact Info
Chris Englerth
Phlexglobal
+1 215.622.8798
cenglerth@phlexglobal.com
Judi Palmer
Oracle
+1 650.506.0266
judi.palmer@oracle.com
About Phlexglobal

Phlexglobal are world-renowned thought leaders and specialists in the provision of electronic Trial Master File (eTMF) systems and services. They offer a unique combination of eTMF technology with PhlexEview – their market-leading eTMF system – plus quality services and specialist resources that deliver a range of flexible, targeted solutions to meet your business needs. Phlexglobal is headquartered in the UK, with offices in Malvern, PA, Durham, NC, and Lublin, Poland.

About Oracle Health Sciences

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes to bring new drugs to market faster. As a leader in Life Sciences technology, Oracle Health Sciences is trusted by 30 of the top 30 pharma, 10 of the top 10 biotech and 10 of the top 10 CROs for managing clinical trials and pharmacovigilance around the globe.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Chris Englerth

  • +1 215.622.8798

Judi Palmer

  • +1 650.506.0266

Adding a dbvisit standby database on the ODA in a non-OMF environment

Yann Neuhaus - Wed, 2019-10-23 06:04

I have recently been working on a customer project where I was challenged to add a dbvisit standby database on an ODA X7-2M named ODA03. The existing customer environment was composed of an Oracle Standard Edition 12.2 database. The primary database, myDB, runs on a server named srv02 and uses a non-OMF configuration. On the ODA side we are working with an OMF configuration. The dbvisit version available at that time was version 8; you should know that version 9 is currently the latest one and brings some cool new features. Through this blog I would like to share with you my experience, the problem I faced, and the solution I put in place.

Preparing the instance on the ODA

First of all I created an instance-only database on the ODA.

[root@ODA03 ~]# odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status   
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
ec33e32a-37d1-4d0d-8c40-b358dcf5660c     OraDB12201_home1     12.2.0.1.180717                          /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

[root@ODA03 ~]# odacli create-database -m -u myDB_03 -dn domain.name -n myDB -r ACFS -io -dh ec33e32a-37d1-4d0d-8c40-b358dcf5660c
Password for SYS,SYSTEM and PDB Admin:

Job details
----------------------------------------------------------------
                     ID:  96fd4d07-4604-4158-9c25-702c01f4493e
            Description:  Database service creation with db name: myDB
                 Status:  Created
                Created:  May 15, 2019 4:29:15 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@ODA03 ~]# odacli describe-job -i 96fd4d07-4604-4158-9c25-702c01f4493e

Job details
----------------------------------------------------------------
                     ID:  96fd4d07-4604-4158-9c25-702c01f4493e
            Description:  Database service creation with db name: myDB
                 Status:  Success
                Created:  May 15, 2019 4:29:15 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance               May 15, 2019 4:29:16 PM CEST        May 15, 2019 4:29:16 PM CEST        Success
Creating volume datmyDB                    May 15, 2019 4:29:16 PM CEST        May 15, 2019 4:29:38 PM CEST        Success
Creating volume reco                     May 15, 2019 4:29:38 PM CEST        May 15, 2019 4:30:00 PM CEST        Success
Creating ACFS filesystem for DATA        May 15, 2019 4:30:00 PM CEST        May 15, 2019 4:30:17 PM CEST        Success
Creating ACFS filesystem for RECO        May 15, 2019 4:30:17 PM CEST        May 15, 2019 4:30:35 PM CEST        Success
Database Service creation                May 15, 2019 4:30:35 PM CEST        May 15, 2019 4:30:51 PM CEST        Success
Auxiliary Instance Creation              May 15, 2019 4:30:35 PM CEST        May 15, 2019 4:30:47 PM CEST        Success
password file creation                   May 15, 2019 4:30:47 PM CEST        May 15, 2019 4:30:49 PM CEST        Success
archive and redo log location creation   May 15, 2019 4:30:49 PM CEST        May 15, 2019 4:30:49 PM CEST        Success
updating the Database version            May 15, 2019 4:30:49 PM CEST        May 15, 2019 4:30:51 PM CEST        Success

The next steps are really common DBA operations (a rough sketch follows the list):

  • Create a pfile from the current primary database
  • Transfer the pfile to the ODA
  • Update the pfile as needed (path, db_unique_name, …)
  • Create a spfile from the pfile on the new ODA database
  • Apply ODA specific instance parameters
  • Copy or create the password file with the same password
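As a minimal sketch of those steps, assuming the primary runs on an spfile (all paths below are illustrative, not the exact ones from this project):

-- On the primary server, create a pfile from the running instance
SQL> create pfile='/tmp/initmyDB.ora' from spfile;

-- Copy the pfile (and, if wanted, the password file) to the ODA
oracle@srv02:~/ [myDB] scp /tmp/initmyDB.ora oracle@ODA03:/tmp/initmyDB.ora

-- On the ODA, once the pfile has been adjusted, create the spfile from it
SQL> create spfile from pfile='/tmp/initmyDB.ora';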

The following parameters are mandatory on the ODA instance:
*.db_create_file_dest='/u02/app/oracle/oradata/myDB_03'
*.db_create_online_log_dest_1='/u03/app/oracle/redo'
*.db_recovery_file_dest='/u03/app/oracle/fast_recovery_area'

Also, all the convert parameters should be removed, as using convert parameters is incompatible with OMF.
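If the pfile you brought over still contains them, removing them is just a matter of deleting those lines; on a running instance, a sketch like this would do the same:

SQL> alter system reset db_file_name_convert scope=spfile sid='*';
SQL> alter system reset log_file_name_convert scope=spfile sid='*';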

Creating the standby database using dbvisit

I first tried to use dbvisit to create the standby database.

As a usual and common dbvisit operation, I first created the DDC configuration file from the primary server:

oracle@srv02:/u01/app/dbvisit/standby/ [myDB] ./dbvctl -o setup
...
...
...
Below are the list of configuration variables provided during the setup process:

Configuration Variable             Value Provided
======================             ==============
ORACLE_SID                         myDB
ORACLE_HOME                        /opt/oracle/product/12.2.0

SOURCE                             srv02
ARCHSOURCE                         /u03/app/oracle/dbvisit_arch/myDB
RAC_DR                             N
USE_SSH                            N
DESTINATION                        ODA03
NETPORT                            7890
DBVISIT_BASE_DR                    /u01/app/dbvisit
ORACLE_HOME_DR                     /u01/app/oracle/product/12.2.0.1/dbhome_1
DB_UNIQUE_NAME_DR                  myDB_03
ARCHDEST                           /u03/app/oracle/dbvisit_arch/myDB
ORACLE_SID_DR                      myDB
ENV_FILE                           myDBSTD1

Are these variables correct?  [Yes]:
...
...
...

I then used this DDC configuration file to create the standby database:

oracle@srv02:/u01/app/dbvisit/standby/ [myDB] ./dbvctl -d myDBSTD1 --csd


-------------------------------------------------------------------------------

INIT ORA PARAMETERS
-------------------------------------------------------------------------------
*              audit_file_dest                         /u01/app/oracle/admin/myDB/adump
*              compatible                              12.2.0
*              control_management_pack_access          NONE
*              db_block_size                           8192
*              db_create_file_dest                     /u02/app/oracle/oradata/myDB_03
*              db_create_online_log_dest_1             /u03/app/oracle/redo
*              db_domain
*              db_name                                 myDB
*              db_recovery_file_dest                   /u03/app/oracle/fast_recovery_area
*              db_recovery_file_dest_size              240G
*              db_unique_name                          myDB_03
*              diagnostic_dest                         /u01/app/oracle
*              dispatchers                             (PROTOCOL=TCP) (SERVICE=myDBXDB)
*              instance_mode                           READ-WRITE
*              java_pool_size                          268435456
*              log_archive_dest_1                      LOCATION=USE_DB_RECOVERY_FILE_DEST
*              open_cursors                            3000
*              optimizer_features_enable               12.2.0.1
*              pga_aggregate_target                    4194304000
*              processes                               8000
*              remote_login_passwordfile               EXCLUSIVE
*              resource_limit                          TRUE
*              sessions                                7552
*              sga_max_size                            53687091200
*              sga_target                              26843545600
*              shared_pool_reserved_size               117440512
*              spfile                                  OS default
*              statistics_level                        TYPICAL
*              undo_retention                          300
*              undo_tablespace                         UNDOTBS1

-------------------------------------------------------------------------------

Status: VALID

What would you like to do:
   1 - Create standby database using existing saved template
   2 - View content of existing saved template
   3 - Return to the previous menu
   Please enter your choice [1]:

This operation failed with the following errors:

Cannot create standby data or temp file /usr/oracle/oradata/myDB/myDB_bi_temp01.dbf for
primary file /usr/oracle/oradata/myDB/myDB_bi_temp01.dbf as location /usr/oracle/oradata/myDB
does not exist on the standby.

As per the dbvisit documentation, dbvisit standby is certified on the ODA and fully compatible with non-OMF and OMF databases. This is correct; the only restriction is that the whole environment needs to use the same configuration. That’s to say, if the primary is OMF, the standby is expected to be OMF, and if the primary is running a non-OMF configuration, the standby should be using non-OMF as well.
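As a side note, a quick heuristic to check which datafiles already follow the OMF naming convention (OMF-generated file names contain the 'o1_mf' pattern) could look like this sketch, which is not part of the original procedure:

select file_name,
       case when file_name like '%o1_mf%' then 'OMF' else 'non-OMF' end naming
from   dba_data_files;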

Using RMAN

I decided to duplicate the database using RMAN and a backup that I transferred locally to the ODA. The backup was the previous nightly inc0 backup. Before running the RMAN duplication I executed a final archive log backup to make sure the most recent archive logs would be included in the duplication.

I’m taking this opportunity to highlight that, thanks to the ODA’s NVMe technology, the duplication of the 3 TB database without multiple channels (Standard Edition) took only a bit more than 2 hours. On the existing servers this took about 10 hours.

I added the following TNS entry to the tnsnames.ora:

myDBSRV3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ODA03.domain.name)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = myDB)
      (UR = A)
    )
  )

Of course I could have used a local connection instead.

I made sure the database was in NOMOUNT status and ran the RMAN duplication:

oracle@ODA03:/opt/oracle/backup/ [myDB] rmanh

Recovery Manager: Release 12.2.0.1.0 - Production on Mon May 20 13:24:29 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect auxiliary sys@myDBSRV3

auxiliary database Password:
connected to auxiliary database: myDB (not mounted)

RMAN> run {
2> duplicate target database for standby dorecover backup location '/opt/oracle/backup/myDB';
3> }

Starting Duplicate Db at 20-MAY-2019 13:25:51

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/u03/app/oracle/redo/myDB_03/controlfile/o1_mf_gg4qvpnn_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   restore clone standby controlfile from  '/opt/oracle/backup/myDB/ctl_myDB_myDB_s108013_p1_newbak.ctl';
}
executing Memory Script

sql statement: alter system set  control_files =   ''/u03/app/oracle/redo/myDB_03/controlfile/o1_mf_gg4qvpnn_.ctl'' comment= ''Set by RMAN'' scope=spfile

Starting restore at 20-MAY-2019 13:25:51
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=9186 device type=DISK

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u03/app/oracle/redo/myDB_03/controlfile/o1_mf_gg4qvpnn_.ctl
Finished restore at 20-MAY-2019 13:25:52

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=9186 device type=DISK

contents of Memory Script:
{
   set until scn  49713361973;
   set newname for clone tempfile  1 to new;
   set newname for clone tempfile  2 to new;
   switch clone tempfile all;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  2 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   set newname for clone datafile  7 to new;
   set newname for clone datafile  8 to new;
   set newname for clone datafile  10 to new;
   set newname for clone datafile  11 to new;
   set newname for clone datafile  12 to new;
   set newname for clone datafile  13 to new;
   set newname for clone datafile  14 to new;
   set newname for clone datafile  15 to new;
   set newname for clone datafile  16 to new;
   set newname for clone datafile  17 to new;
   set newname for clone datafile  18 to new;
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 2 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lx_bi_te_%u_.tmp in control file

executing command: SET NEWNAME

...
...
...

executing command: SET NEWNAME

Starting restore at 20-MAY-2019 13:25:57
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00005 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lxdataid_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_renderz2_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00007 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lx_ods_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00008 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00013 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_renderzs_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00015 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lx_stagi_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /opt/oracle/backup/myDB/inc0_myDB_s107963_p1
...
...
...
archived log file name=/opt/oracle/backup/myDB/1_58043_987102791.dbf thread=1 sequence=58043
archived log file name=/opt/oracle/backup/myDB/1_58044_987102791.dbf thread=1 sequence=58044
archived log file name=/opt/oracle/backup/myDB/1_58045_987102791.dbf thread=1 sequence=58045
archived log file name=/opt/oracle/backup/myDB/1_58046_987102791.dbf thread=1 sequence=58046
archived log file name=/opt/oracle/backup/myDB/1_58047_987102791.dbf thread=1 sequence=58047
archived log file name=/opt/oracle/backup/myDB/1_58048_987102791.dbf thread=1 sequence=58048
archived log file name=/opt/oracle/backup/myDB/1_58049_987102791.dbf thread=1 sequence=58049
archived log file name=/opt/oracle/backup/myDB/1_58050_987102791.dbf thread=1 sequence=58050
media recovery complete, elapsed time: 00:12:40
Finished recover at 20-MAY-2019 16:06:22
Finished Duplicate Db at 20-MAY-2019 16:06:39

I could then check that my standby database had been successfully created on the ODA:

oracle@ODA03:/u01/app/oracle/local/dmk/etc/ [myDB] myDB
********* dbi services Ltd. *********
STATUS                 : MOUNTED
DB_UNIQUE_NAME         : myDB_03
OPEN_MODE              : MOUNTED
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PHYSICAL STANDBY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
CDB Enabled            : NO
*************************************

As a personal note, I found using Oracle RMAN more convenient for duplicating a database. Although the dbvisit scripts and tooling are really stable, I think RMAN gives you more flexibility.

Registering the database in the grid cluster

As a next step I registered the database in the grid:

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [LX] srvctl add database -db MyDB_03 -oraclehome /u01/app/oracle/product/12.2.0.1/dbhome_1 -dbtype SINGLE -instance MyDB -domain team-w.local -spfile /u02/app/oracle/oradata/MyDB_03/dbs/spfileMyDB.ora -pwfile /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/orapwMyDB -role PHYSICAL_STANDBY -startoption MOUNT -stopoption IMMEDIATE -dbname MyDB -node ODA03 -acfspath "/u02/app/oracle/oradata/MyDB_03,/u03/app/oracle"

I stopped the database:

SQL> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

And started it again with the grid infrastructure:

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] MyDB
********* dbi services Ltd. *********
STATUS          : STOPPED
*************************************

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] srvctl status database -d MyDB_03
Instance MyDB is not running on node ODA03

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] srvctl start database -d MyDB_03

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] srvctl status database -d MyDB_03
Instance MyDB is running on node ODA03
dbvisit synchronization

We now have our standby database created on the ODA. We just need to synchronize it with the primary.

Run a gap report

Executing a gap report, we can see that the newly created database is running almost 4 hours behind.

oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d myDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 321953)
dbvctl started on srv02: Mon May 20 16:24:35 2019
=============================================================


Dbvisit Standby log gap report for myDB thread 1 at 201905201624:
-------------------------------------------------------------
Destination database on ODA03 is at sequence: 58050.
Source database on srv02 is at log sequence: 58080.
Source database on srv02 is at archived log sequence: 58079.
Dbvisit Standby last transfer log sequence: .
Dbvisit Standby last transfer at: .

Archive log gap for thread 1:  29.
Transfer log gap for thread 1: 58079.
Standby database time lag (DAYS-HH:MI:SS): +03:39:01.


=============================================================
dbvctl ended on srv02: Mon May 20 16:24:40 2019
=============================================================
Send the archive logs from primary to the standby database

I then shipped the last archive logs from the primary database to the newly created standby:

oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d myDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 326409)
dbvctl started on srv02: Mon May 20 16:29:14 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 30. Transfer log gap: 58080
>>> Sending heartbeat message... skipped
>>> First time Dbvisit Standby runs, Dbvisit Standby configuration will be copied to
    ODA03...
>>> Transferring Log file(s) from myDB on srv02 to ODA03 for thread 1:

    thread 1 sequence 58051 (1_58051_987102791.dbf)
    thread 1 sequence 58052 (1_58052_987102791.dbf)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.dbf)
    thread 1 sequence 58080 (1_58080_987102791.dbf)

=============================================================
dbvctl ended on srv02: Mon May 20 16:30:50 2019
=============================================================
Apply archive logs on the standby database

Then I could finally apply the archive logs on the standby database.

oracle@ODA03:/u01/app/dbvisit/standby/ [myDB] ./dbvctl -d myDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 21504)
dbvctl started on ODA03: Mon May 20 16:33:42 2019
=============================================================

>>> Sending heartbeat message... skipped

>>> Applying Log file(s) from srv02 to myDB on ODA03:

    thread 1 sequence 58051 (1_58051_987102791.arc)
    thread 1 sequence 58052 (1_58052_987102791.arc)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.arc)
    thread 1 sequence 58080 (1_58080_987102791.arc)
    Last applied log(s):
    thread 1 sequence 58080

    Next SCN required for recovery 49719323442 generated at 2019-05-20:16:27:09 +02:00.
    Next required log thread 1 sequence 58081

=============================================================
dbvctl ended on ODA03: Mon May 20 16:36:52 2019
=============================================================
Run a gap report

Running a new gap report, we can see that there is no delta between the primary and the standby database.

oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d myDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 335068)
dbvctl started on srv02: Mon May 20 16:37:53 2019
=============================================================


Dbvisit Standby log gap report for myDB thread 1 at 201905201637:
-------------------------------------------------------------
Destination database on ODA03 is at sequence: 58081.
Source database on srv02 is at log sequence: 58082.
Source database on srv02 is at archived log sequence: 58081.
Dbvisit Standby last transfer log sequence: 58081.
Dbvisit Standby last transfer at: 2019-05-20 16:37:36.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:01.


=============================================================
dbvctl ended on srv02: Mon May 20 16:37:57 2019
=============================================================
Preparing the database for switchover

Are we done? Absolutely not. In order to be able to perform a successful switchover, 3 main modifications are mandatory on the non-ODA server (running the non-OMF database):

  • The future database files should be OMF
  • The online redo log should be newly created
  • The temporary file should be newly created

Otherwise you might end up with an unsuccessful switchover showing errors like the ones below:

>>> Starting Switchover between srv02 and ODA03

Running pre-checks       ... failed
No rollback action required

>>> Database on server srv02 is still a Primary Database
>>> Database on server ODA03 is still a Standby Database


<<<>>>
PID:40386
TRACEFILE:40386_dbvctl_switchover_myDBSTD1_201905272153.trc
SERVER:srv02
ERROR_CODE:1
Remote execution error on ODA03.

====================Remote Output start: ODA03=====================
<<<>>>
PID:92292
TRACEFILE:92292_dbvctl_f_gs_get_info_standby_myDBSTD1_201905272153.trc
SERVER:ODA03
ERROR_CODE:2146
Dbvisit Standby cannot proceed:
Cannot create standby data or temp file /usr/oracle/oradata/myDB/temp01.dbf for primary
file /usr/oracle/oradata/myDB/temp01.dbf as location /usr/oracle/oradata/myDB does not
exist on the standby.
Cannot create standby data or temp file /usr/oracle/oradata/myDB/lx_bi_temp01.dbf for
primary file /usr/oracle/oradata/myDB/lx_bi_temp01.dbf as location /usr/oracle/oradata/myDB
does not exist on the standby.
Review the following standby database parameters:
        db_create_file_dest = /u02/app/oracle/oradata/myDB_03
        db_file_name_convert =
>>>> Dbvisit Standby terminated <<<<
>>>> Dbvisit Standby terminated <<<<
Having new OMF configuration

There is no need to convert the full database into OMF. A database can run with both file naming configurations, non-OMF and OMF. We just need the database to work with the OMF configuration from now on, and for this we simply set the appropriate init parameters. In my case the existing primary database was storing all data and redo files in the /opt/oracle/oradata directory.

SQL> alter system set DB_CREATE_FILE_DEST='/opt/oracle/oradata' scope=both;

System altered.

SQL> alter system set DB_CREATE_ONLINE_LOG_DEST_1='/opt/oracle/oradata' scope=both;

System altered.
Refresh the online log

We will create new OMF redo log files as described below.

The current redo log configuration:

SQL> select v$log.group#, member, v$log.status from v$logfile, v$log where v$logfile.group#=v$log.group#;

    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------
        12 /opt/oracle/oradata/myDB/redo12.log                  ACTIVE
        13 /opt/oracle/oradata/myDB/redo13.log                  CURRENT
        15 /opt/oracle/oradata/myDB/redo15.log                  INACTIVE
        16 /opt/oracle/oradata/myDB/redo16.log                  INACTIVE
         1 /opt/oracle/oradata/myDB/redo1.log                   INACTIVE
         2 /opt/oracle/oradata/myDB/redo2.log                   INACTIVE
        17 /opt/oracle/oradata/myDB/redo17.log                  INACTIVE
        18 /opt/oracle/oradata/myDB/redo18.log                  INACTIVE
        19 /opt/oracle/oradata/myDB/redo19.log                  INACTIVE
        20 /opt/oracle/oradata/myDB/redo20.log                  INACTIVE
         3 /opt/oracle/oradata/myDB/redo3.log                   INACTIVE
         4 /opt/oracle/oradata/myDB/redo4.log                   INACTIVE
         5 /opt/oracle/oradata/myDB/redo5.log                   INACTIVE
         6 /opt/oracle/oradata/myDB/redo6.log                   INACTIVE
         7 /opt/oracle/oradata/myDB/redo7.log                   INACTIVE
         8 /opt/oracle/oradata/myDB/redo8.log                   ACTIVE
         9 /opt/oracle/oradata/myDB/redo9.log                   ACTIVE
        10 /opt/oracle/oradata/myDB/redo10.log                  ACTIVE
        11 /opt/oracle/oradata/myDB/redo11.log                  ACTIVE
        14 /opt/oracle/oradata/myDB/redo14.log                  INACTIVE

For each INACTIVE redo log group, we can drop the group and create it again with the OMF naming convention:

SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database add logfile group 1;

Database altered.

In order to move on to the next redo group and release the current one, we run a log file switch:

SQL> alter system switch logfile;

System altered.

To move the ACTIVE redo logs to INACTIVE we run a checkpoint:

SQL> alter system checkpoint;

System altered.

And then drop and recreate the remaining INACTIVE redo groups:

SQL> alter database drop logfile group 10;

Database altered.

SQL> alter database add logfile group 10;

Database altered.

Finally, all our online logs have the OMF format:

SQL> select v$log.group#, member, v$log.status from v$logfile, v$log where v$logfile.group#=v$log.group# order by group#;

    GROUP# MEMBER                                                       STATUS
---------- ------------------------------------------------------------ ----------
         1 /opt/oracle/oradata/myDB/onlinelog/o1_mf_1_ggqx5zon_.log       INACTIVE
         2 /opt/oracle/oradata/myDB/onlinelog/o1_mf_2_ggqxjky2_.log       INACTIVE
         3 /opt/oracle/oradata/myDB/onlinelog/o1_mf_3_ggqxjodl_.log       INACTIVE
         4 /opt/oracle/oradata/myDB/onlinelog/o1_mf_4_ggqxkddc_.log       INACTIVE
         5 /opt/oracle/oradata/myDB/onlinelog/o1_mf_5_ggqxkj1t_.log       INACTIVE
         6 /opt/oracle/oradata/myDB/onlinelog/o1_mf_6_ggqxkmnm_.log       CURRENT
         7 /opt/oracle/oradata/myDB/onlinelog/o1_mf_7_ggqxn373_.log       UNUSED
         8 /opt/oracle/oradata/myDB/onlinelog/o1_mf_8_ggqxn7b3_.log       UNUSED
         9 /opt/oracle/oradata/myDB/onlinelog/o1_mf_9_ggqxnbxd_.log       UNUSED
        10 /opt/oracle/oradata/myDB/onlinelog/o1_mf_10_ggqxvlbf_.log      UNUSED
        11 /opt/oracle/oradata/myDB/onlinelog/o1_mf_11_ggqxvnyg_.log      UNUSED
        12 /opt/oracle/oradata/myDB/onlinelog/o1_mf_12_ggqxvqyp_.log      UNUSED
        13 /opt/oracle/oradata/myDB/onlinelog/o1_mf_13_ggqxvv2o_.log      UNUSED
        14 /opt/oracle/oradata/myDB/onlinelog/o1_mf_14_ggqxxcq7_.log      UNUSED
        15 /opt/oracle/oradata/myDB/onlinelog/o1_mf_15_ggqxxgfg_.log      UNUSED
        16 /opt/oracle/oradata/myDB/onlinelog/o1_mf_16_ggqxxk67_.log      UNUSED
        17 /opt/oracle/oradata/myDB/onlinelog/o1_mf_17_ggqxypwg_.log      UNUSED
        18 /opt/oracle/oradata/myDB/onlinelog/o1_mf_18_ggqy1z78_.log      UNUSED
        19 /opt/oracle/oradata/myDB/onlinelog/o1_mf_19_ggqy2270_.log      UNUSED
        20 /opt/oracle/oradata/myDB/onlinelog/o1_mf_20_ggqy26bj_.log      UNUSED

20 rows selected.
Refresh temporary file

The database was using 2 temp tablespaces: TEMP and myDB_BI_TEMP.

We first need to add new temp files in OMF format for both tablespaces.

SQL> alter tablespace TEMP add tempfile size 20G;

Tablespace altered.

SQL> alter tablespace myDB_BI_TEMP add tempfile size 20G;

Tablespace altered.

Both tablespaces will now include two files: the previous non-OMF one and a new OMF one:

SQL> @qdbstbsinf.sql
Enter a tablespace name filter (US%): TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
TEMP                 /opt/oracle/oradata/myDB/datafile/o1_mf_temp_ggrjzm9o_.tmp     ONLINE               20480 NO                    0
TEMP                 /usr/oracle/oradata/myDB/temp01.dbf                            ONLINE               20480 NO                    0

SQL> @qdbstbsinf.sql
Enter a tablespace name filter (US%): myDB_BI_TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
myDB_BI_TEMP           /opt/oracle/oradata/myDB/datafile/o1_mf_lx_bi_te_ggrk0wxz_.tmp ONLINE               20480 NO                    0
myDB_BI_TEMP           /usr/oracle/oradata/myDB/lx_bi_temp01.dbf                      ONLINE               20480 YES                5120

Dropping the temporary file straight away ends in an error:

SQL> alter database tempfile '/usr/oracle/oradata/myDB/temp01.dbf' drop including datafiles;
alter database tempfile '/usr/oracle/oradata/myDB/temp01.dbf' drop including datafiles
*
ERROR at line 1:
ORA-25152: TEMPFILE cannot be dropped at this time

We need to restart the database. This will only be possible during the maintenance window scheduled for the switchover.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 5.3687E+10 bytes
Fixed Size                 26330584 bytes
Variable Size            3.3152E+10 bytes
Database Buffers         2.0401E+10 bytes
Redo Buffers              107884544 bytes
Database mounted.
Database opened.

The previous non-OMF temporary files can now be dropped:

SQL>  alter database tempfile '/usr/oracle/oradata/myDB/temp01.dbf' drop including datafiles;

Database altered.

SQL> alter database tempfile '/usr/oracle/oradata/myDB/lx_bi_temp01.dbf' drop including datafiles;

Database altered.

Only OMF temporary files are left now:

SQL>  @qdbstbsinf.sql
Enter a tablespace name filter (US%): TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
TEMP                 /opt/oracle/oradata/myDB/datafile/o1_mf_temp_ggrjzm9o_.tmp     ONLINE               20480 NO                    0

SQL>  @qdbstbsinf.sql
Enter a tablespace name filter (US%): myDB_BI_TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
myDB_BI_TEMP           /opt/oracle/oradata/myDB/datafile/o1_mf_lx_bi_te_ggrk0wxz_.tmp ONLINE               20480 NO                    0

Testing switchover

We are now ready to test the switchover from the current primary on srv02 to the ODA03 server, after making sure both databases are synchronized.

oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 12196)
dbvctl started on srv02: Tue May 28 00:07:34 2019
=============================================================

>>> Starting Switchover between srv02 and ODA03

Running pre-checks       ... done
Pre processing           ... done
Processing primary       ... done
Processing standby       ... done
Converting standby       ... done
Converting primary       ... done
Completing               ... done
Synchronizing            ... done
Post processing          ... done

>>> Graceful switchover completed.
    Primary Database Server: ODA03
    Standby Database Server: srv02

>>> Dbvisit Standby can be run as per normal:
    dbvctl -d MyDBSTD1


PID:12196
TRACE:12196_dbvctl_switchover_MyDBSTD1_201905280007.trc

=============================================================
dbvctl ended on srv02: Tue May 28 00:13:31 2019
=============================================================
Conclusion

With dbvisit standby it is possible to mix non-OMF and OMF databases after completing several manual steps. The final recommendation would still be to run a uniform configuration. This is why, after running the switchover to the new ODA03 database and making sure the new database was stable, we recreated the old primary database on srv02 from scratch with an OMF configuration. Converting a database to OMF with the move datafile option (ALTER DATABASE MOVE DATAFILE) is not possible with Standard Edition.

The article Adding a dbvisit standby database on the ODA in a non-OMF environment appeared first on Blog dbi services.

Having multiple standby databases and cascading with dbvisit

Yann Neuhaus - Wed, 2019-10-23 05:47

Dbvisit Standby is a disaster recovery solution that you can use with Oracle Standard Edition. I have been working on a customer project where I had to set up a system with one primary and two standby databases. One of the standby databases had to run with a gap of 24 hours. Knowing that flashback possibilities are very limited on Standard Edition, this gives the customer the ability to extract and restore data wrongly lost following human error.

The initial configuration is the following:

Database instance, db_name: MyDB
MyDB_02 (db_unique_name): primary database running on the srv02 server.
MyDB_01 (db_unique_name): expected standby database running on the srv01 server.
MyDB_03 (db_unique_name): expected standby database running on the srv03 server.

The following DDC configuration files will be used:
MyDBSTD1: configuration file for the first standby, synchronized every 10 minutes.
MyDBSTD2: configuration file for the second standby, synchronized every 24 hours.

Let me walk you through the steps to set up such a configuration. This article is not intended to show the whole process of implementing a dbvisit solution, but only the steps required to work with multiple standby databases. We will also talk about how to implement a cascaded standby and an apply-lag delay within dbvisit.

Recommendations

In order to limit the manual configuration changes in the DDC files after a switchover, it is recommended to use, as much as possible, the same ORACLE_HOME, archive destination and dbvisit home directory on all servers.

Creating MyDBSTD1 DDC configuration file

The first standby DDC configuration file will be created and used between MyDB_02 (srv02, the primary) and MyDB_03 (srv03, the standby).

oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -o setup


=========================================================

     Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd)
           http://www.dbvisit.com

=========================================================

=>dbvctl only needs to be run on the primary server.

Is this the primary server?  [Yes]:
The following Dbvisit Database configuration (DDC) file(s) found on this
server:

     DDC
     ===
1)   Create New DDC
2)   Cancel

Please enter choice [] : 1

Is this correct?  [Yes]:

...
...
...

Below are the list of configuration variables provided during the setup process:

Configuration Variable             Value Provided
======================             ==============
ORACLE_SID                         MyDB
ORACLE_HOME                        /opt/oracle/product/12.2.0

SOURCE                             srv02
ARCHSOURCE                         /u03/app/oracle/dbvisit_arch/MyDB
RAC_DR                             N
USE_SSH                            N
DESTINATION                        srv03
NETPORT                            7890
DBVISIT_BASE_DR                    /u01/app/dbvisit
ORACLE_HOME_DR                     /u01/app/oracle/product/12.2.0.1/dbhome_1
DB_UNIQUE_NAME_DR                  MyDB_03
ARCHDEST                           /u03/app/oracle/dbvisit_arch/MyDB
ORACLE_SID_DR                      MyDB
ENV_FILE                           MyDBSTD1

Are these variables correct?  [Yes]:

>>> Dbvisit Database configuration (DDC) file MyDBSTD1 created.

>>> Dbvisit Database repository (DDR) MyDB created.
   Repository Version          8.4
   Software Version            8.4
   Repository Status           VALID


Do you want to enter license key for the newly created Dbvisit Database configuration (DDC) file?  [Yes]:

Enter license key and press Enter: []: XXXXXXXXXXXXXXXXXXXXXXXXXXX
>>> Dbvisit Standby License
License Key     : XXXXXXXXXXXXXXXXXXXXXXXXXXX
customer_number : XXXXXX
dbname          : MyDB
expiry_date     : 2099-05-06
product_id      : 8
sequence        : 1
status          : VALID
updated         : YES

PID:423545
TRACE:dbvisit_install.log
Synchronizing both MyDB_02 and MyDB_03
Shipping logs from primary to standby
oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d MyDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 326409)
dbvctl started on srv02: Mon May 20 16:29:14 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 30. Transfer log gap: 58080
>>> Sending heartbeat message... skipped
>>> First time Dbvisit Standby runs, Dbvisit Standby configuration will be copied to
    srv03...
>>> Transferring Log file(s) from MyDB on srv02 to srv03 for thread 1:

    thread 1 sequence 58051 (1_58051_987102791.dbf)
    thread 1 sequence 58052 (1_58052_987102791.dbf)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.dbf)
    thread 1 sequence 58080 (1_58080_987102791.dbf)

=============================================================
dbvctl ended on srv02: Mon May 20 16:30:50 2019
=============================================================
Applying logs on the standby database
oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 21504)
dbvctl started on srv03: Mon May 20 16:33:42 2019
=============================================================

>>> Sending heartbeat message... skipped

>>> Applying Log file(s) from srv02 to MyDB on srv03:

    thread 1 sequence 58051 (1_58051_987102791.arc)
    thread 1 sequence 58052 (1_58052_987102791.arc)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.arc)
    thread 1 sequence 58080 (1_58080_987102791.arc)
    Last applied log(s):
    thread 1 sequence 58080

    Next SCN required for recovery 49719323442 generated at 2019-05-20:16:27:09 +02:00.
    Next required log thread 1 sequence 58081

=============================================================
dbvctl ended on srv03: Mon May 20 16:36:52 2019
=============================================================
Running a gap report
oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 335068)
dbvctl started on srv02: Mon May 20 16:37:53 2019
=============================================================


Dbvisit Standby log gap report for MyDB thread 1 at 201905201637:
-------------------------------------------------------------
Destination database on srv03 is at sequence: 58081.
Source database on srv02 is at log sequence: 58082.
Source database on srv02 is at archived log sequence: 58081.
Dbvisit Standby last transfer log sequence: 58081.
Dbvisit Standby last transfer at: 2019-05-20 16:37:36.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:01.


=============================================================
dbvctl ended on srv02: Mon May 20 16:37:57 2019
=============================================================
Switchover to srv03

At that point in the project we did a switchover to the newly created srv03 in order to test its stability. The switchover was performed as described below, but this step is not mandatory when implementing several standby databases. As a best practice, we always test the first configuration by running a switchover before moving forward.

oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 12196)
dbvctl started on srv02: Tue May 28 00:07:34 2019
=============================================================

>>> Starting Switchover between srv02 and srv03

Running pre-checks       ... done
Pre processing           ... done
Processing primary       ... done
Processing standby       ... done
Converting standby       ... done
Converting primary       ... done
Completing               ... done
Synchronizing            ... done
Post processing          ... done

>>> Graceful switchover completed.
    Primary Database Server: srv03
    Standby Database Server: srv02

>>> Dbvisit Standby can be run as per normal:
    dbvctl -d MyDBSTD1


PID:12196
TRACE:12196_dbvctl_switchover_MyDBSTD1_201905280007.trc

=============================================================
dbvctl ended on srv02: Tue May 28 00:13:31 2019
=============================================================

srv03 is now the new primary and srv02 a new standby database.

Creating MyDBSTD2 DDC configuration file

Once the MyDB_01 standby database is up and running, we can create its related DDC configuration file. To do so, we simply copy the previous DDC configuration file, MyDBSTD1, and update it as needed.

I first transferred the file from the current primary srv03 to the new standby server srv01:

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] scp dbv_MyDBSTD1.env oracle@srv01:$PWD
dbv_MyDBSTD1.env		100% 	23KB 	22.7KB/s 		00:00

I copied it to the new DDC configuration file name:

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] cp dbv_MyDBSTD1.env dbv_MyDBSTD2.env

I updated the new DDC configuration accordingly to have:

  • DESTINATION as srv01 instead of srv02
  • DB_UNIQUE_NAME_DR as MyDB_01 instead of MyDB_02
  • MAILCFG adjusted to see the alerts coming from the STD2 configuration.

The primary will remain the same: srv03.

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] vi dbv_MyDBSTD2.env

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] diff dbv_MyDBSTD1.env dbv_MyDBSTD2.env
86c86
DESTINATION = srv02
---
DESTINATION = srv01
93c93
DB_UNIQUE_NAME_DR = MyDB
---
DB_UNIQUE_NAME_DR = MyDB_01
135,136c135,136
MAILCFG_FROM = dbvisit_conf_1@domain.name
MAILCFG_FROM_DR = dbvisit_conf_1@domain.name
---
MAILCFG_FROM = dbvisit_conf_2@domain.name
MAILCFG_FROM_DR = dbvisit_conf_2@domain.name

In case the ORACLE_HOME and archive destination are not the same on all servers, these parameters have to be updated as well.
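
A sketch of the DDC lines concerned in that case (parameter names as they appear in the file; the values are simply the ones from the setup output above):

ORACLE_HOME_DR = /u01/app/oracle/product/12.2.0.1/dbhome_1
ARCHDEST = /u03/app/oracle/dbvisit_arch/MyDB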

Synchronizing both MyDB_03 and MyDB_01
Shipping logs from primary to standby
oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 25914)
dbvctl started on srv03: Wed Jun  5 20:32:09 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 383. Transfer log gap: 67385
>>> Sending heartbeat message... done
>>> First time Dbvisit Standby runs, Dbvisit Standby configuration will be copied to
    srv01...
>>> Transferring Log file(s) from MyDB on srv03 to srv01 for thread 1:

    thread 1 sequence 67003 (o1_mf_1_67003_ghgwj0z2_.arc)
    thread 1 sequence 67004 (o1_mf_1_67004_ghgwmj1w_.arc)
...
...
...
    thread 1 sequence 67384 (o1_mf_1_67384_ghj2fbgj_.arc)
    thread 1 sequence 67385 (o1_mf_1_67385_ghj2g883_.arc)

=============================================================
dbvctl ended on srv03: Wed Jun  5 20:42:05 2019
=============================================================
Applying logs on the standby database
oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 69764)
dbvctl started on srv01: Wed Jun  5 20:42:45 2019
=============================================================

>>> Sending heartbeat message... done

>>> Applying Log file(s) from srv03 to MyDB on srv01:

    thread 1 sequence 67003 (1_67003_987102791.arc)
    thread 1 sequence 67004 (1_67004_987102791.arc)
...
...
...
    thread 1 sequence 67384 (1_67384_987102791.arc)
    thread 1 sequence 67385 (1_67385_987102791.arc)
    Last applied log(s):
    thread 1 sequence 67385

    Next SCN required for recovery 50112484332 generated at 2019-06-05:20:28:24 +02:00.
    Next required log thread 1 sequence 67386

>>> Dbvisit Archive Management Module (AMM)

    Config: number of archives to keep      = 0
    Config: number of days to keep archives = 3
    Config: diskspace full threshold        = 80%
==========

Processing /u03/app/oracle/dbvisit_arch/MyDB...
    Archive log dir: /u03/app/oracle/dbvisit_arch/MyDB
    Total number of archive files   : 383
    Number of archive logs deleted = 0
    Current Disk percent full       : 8%

=============================================================
dbvctl ended on srv01: Wed Jun  5 21:16:30 2019
=============================================================
Running a gap report
oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 44143)
dbvctl started on srv03: Wed Jun  5 21:17:03 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906052117:
-------------------------------------------------------------
Destination database on srv01 is at sequence: 67385.
Source database on srv03 is at log sequence: 67387.
Source database on srv03 is at archived log sequence: 67386.
Dbvisit Standby last transfer log sequence: 67385.
Dbvisit Standby last transfer at: 2019-06-05 20:42:05.

Archive log gap for thread 1:  1.
Transfer log gap for thread 1: 1.
Standby database time lag (DAYS-HH:MI:SS): +00:48:41.
Switchover to srv01

We now have both the srv01 and srv02 standby databases up and running and connected to the current primary database on srv03. Let's switch over to srv01 and see what the required steps are. After each switchover, the DDC configuration file of the other standby has to be updated manually.

Checking srv03 and srv02 are synchronized

Both the srv03 and srv02 databases should be in sync; otherwise, ship and apply the archive logs first.

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 93307)
dbvctl started on srv03: Wed Jun  5 21:27:02 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906052127:
-------------------------------------------------------------
Destination database on srv02 is at sequence: 67386.
Source database on srv03 is at log sequence: 67387.
Source database on srv03 is at archived log sequence: 67386.
Dbvisit Standby last transfer log sequence: 67386.
Dbvisit Standby last transfer at: 2019-06-05 21:24:47.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:27:02.


=============================================================
dbvctl ended on srv03: Wed Jun  5 21:27:08 2019
=============================================================
Checking srv03 and srv01 are synchronized

Both the srv03 and srv01 databases should be in sync; otherwise, ship and apply the archive logs first.

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 90871)
dbvctl started on srv03: Wed Jun  5 21:26:31 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906052126:
-------------------------------------------------------------
Destination database on srv01 is at sequence: 67386.
Source database on srv03 is at log sequence: 67387.
Source database on srv03 is at archived log sequence: 67386.
Dbvisit Standby last transfer log sequence: 67386.
Dbvisit Standby last transfer at: 2019-06-05 21:26:02.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:26:02.
Switchover to srv01
oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 20334)
dbvctl started on srv03: Wed Jun  5 21:31:56 2019
=============================================================

>>> Starting Switchover between srv03 and srv01

Running pre-checks       ... done
Pre processing           ... done
Processing primary       ... done
Processing standby       ... done
Converting standby       ... done
Converting primary       ... done
Completing               ... done
Synchronizing            ... done
Post processing          ... done

>>> Graceful switchover completed.
    Primary Database Server: srv01
    Standby Database Server: srv03

>>> Dbvisit Standby can be run as per normal:
    dbvctl -d MyDBSTD2


PID:20334
TRACE:20334_dbvctl_switchover_MyDBSTD2_201906052131.trc

=============================================================
dbvctl ended on srv03: Wed Jun  5 21:37:40 2019
=============================================================
Attach srv02 to srv01 (new primary)

Prior to the switchover:

  • srv03 and srv01 were using the MyDBSTD2 DDC configuration file
  • srv03 and srv02 were using the MyDBSTD1 DDC configuration file

The srv02 standby database now needs to be attached to the new primary srv01. For this we will copy the MyDBSTD1 DDC configuration file from srv02 to srv01, as it is the first time srv01 is primary. Otherwise, we would only need to update the already existing file accordingly.

I transferred the DDC file:

oracle@srv02:/u01/app/dbvisit/standby/conf/ [MyDB] scp dbv_MyDBSTD1.env oracle@srv01:$PWD
dbv_MyDBSTD1.env    100%   23KB  14.8MB/s   00:00

The MyDBSTD1 configuration file has been updated accordingly to reflect the new configuration:

  • SOURCE needs to be changed from srv03 to srv01
  • DESTINATION will remain srv02
  • DB_UNIQUE_NAME needs to be changed from MyDB_03 to MyDB_01
  • DB_UNIQUE_NAME_DR will remain MyDB_02

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] vi dbv_MyDBSTD1.env

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] grep ^SOURCE dbv_MyDBSTD1.env
SOURCE = srv01

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] grep DB_UNIQUE_NAME dbv_MyDBSTD1.env
# DB_UNIQUE_NAME      - Primary database db_unique_name
DB_UNIQUE_NAME = MyDB_01
# DB_UNIQUE_NAME_DR   - Standby database db_unique_name
DB_UNIQUE_NAME_DR = MyDB_02
Checking that databases are all synchronized

After performing several log switches on the primary in order to generate archive logs, I transferred and applied the needed archive log files on both the srv02 and srv03 standby databases, and made sure both were synchronized.
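
For reference, a minimal sketch of the commands behind that step (prompts and DDC names as used throughout this post): a few log switches on the primary, one dbvctl run per configuration on the primary to ship the logs, and one on each standby to apply them.

SQL> alter system switch logfile;
SQL> alter system switch logfile;

oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1
oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2
oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1
oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2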

srv01 and srv03 databases:

oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 98156)
dbvctl started on srv01: Wed Jun  5 21:52:08 2019
=============================================================


Dbvisit Standby log gap report for MyDB_01 thread 1 at 201906052152:
-------------------------------------------------------------
Destination database on srv03 is at sequence: 67413.
Source database on srv01 is at log sequence: 67414.
Source database on srv01 is at archived log sequence: 67413.
Dbvisit Standby last transfer log sequence: 67413.
Dbvisit Standby last transfer at: 2019-06-05 21:51:13.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:00.


=============================================================
dbvctl ended on srv01: Wed Jun  5 21:52:18 2019
=============================================================

srv01 and srv02 databases:

oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 100393)
dbvctl started on srv01: Wed Jun  5 21:56:06 2019
=============================================================


Dbvisit Standby log gap report for MyDB_01 thread 1 at 201906052156:
-------------------------------------------------------------
Destination database on srv02 is at sequence: 67413.
Source database on srv01 is at log sequence: 67414.
Source database on srv01 is at archived log sequence: 67413.
Dbvisit Standby last transfer log sequence: 67413.
Dbvisit Standby last transfer at: 2019-06-05 21:55:22.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:05:13.


=============================================================
dbvctl ended on srv01: Wed Jun  5 21:56:07 2019
=============================================================
Apply delay lag

The MyDBSTD2 configuration should eventually have an apply lag of 24 hours. This can be achieved using APPLY_DELAY_LAG_MINUTES in the configuration. In order to test it, I decided with the customer to use a 60-minute delay first.

Update MyDBSTD2 DDC configuration file

The following parameters have been updated in the configuration:
APPLY_DELAY_LAG_MINUTES = 60
DMN_MONITOR_INTERVAL_DR = 0
TRANSFER_LOG_GAP_THRESHOLD = 0
ARCHIVE_LOG_GAP_THRESHOLD = 60

APPLY_DELAY_LAG_MINUTES is the delay in minutes to wait before applying the change vectors.
DMN_MONITOR_INTERVAL_DR is the interval in seconds for the log monitor schedule on the destination. 0 means deactivated.
TRANSFER_LOG_GAP_THRESHOLD is the difference allowed between the last archived sequence on the primary and the last sequence transferred to the standby server.
ARCHIVE_LOG_GAP_THRESHOLD is the difference allowed between the last archived sequence on the primary and the last applied sequence on the standby database before an alert is sent.

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] cp dbv_MyDBSTD2.env dbv_MyDBSTD2.env.201906131343

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] vi dbv_MyDBSTD2.env

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] diff dbv_MyDBSTD2.env dbv_MyDBSTD2.env.201906131343
281c281
DMN_MONITOR_INTERVAL_DR = 0
---
DMN_MONITOR_INTERVAL_DR = 5
331c331
APPLY_DELAY_LAG_MINUTES = 60
---
APPLY_DELAY_LAG_MINUTES = 0
374c374
ARCHIVE_LOG_GAP_THRESHOLD = 60
---
ARCHIVE_LOG_GAP_THRESHOLD = 0

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] grep ^TRANSFER_LOG_GAP_THRESHOLD dbv_MyDBSTD2.env
TRANSFER_LOG_GAP_THRESHOLD = 0
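
Once the 60-minute test is validated, the final 24-hour requirement simply translates to 24 x 60 = 1440 in the same parameter:

APPLY_DELAY_LAG_MINUTES = 1440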
Report displayed with an apply delay lag configured

When generating a report, we can see that there is no gap in the log transfer, as the archive logs are transferred through the crontab every 10 minutes. On the other hand, we can see the expected delay of 60 minutes in applying the logs.
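
For completeness, a sketch of the kind of crontab entries that drive this scheduling (the dbvctl path matches this setup; the exact schedule is an assumption):

# on the primary srv03: ship archive logs for MyDBSTD2 every 10 minutes
*/10 * * * * /u01/app/dbvisit/standby/dbvctl -d MyDBSTD2 >/dev/null 2>&1
# on the standby srv01: try to apply shipped logs every 10 minutes (logs younger than the lag are skipped)
*/10 * * * * /u01/app/dbvisit/standby/dbvctl -d MyDBSTD2 >/dev/null 2>&1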

oracle@srv03:/u01/app/dbvisit/standby/ [MyDBTEST] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 66003)
dbvctl started on srv03: Thu Jun 13 15:21:29 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906131521:
-------------------------------------------------------------
Destination database on srv01 is at sequence: 73856.
Source database on srv03 is at log sequence: 73890.
Source database on srv03 is at archived log sequence: 73889.
Dbvisit Standby last transfer log sequence: 73889.
Dbvisit Standby last transfer at: 2019-06-13 15:20:15.

Archive log gap for thread 1:  33 (apply_delay_lag_minutes=60).
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +01:00:00.


=============================================================
dbvctl ended on srv03: Thu Jun 13 15:21:35 2019
=============================================================
Cascading standby database

What about cascading standby databases? Cascading is possible with dbvisit. We would use a cascaded standby for a reporting server that needs to be updated less frequently, or to offload the primary database from shipping archive logs to multiple standby databases. The cascaded standby database is kept up to date through the first standby. Cascading has been possible since dbvisit version 8.

The following needs to be known:

  • Switchover will not be possible between the primary and the cascaded standby database.
  • The DDC configuration file between the first standby and the cascaded standby needs to have:
    • As SOURCE the first standby database
    • The CASCADE parameter set to Y. This is done automatically when creating the DDC configuration with dbvctl -o setup. In the traces you will see: >>> Source database is a standby database. CASCADE flag will be turned on.
    • The ARCHDEST and ARCHSOURCE locations on the first standby need to have the same values.

    The principle is then exactly the same, and running dbvctl -d from the first standby will ship the archive logs to the cascaded standby.

I ran some tests in my lab.

Environment

DBVP is the primary server.
DBVS is the first standby server.
DBVS2 is the second cascaded server.

oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] DBVPDB
********* dbi services Ltd. *********
STATUS                 : OPEN
DB_UNIQUE_NAME         : DBVPDB_SITE1
OPEN_MODE              : READ WRITE
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PRIMARY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
VERSION                : 12.2.0.1.0
CDB Enabled            : NO
*************************************

oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] DBVPDB
********* dbi services Ltd. *********
STATUS                 : MOUNTED
DB_UNIQUE_NAME         : DBVPDB_SITE2
OPEN_MODE              : MOUNTED
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PHYSICAL STANDBY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
CDB Enabled            : NO
*************************************


oracle@DBVS2:/u01/app/dbvisit/standby/ [DBVPDB] DBVPDB
********* dbi services Ltd. *********
STATUS                 : MOUNTED
DB_UNIQUE_NAME         : DBVPDB_SITE3
OPEN_MODE              : MOUNTED
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PHYSICAL STANDBY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
CDB Enabled            : NO
*************************************
Create cascaded DDC configuration file

The DDC configuration file will be created from the first standby node.
DBVS (first standby server) will be the SOURCE.
DBVS2 (cascaded standby server) will be the DESTINATION.

oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -o setup


=========================================================

     Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b)
           http://www.dbvisit.com

=========================================================

=>dbvctl only needs to be run on the primary server.

Is this the primary server?  [Yes]:
The following Dbvisit Database configuration (DDC) file(s) found on this
server:

     DDC
     ===
1)   Create New DDC
2)   DBVPDB
3)   DBVPDB_SITE1
4)   DBVPOMF_SITE1
5)   Cancel

Please enter choice [] : 1

Is this correct?  [Yes]:

...


Continue ?  [No]: yes

=========================================================
Dbvisit Standby setup begins.
=========================================================
The following Oracle instance(s) have been found on this server:

     SID            ORACLE_HOME
     ===            ===========
1)   rdbms12201     /u01/app/oracle/product/12.2.0/dbhome_1
2)   DBVPDB         /u01/app/oracle/product/12.2.0/dbhome_1
3)   DBVPOMF        /u01/app/oracle/product/12.2.0/dbhome_1
4)   DUP            /u01/app/oracle/product/12.2.0/dbhome_1
5)   Enter own ORACLE_SID and ORACLE_HOME
Please enter choice [] : 2

Is this correct?  [Yes]:
=>ORACLE_SID will be: DBVPDB
=>ORACLE_HOME will be: /u01/app/oracle/product/12.2.0/dbhome_1

>>> Source database is a standby database. CASCADE flag will be turned on.

Yes to continue or No to cancel setup?  [Yes]:

...
...
...

Below are the list of configuration variables provided during the setup process:

Configuration Variable             Value Provided
======================             ==============
ORACLE_SID                         DBVPDB
ORACLE_HOME                        /u01/app/oracle/product/12.2.0/dbhome_1

SOURCE                             DBVS
ARCHSOURCE                         /u90/dbvisit_arch/DBVPDB_SITE2
RAC_DR                             N
USE_SSH                            Y
DESTINATION                        DBVS2
NETPORT                            22
DBVISIT_BASE_DR                    /oracle/u01/app/dbvisit
ORACLE_HOME_DR                     /u01/app/oracle/product/12.2.0/dbhome_1
DB_UNIQUE_NAME_DR                  DBVPDB_SITE3
ARCHDEST                           /u90/dbvisit_arch/DBVPDB_SITE3
ORACLE_SID_DR                      DBVPDB
ENV_FILE                           DBVPDB_CASCADED

Are these variables correct?  [Yes]:

>>> Dbvisit Database configuration (DDC) file DBVPDB_CASCADED created.

>>> Dbvisit Database repository (DDR) already installed.
   Repository Version          8.3
   Software Version            8.3
   Repository Status           VALID


Do you want to enter license key for the newly created Dbvisit Database configuration (DDC) file?  [Yes]:

Enter license key and press Enter: []: 4jo6z-8aaai-u09b6-ijjxe-cxks5-1114a-ozfvp
oracle@dbvs2's password:
>>> Dbvisit Standby License
License Key     : 4jo6z-8aaai-u09b6-ijjxe-cxks5-1114a-ozfvp
customer_number : 1
dbname          :
expiry_date     : 2019-05-29
product_id      : 8
sequence        : 1
status          : VALID
updated         : YES

PID:25571
TRACE:dbvisit_install.log

The dbvisit software could see that the SOURCE is already a standby database and then automatically configured the CASCADE flag to Y.

>>> Source database is a standby database. CASCADE flag will be turned on.
oracle@DBVS:/u01/app/dbvisit/standby/conf/ [DBVPDB] grep CASCADE dbv_DBVPDB_CASCADED.env
# Variable: CASCADE
#      CASCADE = Y
CASCADE = Y
Synchronize first standby with primary
Ship archive logs from primary to first standby
oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 23506)
dbvctl started on DBVP: Wed May 15 01:24:55 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 3. Transfer log gap: 3
>>> Transferring Log file(s) from DBVPDB on DBVP to DBVS for thread 1:

    thread 1 sequence 50 (o1_mf_1_50_gfpmk7sg_.arc)
    thread 1 sequence 51 (o1_mf_1_51_gfpmkc7p_.arc)
    thread 1 sequence 52 (o1_mf_1_52_gfpmkf7w_.arc)

=============================================================
dbvctl ended on DBVP: Wed May 15 01:25:06 2019
=============================================================
Apply archive logs on the first standby
oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 27769)
dbvctl started on DBVS: Wed May 15 01:25:25 2019
=============================================================


>>> Applying Log file(s) from DBVP to DBVPDB on DBVS:

>>> No new logs to apply.
    Last applied log(s):
    thread 1 sequence 52

    Next SCN required for recovery 885547 generated at 2019-05-15:01:24:29 +02:00.
    Next required log thread 1 sequence 53

=============================================================
dbvctl ended on DBVS: Wed May 15 01:25:27 2019
=============================================================
Run a gap report
oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB -i
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 23625)
dbvctl started on DBVP: Wed May 15 01:25:55 2019
=============================================================


Dbvisit Standby log gap report for DBVPDB_SITE1 thread 1 at 201905150125:
-------------------------------------------------------------
Destination database on DBVS is at sequence: 52.
Source database on DBVP is at log sequence: 53.
Source database on DBVP is at archived log sequence: 52.
Dbvisit Standby last transfer log sequence: 52.
Dbvisit Standby last transfer at: 2019-05-15 01:25:06.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:33.


=============================================================
dbvctl ended on DBVP: Wed May 15 01:25:58 2019
=============================================================
Synchronize cascaded standby with first standby
Ship archive logs from first standby to cascaded standby
oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_CASCADED
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 27965)
dbvctl started on DBVS: Wed May 15 01:26:41 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 3. Transfer log gap: 3
>>> Transferring Log file(s) from DBVPDB on DBVS to DBVS2 for thread 1:

    thread 1 sequence 50 (1_50_979494498.arc)
    thread 1 sequence 51 (1_51_979494498.arc)
    thread 1 sequence 52 (1_52_979494498.arc)

=============================================================
dbvctl ended on DBVS: Wed May 15 01:26:49 2019
=============================================================
Apply archive logs on the cascaded standby
oracle@DBVS2:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_CASCADED
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 21118)
dbvctl started on DBVS2: Wed May 15 01:27:21 2019
=============================================================


>>> Applying Log file(s) from DBVS to DBVPDB on DBVS2:

    thread 1 sequence 50 (1_50_979494498.arc)
    thread 1 sequence 51 (1_51_979494498.arc)
    thread 1 sequence 52 (1_52_979494498.arc)
    Last applied log(s):
    thread 1 sequence 52

    Next SCN required for recovery 885547 generated at 2019-05-15:01:24:29 +02:00.
    Next required log thread 1 sequence 53

=============================================================
dbvctl ended on DBVS2: Wed May 15 01:27:33 2019
=============================================================
Run a gap report
oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_CASCADED -i
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 28084)
dbvctl started on DBVS: Wed May 15 01:28:07 2019
=============================================================


Dbvisit Standby log gap report for DBVPDB_SITE2 thread 1 at 201905150128:
-------------------------------------------------------------
Destination database on DBVS2 is at sequence: 52.
Source database on DBVS is at applied log sequence: 52.
Dbvisit Standby last transfer log sequence: 52.
Dbvisit Standby last transfer at: 2019-05-15 01:26:49.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:00.


=============================================================
dbvctl ended on DBVS: Wed May 15 01:28:11 2019
=============================================================
Conclusion

With dbvisit we are able to configure several standby databases, choose an apply-lag delay, and also configure cascaded standby databases. The downside is that the DDC configuration files need to be manually adapted after each switchover.

The article Having multiple standby databases and cascading with dbvisit appeared first on Blog dbi services.

Why you really should use peer authentication in PostgreSQL

Yann Neuhaus - Tue, 2019-10-22 14:36

It is always a bit of a surprise that many people do not know about peer authentication in PostgreSQL. You might ask why that is important, as initdb creates a default pg_hba.conf which does not allow any connections from outside the PostgreSQL server. While that is true, there is another important point to consider.

Let’s assume you executed initdb without any options like this:

postgres@centos8pg:/home/postgres/ [pgdev] mkdir /var/tmp/test
postgres@centos8pg:/home/postgres/ [pgdev] initdb -D /var/tmp/test
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/tmp/test ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Europe/Zurich
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

pg_ctl -D /var/tmp/test -l logfile start

Did you ever notice the warning at the end of the output?

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

You might think that this is not important as only the DBAs will have access to the operating system user postgres (or whatever user you used when you executed initdb). Although this might be true in your case, the server eventually might have other local users. Before creating a new user lets start the instance:

postgres@centos8pg:/home/postgres/ [pgdev] export PGPORT=9999
postgres@centos8pg:/home/postgres/ [pgdev] pg_ctl -D /var/tmp/test/ start -l /dev/null
waiting for server to start.... done
server started

What you really need to be aware of is this:

postgres@centos8pg:/home/postgres/ [pgdev] sudo useradd test
postgres@centos8pg:/home/postgres/ [pgdev] sudo su - test
[test@centos8pg ~]$ /u01/app/postgres/product/DEV/db_1/bin/psql -p 9999 -U postgres postgres
psql (13devel)
Type "help" for help.

postgres=#

… and you are in as the superuser! So any local user can connect as the superuser by default. What you might want to do is this:

postgres@centos8pg:/home/postgres/ [pgdev] sudo chmod o-rwx /u01/app/postgres/product
postgres@centos8pg:/home/postgres/ [pgdev] sudo su - test
Last login: Tue Oct 22 21:19:58 CEST 2019 on pts/0
[test@centos8pg ~]$ /u01/app/postgres/product/DEV/db_1/bin/psql -p 9999 -U postgres postgres
-bash: /u01/app/postgres/product/DEV/db_1/bin/psql: Permission denied

This prevents all other users on the system from executing the psql binary. If you can guarantee that nobody installs psql in another way on the system, that might be sufficient. As soon as psql is available somewhere else on the system, you're lost again:

postgres@centos8pg:/home/postgres/ [pgdev] sudo dnf provides psql
Last metadata expiration check: 0:14:53 ago on Tue 22 Oct 2019 09:09:23 PM CEST.
postgresql-10.6-1.module_el8.0.0+15+f57f353b.x86_64 : PostgreSQL client programs
Repo        : AppStream
Matched from:
Filename    : /usr/bin/psql

postgres@centos8pg:/home/postgres/ [pgdev] sudo dnf install -y postgresql-10.6-1.module_el8.0.0+15+f57f353b.x86_64
[test@centos8pg ~]$ /usr/bin/psql -p 9999 -U postgres -h /tmp postgres
psql (10.6, server 13devel)
WARNING: psql major version 10, server major version 13.
Some psql features might not work.
Type "help" for help.

postgres=#

Not really an option. This is where peer authentication becomes very handy.

postgres@centos8pg:/home/postgres/ [pgdev] sed -i 's/local   all             all                                     trust/local   all             all                                     peer/g' /var/tmp/test/pg_hba.conf

Once you switch from trust to peer for local connections, only an operating system user whose name matches the requested database user (here the postgres user that owns the instance) will be able to connect locally without providing a password, because peer authentication obtains the client's operating system user name from the kernel and compares it to the requested database user name:

postgres@centos8pg:/home/postgres/ [pgdev] pg_ctl -D /var/tmp/test/ reload
server signaled
postgres@centos8pg:/home/postgres/ [pgdev] psql postgres
psql (13devel)
Type "help" for help.

[local]:9999 postgres@postgres=#

Other local users will not be able to connect anymore:

postgres@centos8pg:/home/postgres/ [pgdev] sudo su - test
Last login: Tue Oct 22 21:25:36 CEST 2019 on pts/0
[test@centos8pg ~]$ /usr/bin/psql -p 9999 -U postgres -h /tmp postgres
psql: FATAL:  Peer authentication failed for user "postgres"
[test@centos8pg ~]$

So, please, consider enabling peer authentication or at least go for md5 for local connections as well.
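
If you go for md5 instead, a sketch of the equivalent change (note that you would first have to set a password for the postgres user, otherwise you lock yourself out; 'secret' is just a placeholder):

postgres@centos8pg:/home/postgres/ [pgdev] psql -c "alter user postgres password 'secret'" postgres
postgres@centos8pg:/home/postgres/ [pgdev] sed -i 's/local   all             all                                     peer/local   all             all                                     md5/g' /var/tmp/test/pg_hba.conf
postgres@centos8pg:/home/postgres/ [pgdev] pg_ctl -D /var/tmp/test/ reload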

The article Why you really should use peer authentication in PostgreSQL appeared first on Blog dbi services.

ANSI Plans

Jonathan Lewis - Tue, 2019-10-22 12:59

Here’s a thought that falls somewhere between philosophical and pragmatic. It came up while I was playing around with a problem from the Oracle database forum that was asking about options for rewriting a query with a certain type of predicate. This note isn’t really about that question but the OP supplied a convenient script to demonstrate their requirement and I’ve hi-jacked most of the code for my own purposes so that I can ask the question:

Should the presence of an intermediate view name generated by the optimizer in the course of cost-based query transformation cause two plans, which are otherwise identical and do exactly the same thing, to have different plan hash values ?

To demonstrate the issue let’s start with a simple script to create some data and generate an execution plan.


rem
rem     Script:         or_expand_plans.sql
rem     Author:         Jonathan Lewis
rem     Dated           Oct 2019
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem
rem     Notes:
rem     Data creation copied from ODC
rem     https://community.oracle.com/thread/4297365
rem

create table t as   
select 1 as id, to_date('2019-10-11', 'YYYY-MM-DD') as lastupdated, 'Toronto' as place from dual  
union all  
select 2 as id, to_date('2019-10-12', 'YYYY-MM-DD') as lastupdated, 'Toronto' as place from dual  
union all  
select 3 as id, to_date('2019-10-15', 'YYYY-MM-DD') as lastupdated, 'Toronto' as place from dual  
union all  
select 4 as id, to_date('2019-10-21', 'YYYY-MM-DD') as lastupdated, 'Toronto' as place from dual  
;  

create table t_others as  
select 1 as id, to_date('2019-10-12', 'YYYY-MM-DD') as lastupdated, 'Blue'  as color, 'Zone 7' as zoneid from dual  
union all  
select 2 as id, to_date('2019-10-21', 'YYYY-MM-DD') as lastupdated, 'Red'   as color, 'Zone 7' as zoneid from dual  
union all  
select 3 as id, to_date('2019-10-16', 'YYYY-MM-DD') as lastupdated, 'White' as color, 'Zone 7' as zoneid from dual  
union all  
select 4 as id, to_date('2019-10-17', 'YYYY-MM-DD') as lastupdated, 'Green' as color, 'Zone 7' as zoneid from dual  
;  

create table t_further_info as  
select 1 as id, to_date('2019-10-12', 'YYYY-MM-DD') as lastupdated, 'A' as typeinfo from dual  
union all   
select 2 as id, to_date('2019-10-14', 'YYYY-MM-DD') as lastupdated, 'C' as typeinfo from dual  
union all  
select 3 as id, to_date('2019-10-21', 'YYYY-MM-DD') as lastupdated, 'D' as typeinfo from dual  
union all  
select 4 as id, to_date('2019-10-21', 'YYYY-MM-DD') as lastupdated, 'E' as typeinfo from dual  
;  

prompt  ====================
prompt  "Traditional" syntax
prompt  ====================

explain plan for
select 
        /*+ or_expand */ 
        * 
from
        t,
        t_others        pt,
        t_further_info  fi  
/*
where   (
             t.lastupdated >= to_date('2019-10-21', 'YYYY-MM-DD')   
         or pt.lastupdated >= to_date('2019-10-21', 'YYYY-MM-DD')  
         or fi.lastupdated >= to_date('2019-10-21', 'YYYY-MM-DD')  
        )  
*/
where   to_date('2019-10-21', 'YYYY-MM-DD') <= any(t.lastupdated, pt.lastupdated, fi.lastupdated)   
and     pt.id = t.id  
and     fi.id = t.id  
;

select * from table(dbms_xplan.display(null,null,'outline'));


You’ll see that I have a simple three-table join with the nasty little detail that I have a “non-join” predicate that may require Oracle to check across all three tables before it can decide whether or not a row should be discarded. I’ve shown two variants on a theme – they both have exactly the same effect but the ANY() presentation is just a little bit neater and more compact.

Essentially Oracle can use one of two strategies for this type of query/predicate: the first is to join all three tables, wait until the final join rowsource appears and then apply the check; the second is to split the query into a union all of three separate queries, where each query drives off a different table and selects only the rows from that table that match “its” part of the predicate.

In the latter case the second and third branches of the union all have to be modified to ensure that they discard any rows already returned by preceding parts of the union all; this can mean lots of new predicates appearing that use the lnnvl() function. (Of course there are variants between these two extremes, but the general principle is unchanged.)
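
To make that concrete, here is a hand-written sketch of the expanded shape for this query (not the optimizer's exact internal text): each branch drives off one table's date predicate, and the later branches use lnnvl() to discard rows that an earlier branch has already returned.

select  *
from    t, t_others pt, t_further_info fi
where   pt.id = t.id
and     fi.id = t.id
and     t.lastupdated  >= to_date('2019-10-21', 'YYYY-MM-DD')
union all
select  *
from    t, t_others pt, t_further_info fi
where   pt.id = t.id
and     fi.id = t.id
and     pt.lastupdated >= to_date('2019-10-21', 'YYYY-MM-DD')
and     lnnvl(t.lastupdated  >= to_date('2019-10-21', 'YYYY-MM-DD'))
union all
select  *
from    t, t_others pt, t_further_info fi
where   pt.id = t.id
and     fi.id = t.id
and     fi.lastupdated >= to_date('2019-10-21', 'YYYY-MM-DD')
and     lnnvl(t.lastupdated  >= to_date('2019-10-21', 'YYYY-MM-DD'))
and     lnnvl(pt.lastupdated >= to_date('2019-10-21', 'YYYY-MM-DD'))
;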

The presence of the (incomplete) /*+ or_expand */ hint in my query is there to tell the optimizer that it should attempt to transform the query into the three-part union all. This, by the way, is a 12c feature, though older versions of Oracle could get similar effects in some cases from the /*+ use_concat */ hint. Here’s the plan, with outline, that I got from 12.2.0.1:


Plan hash value: 3181357500

----------------------------------------------------------------------------------------------
| Id  | Operation               | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                    |     3 |   270 |    17   (6)| 00:00:01 |
|   1 |  VIEW                   | VW_ORE_67EF6547    |     3 |   270 |    17   (6)| 00:00:01 |
|   2 |   UNION-ALL             |                    |       |       |            |          |
|*  3 |    HASH JOIN            |                    |     1 |    55 |     6   (0)| 00:00:01 |
|   4 |     MERGE JOIN CARTESIAN|                    |     4 |   144 |     4   (0)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL  | T_FURTHER_INFO     |     1 |    13 |     2   (0)| 00:00:01 |
|   6 |      BUFFER SORT        |                    |     3 |    69 |     2   (0)| 00:00:01 |
|*  7 |       TABLE ACCESS FULL | T_OTHERS           |     3 |    69 |     2   (0)| 00:00:01 |
|*  8 |     TABLE ACCESS FULL   | T                  |     3 |    57 |     2   (0)| 00:00:01 |
|*  9 |    HASH JOIN            |                    |     2 |   182 |    11  (10)| 00:00:01 |
|  10 |     VIEW                | VW_JF_SET$FB5125FC |     2 |   156 |     9  (12)| 00:00:01 |
|  11 |      UNION-ALL          |                    |       |       |            |          |
|* 12 |       HASH JOIN         |                    |     1 |    42 |     4   (0)| 00:00:01 |
|* 13 |        TABLE ACCESS FULL| T_OTHERS           |     1 |    23 |     2   (0)| 00:00:01 |
|* 14 |        TABLE ACCESS FULL| T                  |     3 |    57 |     2   (0)| 00:00:01 |
|* 15 |       HASH JOIN         |                    |     1 |    42 |     4   (0)| 00:00:01 |
|* 16 |        TABLE ACCESS FULL| T                  |     1 |    19 |     2   (0)| 00:00:01 |
|  17 |        TABLE ACCESS FULL| T_OTHERS           |     4 |    92 |     2   (0)| 00:00:01 |
|  18 |     TABLE ACCESS FULL   | T_FURTHER_INFO     |     4 |    52 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      USE_HASH(@"SEL$7C4216F7" "PT"@"SEL$1")
      LEADING(@"SEL$7C4216F7" "T"@"SEL$1" "PT"@"SEL$1")
      FULL(@"SEL$7C4216F7" "PT"@"SEL$1")
      FULL(@"SEL$7C4216F7" "T"@"SEL$1")
      USE_HASH(@"SEL$A4A33BE0" "T"@"SEL$1")
      LEADING(@"SEL$A4A33BE0" "PT"@"SEL$1" "T"@"SEL$1")
      FULL(@"SEL$A4A33BE0" "T"@"SEL$1")
      FULL(@"SEL$A4A33BE0" "PT"@"SEL$1")
      USE_HASH(@"SET$49E1C21B_3" "T"@"SEL$1")
      USE_MERGE_CARTESIAN(@"SET$49E1C21B_3" "PT"@"SEL$1")
      LEADING(@"SET$49E1C21B_3" "FI"@"SEL$1" "PT"@"SEL$1" "T"@"SEL$1")
      FULL(@"SET$49E1C21B_3" "T"@"SEL$1")
      FULL(@"SET$49E1C21B_3" "PT"@"SEL$1")
      FULL(@"SET$49E1C21B_3" "FI"@"SEL$1")
      USE_HASH(@"SEL$5FCD2D3C" "FI"@"SEL$1")
      LEADING(@"SEL$5FCD2D3C" "VW_JF_SET$FB5125FC"@"SEL$81DF0931" "FI"@"SEL$1")
      FULL(@"SEL$5FCD2D3C" "FI"@"SEL$1")
      NO_ACCESS(@"SEL$5FCD2D3C" "VW_JF_SET$FB5125FC"@"SEL$81DF0931")
      NO_ACCESS(@"SEL$67EF6547" "VW_ORE_67EF6547"@"SEL$67EF6547")
      OUTLINE(@"SET$49E1C21B_2")
      OUTLINE(@"SET$49E1C21B_1")
      OUTLINE(@"SEL$1")
      FACTORIZE_JOIN(@"SET$49E1C21B"("FI"@"SET$49E1C21B_2" "FI"@"SET$49E1C21B_1"))
      OUTLINE(@"SET$0E101D56")
      OUTLINE(@"SEL$81DF0931")
      OUTLINE(@"SEL$5AB42CD1")
      OR_EXPAND(@"SEL$1" (1) (2) (3))
      OUTLINE(@"SET$49E1C21B")
      OUTLINE_LEAF(@"SEL$67EF6547")
      FACTORIZE_JOIN(@"SET$49E1C21B"("FI"@"SET$49E1C21B_2" "FI"@"SET$49E1C21B_1"))
      OUTLINE_LEAF(@"SET$0E101D56")
      OUTLINE_LEAF(@"SEL$5FCD2D3C")
      OUTLINE_LEAF(@"SET$FB5125FC")
      OUTLINE_LEAF(@"SEL$A4A33BE0")
      OUTLINE_LEAF(@"SEL$7C4216F7")
      OUTLINE_LEAF(@"SET$49E1C21B_3")
      ALL_ROWS
      DB_VERSION('12.2.0.1')
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("PT"."ID"="T"."ID" AND "FI"."ID"="T"."ID")
   5 - filter("FI"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
   7 - filter(LNNVL("PT"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss')))
   8 - filter(LNNVL("T"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss')))
   9 - access("FI"."ID"="ITEM_1")
  12 - access("PT"."ID"="T"."ID")
  13 - filter("PT"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
  14 - filter(LNNVL("T"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss')))
  15 - access("PT"."ID"="T"."ID")
  16 - filter("T"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))


This is a wonderful demonstration of how brilliant the optimizer can be. The query has gone through several transformations and two of them have very high visibility. First, you can see the name VW_ORE_67EF6547 at operation 1. This is a view name that Oracle generates to express (cost-based) “OR-Expansion”, so the optimizer has clearly obeyed my hint. As a consequence of OR-expansion we can also see several examples of the lnnvl() function appearing in the Predicate Information section of the output; we can also see the hint re-appearing in the completed form of OR_EXPAND(@"SEL$1" (1) (2) (3)) in the Outline Data.

However, we don’t have the union all of three pieces that we might have expected; we have a union all of two pieces and the second piece is a hash join between the table t_further_info and a view called VW_JF_SET$FB5125FC. This view is the result of “join factorization”. The optimizer has taken the 2nd and 3rd sections of our union all view and decided that it would be cost-effective to “factor out” a common table, so this:

select from t_others, t, t_further_info ... where t_others.date_predicate ...
union all
select from t, t_others, t_further_info ... where t.date_predicate ...

changes to this:

select from 
        (
        select from t_others, t   ... where t_others.date_predicate ...
        union all
        select from t, t_others   ... where t.date_predicate ...
        ),
        t_further_info 
where   ...
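
To make that concrete with the tables from this example, here's a hedged sketch of the factorized shape. The inline view plays the role of the generated VW_JF view, and I've trimmed the projection to the join column (reported as ITEM_1 in the Predicate Information), so treat this as an illustration rather than the optimizer's literal internal text:

select  vw_jf.item_1, fi.*
from    (
        select  t.id  item_1            -- branch with the t_others predicate
        from    t_others pt, t
        where   pt.id = t.id
        and     pt.lastupdated >= to_date('2019-10-21','YYYY-MM-DD')
        and     lnnvl(t.lastupdated >= to_date('2019-10-21','YYYY-MM-DD'))
        union all
        select  t.id  item_1            -- branch with the t predicate
        from    t, t_others pt
        where   pt.id = t.id
        and     t.lastupdated >= to_date('2019-10-21','YYYY-MM-DD')
        )               vw_jf,
        t_further_info  fi
where   fi.id = vw_jf.item_1            -- the common join, factored out
;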

Having said all that, I now have to change the code because the original query was written using “ANSI” style joins – like this:


explain plan for
select
        /*+ or_expand */
        * 
from
        t   
inner join 
        t_others       pt  
on      pt.id = t.id  
inner join 
        t_further_info fi  
on      fi.id = t.id  
where
        (
             t.lastupdated >= to_date('2019-10-21', 'YYYY-MM-DD')   
         or pt.lastupdated >= to_date('2019-10-21', 'YYYY-MM-DD')  
         or fi.lastupdated >= to_date('2019-10-21', 'YYYY-MM-DD')  
        )
;

select  * from table(dbms_xplan.display(null,null,'outline'));


In the ANSI example I happen to have used the explicit “OR” list for the date predicates but that’s not really significant. Here’s the plan produced by this query – and the first thing I’d like you to note is the Plan hash value:

Plan hash value: 3309788271

----------------------------------------------------------------------------------------------
| Id  | Operation               | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                    |     3 |   270 |    17   (6)| 00:00:01 |
|   1 |  VIEW                   | VW_ORE_31069B60    |     3 |   270 |    17   (6)| 00:00:01 |
|   2 |   UNION-ALL             |                    |       |       |            |          |
|*  3 |    HASH JOIN            |                    |     1 |    55 |     6   (0)| 00:00:01 |
|   4 |     MERGE JOIN CARTESIAN|                    |     4 |   144 |     4   (0)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL  | T_FURTHER_INFO     |     1 |    13 |     2   (0)| 00:00:01 |
|   6 |      BUFFER SORT        |                    |     3 |    69 |     2   (0)| 00:00:01 |
|*  7 |       TABLE ACCESS FULL | T_OTHERS           |     3 |    69 |     2   (0)| 00:00:01 |
|*  8 |     TABLE ACCESS FULL   | T                  |     3 |    57 |     2   (0)| 00:00:01 |
|*  9 |    HASH JOIN            |                    |     2 |   182 |    11  (10)| 00:00:01 |
|  10 |     VIEW                | VW_JF_SET$997549B1 |     2 |   156 |     9  (12)| 00:00:01 |
|  11 |      UNION-ALL          |                    |       |       |            |          |
|* 12 |       HASH JOIN         |                    |     1 |    42 |     4   (0)| 00:00:01 |
|* 13 |        TABLE ACCESS FULL| T_OTHERS           |     1 |    23 |     2   (0)| 00:00:01 |
|* 14 |        TABLE ACCESS FULL| T                  |     3 |    57 |     2   (0)| 00:00:01 |
|* 15 |       HASH JOIN         |                    |     1 |    42 |     4   (0)| 00:00:01 |
|* 16 |        TABLE ACCESS FULL| T                  |     1 |    19 |     2   (0)| 00:00:01 |
|  17 |        TABLE ACCESS FULL| T_OTHERS           |     4 |    92 |     2   (0)| 00:00:01 |
|  18 |     TABLE ACCESS FULL   | T_FURTHER_INFO     |     4 |    52 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      USE_HASH(@"SEL$D12FC97A" "PT"@"SEL$1")
      LEADING(@"SEL$D12FC97A" "T"@"SEL$1" "PT"@"SEL$1")
      FULL(@"SEL$D12FC97A" "PT"@"SEL$1")
      FULL(@"SEL$D12FC97A" "T"@"SEL$1")
      USE_HASH(@"SEL$09C9729D" "T"@"SEL$1")
      LEADING(@"SEL$09C9729D" "PT"@"SEL$1" "T"@"SEL$1")
      FULL(@"SEL$09C9729D" "T"@"SEL$1")
      FULL(@"SEL$09C9729D" "PT"@"SEL$1")
      USE_HASH(@"SET$E8D85587_3" "T"@"SEL$1")
      USE_MERGE_CARTESIAN(@"SET$E8D85587_3" "PT"@"SEL$1")
      LEADING(@"SET$E8D85587_3" "FI"@"SEL$2" "PT"@"SEL$1" "T"@"SEL$1")
      FULL(@"SET$E8D85587_3" "T"@"SEL$1")
      FULL(@"SET$E8D85587_3" "PT"@"SEL$1")
      FULL(@"SET$E8D85587_3" "FI"@"SEL$2")
      USE_HASH(@"SEL$95B99BAF" "FI"@"SEL$2")
      LEADING(@"SEL$95B99BAF" "VW_JF_SET$997549B1"@"SEL$BB7F1ECF" "FI"@"SEL$2")
      FULL(@"SEL$95B99BAF" "FI"@"SEL$2")
      NO_ACCESS(@"SEL$95B99BAF" "VW_JF_SET$997549B1"@"SEL$BB7F1ECF")
      NO_ACCESS(@"SEL$31069B60" "VW_ORE_31069B60"@"SEL$31069B60")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$58A6D7F6")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SET$E8D85587_2")
      OUTLINE(@"SET$E8D85587_1")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$9E43CB6E")
      FACTORIZE_JOIN(@"SET$E8D85587"("FI"@"SET$E8D85587_2" "FI"@"SET$E8D85587_1"))
      OUTLINE(@"SET$6117B24C")
      OUTLINE(@"SEL$BB7F1ECF")
      OUTLINE(@"SEL$344003E3")
      OR_EXPAND(@"SEL$9E43CB6E" (1) (2) (3))
      OUTLINE(@"SET$E8D85587")
      OUTLINE_LEAF(@"SEL$31069B60")
      FACTORIZE_JOIN(@"SET$E8D85587"("FI"@"SET$E8D85587_2" "FI"@"SET$E8D85587_1"))
      OUTLINE_LEAF(@"SET$6117B24C")
      OUTLINE_LEAF(@"SEL$95B99BAF")
      OUTLINE_LEAF(@"SET$997549B1")
      OUTLINE_LEAF(@"SEL$09C9729D")
      OUTLINE_LEAF(@"SEL$D12FC97A")
      OUTLINE_LEAF(@"SET$E8D85587_3")
      ALL_ROWS
      DB_VERSION('12.2.0.1')
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("FI"."ID"="T"."ID" AND "PT"."ID"="T"."ID")
   5 - filter("FI"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
   7 - filter(LNNVL("PT"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss')))
   8 - filter(LNNVL("T"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss')))
   9 - access("FI"."ID"="ITEM_1")
  12 - access("PT"."ID"="T"."ID")
  13 - filter("PT"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
  14 - filter(LNNVL("T"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss')))
  15 - access("PT"."ID"="T"."ID")
  16 - filter("T"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))


Is the plan for the “ANSI” version of the query the same as the plan for the “traditional” version? And how carefully have you checked – or did you simply note that the two Plan hash values were different and assume the plans must differ?

The plans are the same – in that the mechanics are exactly the same and that ought to be the criterion on which we should judge them. But the hash values are different because of the change from traditional to ANSI syntax. The traditional form of the query starts life with a single query block while the ANSI form is considered to be two query blocks, so the initial ANSI query goes through three stages:


1) select from t1 join t2 join t3

2) select /*+ qb_name(sel$2) */ from (select /*+ qb_name(sel$1) */ from t1, t2), t3

3) select /*+ qb_name(sel$9E43CB6E) */ from t1, t2, t3

So the query is rapidly transformed to the equivalent traditional syntax, but we now have a query block name of SEL$9E43CB6E instead of the SEL$1 that the traditional query (in the absence of a /*+ qb_name() */ hint) would have had. This is why you see the difference in the two or_expand() hints in the Outline Data sections: one reads OR_EXPAND(@"SEL$1" (1) (2) (3)), the other reads OR_EXPAND(@"SEL$9E43CB6E" (1) (2) (3)), and all the subsequent query block name differences follow on from this initial transformation. (Note: the value "sel$9e43cb6e" is derived from the input query block names sel$1 and sel$2 that the new query block is built from.)
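
If you want to see these final query block names directly rather than inferring them from the Outline Data, the 'alias' format option of dbms_xplan (available in these versions) adds a "Query Block Name / Object Alias" section to the report. For example, immediately after either of the explain plan calls above:

select * from table(dbms_xplan.display(null, null, 'alias'));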

You may decide that this mismatch isn’t really terribly important. If you’ve modified the code to switch to ANSI style joins then you may be prepared to put in a little extra effort to check the plan in detail to see that it hasn’t changed; but it would be a lot nicer if the hash value wasn’t dependent on generated view names. You may recall that at one time Oracle had problems with plans that used materialized CTEs (“with” subqueries) because the plan hash value was affected by object names like sys_temp_0fd9d6791_dfc12da. The same principle ought, I think, to apply here.
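
For comparison, here's a minimal sketch of the kind of statement that used to exhibit that CTE problem. The materialize hint is undocumented and is used here purely to force the temp-table transformation; the point is simply that the resulting plan includes a generated SYS_TEMP_... name of the sort that once perturbed the hash value:

with generated_name as (
        select  /*+ materialize */      -- force the temp table transformation
                id
        from    t
)
select  *
from    generated_name
;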

If you don’t mind the ANSI/traditional switch, though, you might find that you’re less happy when you upgrade to 19c, because the same effect appears there too – only it’s worse. Not only do “identical” traditional and ANSI plans have different hash values, they don’t match the values from 12c, because the generated names for the join factorization views (VW_JF) change in the upgrade. So if you’re depending on SQL Plan Baselines to reproduce 12c plans on 19c you may find cases where you know the stored baseline is giving you the same plan but Oracle thinks it isn’t and refuses to use it.
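
As a post-upgrade sanity check – a minimal sketch, assuming you have access to dba_sql_plan_baselines (the reproduced column appeared in 12c) – you could list the baselines that the optimizer has flagged as not reproducible and review the corresponding statements by hand:

select  sql_handle, plan_name, enabled, accepted, reproduced
from    dba_sql_plan_baselines
where   reproduced = 'NO'               -- flagged as not reproducible
order by
        sql_handle, plan_name
;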

tl;dr

Plans which are functionally identical can have different plan hash values because the plans were reached through a different series of transformations. In particular, if you rewrite queries from “traditional” Oracle syntax to “ANSI” syntax you will find cases where the plan doesn’t change but the plan hash value does, thanks to a change in the names of views generated by some transformations.

More significantly, if you upgrade from 12c to 19c there are cases where the names of views generated by transformations may change, which could cause the optimizer to discard some of your carefully constructed SQL Plan Baselines as “not reproducible”.

Footnote

For reference, if I add the hint /*+ opt_param('_optimizer_join_factorization' 'false') */ to the “traditional” query then I get the following plan, which shows more clearly the three branches that the original query has been split into – each section starting with a different choice of driving table:


-------------------------------------------------------------------------------------------
| Id  | Operation               | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                 |     3 |   270 |    19   (6)| 00:00:01 |
|   1 |  VIEW                   | VW_ORE_67EF6547 |     3 |   270 |    19   (6)| 00:00:01 |
|   2 |   UNION-ALL             |                 |       |       |            |          |
|*  3 |    HASH JOIN            |                 |     1 |    55 |     7  (15)| 00:00:01 |
|*  4 |     HASH JOIN           |                 |     1 |    32 |     4   (0)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL  | T               |     1 |    19 |     2   (0)| 00:00:01 |
|   6 |      TABLE ACCESS FULL  | T_FURTHER_INFO  |     4 |    52 |     2   (0)| 00:00:01 |
|   7 |     TABLE ACCESS FULL   | T_OTHERS        |     4 |    92 |     2   (0)| 00:00:01 |
|*  8 |    HASH JOIN            |                 |     1 |    55 |     6   (0)| 00:00:01 |
|   9 |     MERGE JOIN CARTESIAN|                 |     4 |   144 |     4   (0)| 00:00:01 |
|* 10 |      TABLE ACCESS FULL  | T_OTHERS        |     1 |    23 |     2   (0)| 00:00:01 |
|  11 |      BUFFER SORT        |                 |     4 |    52 |     2   (0)| 00:00:01 |
|  12 |       TABLE ACCESS FULL | T_FURTHER_INFO  |     4 |    52 |     2   (0)| 00:00:01 |
|* 13 |     TABLE ACCESS FULL   | T               |     3 |    57 |     2   (0)| 00:00:01 |
|* 14 |    HASH JOIN            |                 |     1 |    55 |     6   (0)| 00:00:01 |
|  15 |     MERGE JOIN CARTESIAN|                 |     4 |   144 |     4   (0)| 00:00:01 |
|* 16 |      TABLE ACCESS FULL  | T_FURTHER_INFO  |     1 |    13 |     2   (0)| 00:00:01 |
|  17 |      BUFFER SORT        |                 |     3 |    69 |     2   (0)| 00:00:01 |
|* 18 |       TABLE ACCESS FULL | T_OTHERS        |     3 |    69 |     2   (0)| 00:00:01 |
|* 19 |     TABLE ACCESS FULL   | T               |     3 |    57 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      USE_HASH(@"SET$BB614FD2_3" "T"@"SET$BB614FD2_3")
      USE_MERGE_CARTESIAN(@"SET$BB614FD2_3" "PT"@"SET$BB614FD2_3")
      LEADING(@"SET$BB614FD2_3" "FI"@"SET$BB614FD2_3" "PT"@"SET$BB614FD2_3"
              "T"@"SET$BB614FD2_3")
      FULL(@"SET$BB614FD2_3" "T"@"SET$BB614FD2_3")
      FULL(@"SET$BB614FD2_3" "PT"@"SET$BB614FD2_3")
      FULL(@"SET$BB614FD2_3" "FI"@"SET$BB614FD2_3")
      USE_HASH(@"SET$BB614FD2_2" "T"@"SET$BB614FD2_2")
      USE_MERGE_CARTESIAN(@"SET$BB614FD2_2" "FI"@"SET$BB614FD2_2")
      LEADING(@"SET$BB614FD2_2" "PT"@"SET$BB614FD2_2" "FI"@"SET$BB614FD2_2"
              "T"@"SET$BB614FD2_2")
      FULL(@"SET$BB614FD2_2" "T"@"SET$BB614FD2_2")
      FULL(@"SET$BB614FD2_2" "FI"@"SET$BB614FD2_2")
      FULL(@"SET$BB614FD2_2" "PT"@"SET$BB614FD2_2")
      USE_HASH(@"SET$BB614FD2_1" "PT"@"SET$BB614FD2_1")
      USE_HASH(@"SET$BB614FD2_1" "FI"@"SET$BB614FD2_1")
      LEADING(@"SET$BB614FD2_1" "T"@"SET$BB614FD2_1" "FI"@"SET$BB614FD2_1"
              "PT"@"SET$BB614FD2_1")
      FULL(@"SET$BB614FD2_1" "PT"@"SET$BB614FD2_1")
      FULL(@"SET$BB614FD2_1" "FI"@"SET$BB614FD2_1")
      FULL(@"SET$BB614FD2_1" "T"@"SET$BB614FD2_1")
      NO_ACCESS(@"SEL$49E1C21B" "VW_ORE_67EF6547"@"SEL$67EF6547")
      OUTLINE(@"SEL$1")
      OR_EXPAND(@"SEL$1" (1) (2) (3))
      OUTLINE_LEAF(@"SEL$49E1C21B")
      OUTLINE_LEAF(@"SET$BB614FD2")
      OUTLINE_LEAF(@"SET$BB614FD2_1")
      OUTLINE_LEAF(@"SET$BB614FD2_2")
      OUTLINE_LEAF(@"SET$BB614FD2_3")
      ALL_ROWS
      OPT_PARAM('_optimizer_join_factorization' 'false')
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("PT"."ID"="T"."ID")
   4 - access("FI"."ID"="T"."ID")
   5 - filter("T"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
   8 - access("PT"."ID"="T"."ID" AND "FI"."ID"="T"."ID")
  10 - filter("PT"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
  13 - filter(LNNVL("T"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00',
              'syyyy-mm-dd hh24:mi:ss')))
  14 - access("PT"."ID"="T"."ID" AND "FI"."ID"="T"."ID")
  16 - filter("FI"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
  18 - filter(LNNVL("PT"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00',
              'syyyy-mm-dd hh24:mi:ss')))
  19 - filter(LNNVL("T"."LASTUPDATED">=TO_DATE(' 2019-10-21 00:00:00',
              'syyyy-mm-dd hh24:mi:ss')))

88 rows selected.

Although the “traditional” and “ANSI” plans still show a difference in their Plan hash values when join factorization is blocked, the absence of the join factorization view means that the plan hash values are now consistent between 12c and 19c (the output above came from 19.3.0.0, as you can see in the Outline information).
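
If you want to experiment with the same effect without editing the SQL, the session-level equivalent of the opt_param() hint is shown below. Remember that this is a hidden parameter, so treat it as a diagnostic experiment (ideally with the blessing of Oracle Support) rather than a production fix:

alter session set "_optimizer_join_factorization" = false;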

How to Convert BMP to JPG: 2 Easy Ways!

VitalSoftTech - Tue, 2019-10-22 08:45

Are you looking to convert BMP to JPG? There are plenty of ways you can do that, through software installed on your laptop or through online BMP to JPG converters. We’ll take a look at some of the ways you can make the conversion in this article! But first, let’s take a look at what […]

The post How to Convert BMP to JPG: 2 Easy Ways! appeared first on VitalSoftTech.

Categories: DBA Blogs

srvctl config all

Michael Dinh - Tue, 2019-10-22 08:19

Learned something new today, and I'm not sure if it's a new feature.

Seems a lot easier to gather clusterware configuration using one command.

Works with srvctl version: 18.0.0.0.0 or higher.

19c

[oracle@ol7-19-rac2 ~]$ echo $ORACLE_HOME
/u01/app/19.0.0/grid

[oracle@ol7-19-rac2 ~]$ srvctl -version
srvctl version: 19.0.0.0.0

[oracle@ol7-19-rac2 ~]$ srvctl config all

Oracle Clusterware configuration details
========================================

Oracle Clusterware basic information
------------------------------------
  Operating system          Linux
  Name                      ol7-19-cluster
  Class                     STANDALONE
  Cluster nodes             ol7-19-rac1, ol7-19-rac2
  Version                   19.0.0.0.0
  Groups                    SYSOPER: SYSASM:dba SYSRAC:dba SYSDBA:dba
  OCR locations             +DATA
  Voting disk locations     DATA
  Voting disk file paths    /dev/oracleasm/asm-disk3

Cluster network configuration details
-------------------------------------
  Interface name  Type  Subnet           Classification
  eth1            IPV4  192.168.56.0/24  PUBLIC
  eth2            IPV4  192.168.1.0/24   PRIVATE, ASM

SCAN configuration details
--------------------------

SCAN "ol7-19-scan" details
++++++++++++++++++++++++++
  Name                ol7-19-scan
  IPv4 subnet         192.168.56.0/24
  DHCP server type    static
  End points          TCP:1521

  SCAN listeners
  --------------
  Name              VIP address
  LISTENER_SCAN1    192.168.56.105
  LISTENER_SCAN2    192.168.56.106
  LISTENER_SCAN3    192.168.56.107


ASM configuration details
-------------------------
  Mode             remote
  Password file    +DATA
  SPFILE           +DATA

  ASM disk group details
  ++++++++++++++++++++++
  Name  Redundancy
  DATA  EXTERN

Database configuration details
==============================

Database "ora.cdbrac.db" details
--------------------------------
  Name                ora.cdbrac.db
  Type                RAC
  Version             19.0.0.0.0
  Role                PRIMARY
  Management policy   AUTOMATIC
  SPFILE              +DATA
  Password file       +DATA
  Groups              OSDBA:dba OSOPER:oper OSBACKUP:dba OSDG:dba OSKM:dba OSRAC:dba
  Oracle home         /u01/app/oracle/product/19.0.0/dbhome_1
[oracle@ol7-19-rac2 ~]$

18c

[oracle@rac1 Desktop]$ srvctl -version
srvctl version: 18.0.0.0.0

[oracle@rac1 Desktop]$ srvctl config all

Oracle Clusterware configuration details                                        
========================================                                        

Oracle Clusterware basic information                                            
------------------------------------                                            
  Operating system         Linux                                           
  Name                     scan                                            
  Class                    STANDALONE                                      
  Cluster nodes            rac1, rac2                                      
  Version                  18.0.0.0.0                                      
  Groups                   SYSOPER:dba SYSASM:dba SYSRAC:dba SYSDBA:dba    
  Cluster home             /u01/app/18.0.0/grid                            
  OCR locations            +CRS                                            
  Voting disk locations    /dev/asm-disk8, /dev/asm-disk9, /dev/asm-disk7  

Cluster network configuration details                                           
-------------------------------------                                           
  Interface name  Type  Subnet           Classification  
  eth1            IPV4  10.1.1.0/24      PRIVATE, ASM    
  eth0            IPV4  192.168.11.0/24  PUBLIC          

SCAN configuration details                                                      
--------------------------                                                      

SCAN "scan.localdomain" details                                                 
+++++++++++++++++++++++++++++++                                                 
  Name                scan.localdomain  
  IPv4 subnet         192.168.11.0/24   
  DHCP server type    static            
  End points          TCP:1521          

  SCAN listeners                                                                
  --------------                                                                
  Name        VIP address    
  LISTENER    192.168.11.60  


ASM configuration details                                                       
-------------------------                                                       
  Mode             remote  
  Password file    +RAC    
  SPFILE           +RAC    

  ASM disk group details                                                        
  ++++++++++++++++++++++                                                        
  Name  Redundancy  
  CRS   NORMAL      
  DATA  EXTERN      
  FRA   EXTERN      
  RAC   EXTERN      

Database configuration details                                                  
==============================                                                  

Database "ora.uptst.db" details                                                 
-------------------------------                                                 
  Name                ora.uptst.db                                                   
  Type                RAC                                                            
  Version             18.0.0.0.0                                                     
  Role                PRIMARY                                                        
  Management policy   AUTOMATIC
  SPFILE              +DATA                                                          
  Password file       +DATA                                                          
  Groups              OSDBA:dba OSOPER:dba OSBACKUP:dba OSDG:dba OSKM:dba OSRAC:dba  
  Oracle home         /u01/app/oracle/product/18.0.0/db_home1                        

Database "ora.uptst2.db" details                                                
--------------------------------                                                
  Name                 ora.uptst2.db                                        
  Type                 RAC                                                  
  Version              12.1.0.2.0                                           
  Role                 PRIMARY                                              
  Management policy    AUTOMATIC                                            
  SPFILE               +DATA                                                
  Password file        +DATA                                                
  Groups               OSDBA:dba OSOPER:dba OSBACKUP:dba OSDG:dba OSKM:dba  
  Oracle home          /u01/app/oracle/product/12.1.0.2_1                   
[oracle@rac1 Desktop]$ 

The Juilliard School and Rutgers Tap Oracle Cloud to Manage Educational Back-office System

Oracle Press Releases - Tue, 2019-10-22 07:00
Press Release
The Juilliard School and Rutgers Tap Oracle Cloud to Manage Educational Back-office System Leading colleges/universities entrust complex global HR, finance and student operations to Oracle Cloud

Redwood Shores, Calif.—Oct 22, 2019

To manage complex environments globally, leading higher education institutions The Juilliard School and Rutgers University are turning to Oracle Cloud. Both institutions, looking to accelerate student success through more efficient operations across multiple schools and campuses, have chosen Oracle Enterprise Resource Planning (ERP) Cloud, Oracle Human Capital Management (HCM) Cloud, and Oracle Student Cloud.

Founded in 1905, The Juilliard School is a world-famous leader in performing arts education. The school’s mission is to provide the highest caliber of artistic education for gifted musicians, dancers, and actors from around the world. Located in New York City, Juilliard offers undergraduate and graduate degrees in dance, drama and music. Beyond its New York campus, Juilliard is now defining new directions in global performing arts education through its new branch campus in Tianjin, China, The Tianjin Juilliard School, and the global K-12 curricula.

Adopting a global operating model created challenges and complexity for Juilliard’s IT staff. Oracle Cloud provides a unified platform that will enable Juilliard to deliver an exceptional experience in both English and Chinese, and importantly, can be managed with existing IT resources. Once deployed, the Oracle Student Cloud solution will support approximately 2,000 Juilliard students across both campuses and 800 faculty and staff using the ERP, Student, and HCM Cloud solutions in New York.

“The opening of The Tianjin Juilliard School will broaden access to Juilliard’s world-class performing arts education across China and Asia,” said Carl Young, CIO of The Juilliard School. “With Oracle Cloud we will be able to provide our campuses in New York and Tianjin with unified, modern, multi-lingual systems that will have a significant impact on information management, which will ultimately help us better serve our students.”

Rutgers is a leading national research university and the state of New Jersey’s preeminent, comprehensive public institution of higher education. Established in 1766, the university is the eighth oldest higher education institution in the United States and home to more than 150 undergraduate majors, more than 400 graduate programs, and 300 research centers and institutes.

The daunting scale of Rutgers—with more than 70,000 students across 29 different schools and colleges at several campus locations—means it faced different, yet similarly formidable, issues of complexity. As part of a bold five-year plan to modernize and unify its key systems, Rutgers chose Oracle Cloud. 

“We strive to provide new and better experiences for our students,” said Michele Norin, senior vice president and chief information officer at Rutgers University. “The size and scale of Rutgers presents many difficulties as we work to manage and develop our systems. Oracle Cloud helps us meet our goals, stay on the forefront of higher education as it evolves, and enable our students to be more successful.”

Oracle Student Cloud is an integral component of an institution’s efforts to reduce educational costs while enhancing student outcomes and success. Oracle Student Financial Planning (SFP) Cloud streamlines financial aid processes and delivers invaluable, data-backed insights into student success, freeing administrators to focus more on supporting students’ academic needs. Oracle ERP Cloud provides the financial solutions institutions need to enhance productivity, reduce costs and improve controls. Oracle HCM Cloud supports changing business requirements around talent management and talent acquisition while furthering workforce productivity.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

