Feed aggregator

ILM Clarification

Anthony Shorten - Wed, 2016-11-30 21:35

Lately I have received a lot of partner and customer questions about the ILM capability that we ship with our solutions. Our ILM solution is a combined business and technical capability that allows customers to implement cost-effective data management for product transaction tables. These tables grow quickly, and the solution lets a site define its business retention rules and implement storage strategies that reduce cost while retaining data appropriately.

There are several aspects of the solution:

  • Built-in functionality - This consists of retention definitions, contained in a Master Configuration record, that you configure, as well as prebuilt algorithms and ILM batch jobs. The prebuilt algorithms are called by the ILM batch jobs to assess the age of a row and to check for any outstanding related data for ILM-enabled objects. Additional columns are added to the ILM-enabled objects to help track the age of each record and to set flags for the technical aspects of the solution to use. The retention period defines the ACTIVE period of the data for the business, which is typically the period during which the business needs fast read and update access to the data.
  • New columns - Two columns are added: ILM_DATE and ILM_ARCH_SW. ILM_DATE is the date used to determine the age of the row. By default it is typically set to the creation date of the row, but as it is part of the object, implementers can optionally alter the value after it is set to influence the retention of individual rows. ILM_ARCH_SW is set to "N" by default, indicating the business is still using the row. When a row becomes eligible, in other words when ILM_DATE plus the retention period configured for the object has passed, the ILM batch jobs assess the row against the ILM algorithms to determine whether any business rules indicate the record is still active. If the business rules indicate nothing is outstanding for the row, ILM_ARCH_SW is set to "Y". This value effectively tells the system that the business has finished with that row in the ACTIVE period. Conversely, if a business rule indicates the row needs to be retained, then ILM_ARCH_SW is left at "N".
  • Technical aspects of the solution - Once ILM_ARCH_SW is set to the "Y" value, the ILM features within the database are used, so some licensing aspects apply:
    • Oracle Database Enterprise Edition is needed to support the ILM activities. Other editions do not have support for the features used.
    • The Partitioning option of the Oracle Database Enterprise Edition is required as a minimum requirement. This is used for data group isolation and allowing storage characteristics to be set at the partition level for effective data management.
    • Optionally, it is recommended to license the Oracle Advanced Compression option. This option opens up greater cost savings by permitting higher levels of compression to be used. The base compression in Oracle can be used as well, but it is limited and not optimized for some activities.
    • Optionally, customers can use the free ILM Assistant add-on to the database (training for ILM Assistant). This is a web-based planning tool, based upon Oracle APEX, that allows DBAs to build different storage scenarios and assess the cost savings of each. It does not implement the scenarios, but it will generate some basic partitioning SQL. Generally, for Oracle 12c customers, ILM Assistant is not recommended as it does not cover ALL the additional ILM capabilities of that version of the database. Personally, I only tend to recommend it to customers who have tiered storage solutions, which is generally not a lot of customers.
    • Oracle 12c includes additional (and free) ILM capabilities built into the database, namely Automatic Data Optimization and Heat Map. These are disabled by default and can be enabled using initialization parameters on your database. The Heat Map tracks the usage profile of the data in your database automatically. Automatic Data Optimization can use Heat Map information, among other inputs, to define and implement rules for data management. These rules range from simple compression instructions to moving data across partitions based upon your criteria. For example: if ILM_ARCH_SW is "Y" and the data has not been touched in 3 months, then compress the data using the OLTP compression in Oracle Advanced Compression (a sketch of such a policy follows this list). These rules are maintained using the free functionality in Oracle Enterprise Manager or, if you prefer, SQL commands can be used to set policies.
  • Support for storage solutions - Third-party hardware-based storage solutions (including Oracle's own storage solutions) have additional ILM capabilities built in at the hardware level. Typically those solutions can be used as part of an ILM solution with Oracle. Check with your hardware vendor directly for capabilities in this area.
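
As a rough sketch of what such an Automatic Data Optimization policy can look like (the table name CM_TXN_DETAIL is purely illustrative and the exact rule depends on your retention configuration), the statements below enable Heat Map tracking and add a segment-level compression policy. A condition on ILM_ARCH_SW itself would typically be expressed through a custom ADO policy function or by targeting the partitions the ILM batch jobs have already flagged.

-- Enable Heat Map tracking (initialization parameter, disabled by default in 12c)
ALTER SYSTEM SET HEAT_MAP = ON;

-- Illustrative ADO policy: compress the segment with Advanced (OLTP) compression
-- once it has not been modified for 3 months
ALTER TABLE cm_txn_detail ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 3 MONTHS OF NO MODIFICATION;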

There are a number of resources available that can help you understand ILM further.

Y2038 issue

Tom Kyte - Wed, 2016-11-30 20:46
Hello AskTom team, Would this be an issue with Oracle? https://en.wikipedia.org/wiki/Year_2038_problem Thanks!
Categories: DBA Blogs

Archivelog backups taking too long

Tom Kyte - Wed, 2016-11-30 20:46
I am experiencing slowness in archivelog backups. In the last two days we have seen long running archive log backups despite the archive log generation being very low (less than 1gb every day). On Nov 29th it ran for 11 hours (4 am - 3 pm). Even lvl 0...
Categories: DBA Blogs

Compare varchar column with number

Tom Kyte - Wed, 2016-11-30 20:46
Hi Team, I need to update values for a column in my table to null, which are not numeric & greater than 999. Here is sample table - create table checknum ( col1 varchar2(10), col2 varchar2(10) ); Insert into CHECKSUM (COL1,COL2) valu...
Categories: DBA Blogs

PL SQL Functions that work with Read-Only Access

Tom Kyte - Wed, 2016-11-30 20:46
Tom! I've enjoyed your point of view and the clarity of your interactions for a couple of years now. I've been using SQL Developer off and on for years but I just recently started running into more complicated needs and I'm hoping you can point...
Categories: DBA Blogs

Global Temporary table use to avoid execution of same SQL's twice

Tom Kyte - Wed, 2016-11-30 20:46
Hi, I have one procedure which has two OUT ref cursors. First REF CURSOR returns below SQL output - SELECT * from tab1 , tab2 , tab3 , tab4 , tab5 WHERE <<certain join conditions>> AND value IN (SELECT VALUE from...
Categories: DBA Blogs

Oracle Subquery Bug?

Tom Kyte - Wed, 2016-11-30 20:46
Hi all, by chance I came across this problem. My PL/SQL procedure was parsed without any ERROR, even though the column "tb_id" itself did not exist in "suh_tb_test_1". (If I use an alias in the subquery then the ERROR is thrown!) Here the (strong s...
Categories: DBA Blogs

Work-Life Balance at Rittman Mead

Rittman Mead Consulting - Wed, 2016-11-30 20:00



Rittman Mead has always had a long-standing commitment to giving back, not only to the technology industry, but to local and global communities as well. Recently, Rittman Mead employees have been encouraged to take up to 40 paid hours to participate in community service opportunities. This year, I chose to use my 40 hours to serve at an orphanage in the Dong Nai province of Vietnam. The Bien Hoa Center for Supporting and Vocational Training is an orphanage that currently serves 53 children, ranging in age from infants to 16-year-olds.


Additionally, the Bien Hoa Center was home to my recently adopted son for over 7 years, so this was a place near and dear to my heart. The orphanage is run by a very attentive staff, who do a great job caring for the kids despite having very limited resources. Many of the children are learning English and other useful skills that will serve them well once they leave the orphanage, either through aging out of the program or through adoption. While we were there, my family and I were able to play with the older children and comfort the babies. It was a pleasure to see them display such beautiful, wide smiles despite their difficult situations. We were also able to deliver a gift donated by many of my generous Rittman Mead colleagues, which included over 60 lbs of art supplies, candy and toys. Despite our consultants being separated across many states, once I posted an opportunity to contribute, gifts just started arriving at our Atlanta office, where I hang my laptop bag.


It is truly a pleasure working with such compassionate people and having a management team that values more than just profits.  Caring about the causes that are important to employees is a big part of the Rittman Mead culture. This attitude, coupled with numerous family friendly work events, makes employees feel like more than simply a cog in the wheels of a profit machine. At Rittman Mead, employees are supported in their pursuit of a healthy work-life balance and that is one of the big reasons I am proud to work here.


Categories: BI & Warehousing

SQLcl custom Input prompt and validations

Kris Rice - Wed, 2016-11-30 18:14
Another quick Twitter-inspired blog post, this time prompted by the SQLcl account itself. Asked and answered: Do you support the ACCEPT command? YES. pic.twitter.com/NchmQ6Eegs — SQLcl (@oraclesqlcl) November 29, 2016. ACCEPT is a great way to get input, but validation has to happen elsewhere as there are no hooks to do something like validate a number in a range without running SQL or PL/SQL that does it.
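
A minimal sketch of that point (the substitution variable name and the range are made up for illustration): ACCEPT captures and type-checks the value, and a separate PL/SQL block has to do the range validation.

-- ACCEPT itself only enforces the datatype of the input
ACCEPT batch_size NUMBER PROMPT 'Batch size (1-1000): '

-- the range check has to be done in SQL or PL/SQL afterwards
BEGIN
  IF &batch_size NOT BETWEEN 1 AND 1000 THEN
    RAISE_APPLICATION_ERROR(-20001, 'batch_size must be between 1 and 1000');
  END IF;
END;
/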

Delphix Users Panel Webinar

Bobby Durrett's DBA Blog - Wed, 2016-11-30 16:19

Delphix is sponsoring a user webinar on Wednesday from 10 to 11 am California time. I and a couple of other technical Delphix users will take part in a panel discussion. The webinar will give you a good chance to hear about other users’ experience with Delphix and a chance to ask questions. Like any good user meeting, it will not be a sales pitch; instead it will focus on users sharing helpful technical information with other Delphix users.

Noted Oracle expert and Delphix employee Kellyn Pot’Vin-Gorman will also be there facilitating the discussion.

Here is the link to sign up for the free webinar: https://www.delphix.com/resources/webinar/delphix-users-panel

Be there or be square! 🙂

Bobby

Categories: DBA Blogs

Histograms on character strings between 11.2.0.3 and 11.2.0.4

Yann Neuhaus - Wed, 2016-11-30 15:33

In short, when you have statistics gathered in 11.2.0.3 but the query is now running with the 11.2.0.4 optimizer, you may get wrong cardinality estimates from histograms, leading to sub-optimal plans.

I had a table with a flag that has two values, ‘Y’ and ‘N’, with an uneven distribution between them. It’s a good case for frequency histograms. I had frequency histograms and expected exact cardinality estimation for a WHERE FLAG=’Y’ predicate. But that was not the case: a very low estimation led to a very bad execution plan. Because the cardinality estimation was far from what we have in the histogram and far from what we would have without histograms, I checked a 10053 trace and this is what I found:
Using prorated density: 0.004557 of col #97 as selectivity of out-of-range/non-existent value pred

This is linear decay because of an out-of-range predicate (I’ll show that next Wednesday at UKOUG TECH16), but I don’t expect an out-of-range condition when I provide one of the two values that are in the frequency histogram.

Here is my testcase


SQL> create table DEMO ( flag char);
Table created.
 
SQL> select count(*) from DEMO where flag='Y';
 
COUNT(*)
----------
0
 
SQL> insert into DEMO select 'Y' from xmltable('1 to 100000');
100000 rows created.
 
SQL> insert into DEMO select 'N' from xmltable('1 to 1000');
1000 rows created.
 
SQL> select flag,to_char(ascii(flag),'XX'),count(*) from DEMO group by flag;
 
FLAG TO_CHAR(ASCII(FLAG),'XX') COUNT(*)
---- ------------------------- --------
Y 59 100000
N 4E 1000

100000 rows with ‘Y’ and 1000 rows with ‘N’.

11.2.0.3

I gather statistics in 11.2.0.3


SQL> alter session set optimizer_features_enable='11.2.0.3';
Session altered.
 
SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

And run a query looking for rows where flag is ‘Y’


SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 100K|
-------------------------------------------

The estimate is perfect thanks to the histograms.

11.2.0.4

Now the CBO is running as 11.2.0.4, but statistics have not been re-gathered since 11.2.0.3.


SQL> alter session set optimizer_features_enable='11.2.0.4';
Session altered.
 
SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 500 |
-------------------------------------------

A bad estimation here. Where do those 500 come from?
It’s not from the histogram, which knows that 100K rows have the value ‘Y’.
Without histograms the estimation would be based on 2 distinct values among 101000 rows, so that would be 50500.
The CBO trace says:

Access path analysis for DEMO
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for DEMO[DEMO] SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
 
kkecdn: Single Table Predicate:"DEMO"."FLAG"='Y'
Column (#1):
NewDensity:0.004950, OldDensity:0.000005 BktCnt:101000.000000, PopBktCnt:101000.000000, PopValCnt:2, NDV:2
Column (#1): FLAG(CHARACTER)
AvgLen: 2 NDV: 2 Nulls: 0 Density: 0.004950
Histogram: Freq #Bkts: 2 UncompBkts: 101000 EndPtVals: 2 ActualVal: no
Using density: 0.004950 of col #1 as selectivity of pred having unreasonably low value
Table: DEMO Alias: DEMO
Card: Original: 101000.000000 Rounded: 500 Computed: 500.000000 Non Adjusted: 500.000000

Predicate having unreasonably low value…

Time to look at the histograms.

ENDPOINT_VALUE

You can get the histogram entries. For the CHAR datatype (my case here) the endpoint value is derived from the ASCII representation of the string, converted to a decimal number (or at least from its first characters).

SQL> select endpoint_number,endpoint_value
2 ,to_char(endpoint_value,rpad('FM',65,'X'))
3 ,utl_raw.cast_to_varchar2(hextoraw(to_char(endpoint_value,rpad('FM',65,'X'))))
4 from user_histograms h where table_name='DEMO';
ENDPOINT_NUMBER ENDPOINT_VALUE TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X')) UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW(TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X'))))
--------------- ------------------------------------ ----------------------------------------- ------------------------------------------------------------------------------
1000 405650737316592000000000000000000000 4E20202020203A7BB119D5F6000000 N :{��
101000 462766002760475000000000000000000000 59202020202034D998FF0B5AE00000 Y 4٘�
Z�

Looks good. I can see the ‘N’ and ‘Y’ values here. But obviously the CBO sees them as different from the ‘Y’ it is looking for.

Let’s gather statistics again (I’m now with 11.2.0.4 CBO):

SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

And look at the difference:

SQL> select endpoint_number,endpoint_value
2 ,to_char(endpoint_value,rpad('FM',65,'X'))
3 ,utl_raw.cast_to_varchar2(hextoraw(to_char(endpoint_value,rpad('FM',65,'X'))))
4 from user_histograms h where table_name='DEMO';
ENDPOINT_NUMBER ENDPOINT_VALUE TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X')) UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW(TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X'))))
--------------- ------------------------------------ ----------------------------------------- ------------------------------------------------------------------------------
1000 404999154965717000000000000000000000 4E0000000000181F436C7BBB200000 NCl{�
101000 462114420409600000000000000000000000 590000000000127D2B51B120000000 Y}+Q�

In 11.2.0.3 the characters (‘Y’ is ASCII 0x59) were padded with spaces (ASCII 0x20), but in 11.2.0.4 they are padded with nulls (ASCII 0x00).
This is the reason why it was considered different: the ENDPOINT_VALUE for ‘Y’ calculated by the 11.2.0.4 version of the CBO is different from the one stored by the 11.2.0.3 dbms_stats.
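
A quick arithmetic check of that explanation (nothing Oracle-specific here beyond POWER and ROUND): ‘Y’ left-aligned in a 15-byte value and padded with 0x00 bytes contributes only ascii(‘Y’) * 256^14, and rounding it the same way as the char_value script shown further down should return 462114420409600000000000000000000000, the new ENDPOINT_VALUE seen above.

-- expected to return 462114420409600000000000000000000000
select round(ascii('Y') * power(256,14), -21) from dual;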

Now, the estimation is good again:


SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 100K|
-------------------------------------------

char_value

The ENDPOINT_VALUE calculation has been described by Jonathan Lewis in Cost-Based Oracle Fundamentals, and the script to encode a character string is downloadable from http://www.jlcomp.demon.co.uk/cbo_book/book_cbo_scripts.zip. I use it (as an inline function because I’m running this in 12c) to show the values calculated from ‘Y’ and ‘N’ as well as the values calculated from the same strings right-padded with spaces:


SQL> with function char_value(i_vc varchar2) return number
-- function coming from Jonathan Lewis char_value.sql http://www.jlcomp.demon.co.uk/cbo_book/book_cbo_scripts.zip
is
m_vc varchar2(15) := substr(rpad(i_vc,15,chr(0)),1,15);
m_n number := 0;
begin
for i in 1..15 loop
m_n := m_n + power(256,15-i) * ascii(substr(m_vc,i,1));
end loop;
m_n := round(m_n, -21);
return m_n;
end;
select flag,char_value(flag),cast(flag as char(32)),char_value(cast(flag as char(32))),count(*) from DEMO.DEMO group by flag
/
 
FLAG CHAR_VALUE(FLAG) CAST(FLAGASCHAR(32)) CHAR_VALUE(CAST(FLAGASCHAR(32))) COUNT(*)
---- ------------------------------------ -------------------------------- ------------------------------------ --------
Y 462114420409600000000000000000000000 Y 462766002760475000000000000000000000 100000
N 404999154965717000000000000000000000 N 405650737316592000000000000000000000 1000

Comparing the calculated values with the ENDPOINT_VALUE I had above, we can see that the function itself has not changed, but the input string used to be padded with spaces, which is no longer the case.

So what?

Be careful when upgrading from <= 11.2.0.3 to a higher version. You should re-gather the statistics (but then read http://blog.dbi-services.com/a-migration-pitfall-with-all-column-size-auto/ as there's another pitfall). Don't set optimizer_features_enable to a lower version, as its scope is very wide and many things may be different; it is always better to take the time to focus on the features you want to disable. And never give up: the weirdest situations can be troubleshot. Oracle Database offers lots of ways to understand what happens, from the debugging tools provided with the software to the excellent literature about it.

 

Cet article Histograms on character strings between 11.2.0.3 and 11.2.0.4 est apparu en premier sur Blog dbi services.

This customer has great advice for anyone moving to Oracle’s cloud applications

Linda Fishman Hoyle - Wed, 2016-11-30 14:20
“Talking to other customers has made more of a difference to us than anything else.” ─ Katie Eure, Accounting Manager, Alex Lee, Inc.

No matter how many times we say something, it means a lot more coming from a customer. That’s why we’re so appreciative of customers like Alex Lee, Inc., who are willing to do the talking for us. This $3 billion, privately-owned retail grocery store and food distributor headquartered in Hickory, NC, gave the lowdown on its Oracle ERP Cloud implementation to a captive audience at OpenWorld. Maybe it was the forthright discussion or the lively Q&A, or both—this session was a hit.

7 things to know about implementing Oracle’s cloud applications

Katie Eure and Derrick Walters, Alex Lee’s accounting and corporate applications managers respectively, shared seven things they learned from their cloud implementation. Here’s what they told the audience.

1. Think through configuration options.

Take time to consider the downstream effects of your configuration decisions. For example, a long ledger name can take up valuable screen space in the reporting tools. If you don’t know the impact of a certain configuration, you should ask your implementation partner.

2. Embrace the idea of no customizations.

When people push back on the no customization rule, they probably don’t understand the configurability of Oracle’s cloud applications or the extensibility that PaaS offers. But don’t let that deter you. Eure and Walters reiterated the benefits of cloud: “Putting a system in place that prevents customization can work in your favor. It helps you through the upgrade and maintenance process.”

3. Don’t underestimate change management.

Based on Alex Lee’s experience, “change management is going to be a bigger challenge than you think.  Even if you get up and running quickly in the cloud, it doesn’t mean any less change for your people.” In fact, Eure reported that they pushed out the go live dates for Oracle Procurement and Travel & Expense Management Clouds so that Finance could get comfortable with Accounts Payable. It wasn’t a matter of the system not being ready; the people weren’t ready.

4. Communicate the positive impact of moving to the Cloud.

Make sure users understand the tradeoffs between on-premises and cloud software. For example, if people complain that they no longer can call IT and demand immediate help, remind them that changes to on-premises software on average come every 10 years or so, whereas changes to cloud applications occur multiple times a year. In addition, Oracle now has a window into how organizations use the software which they didn’t have before.

5. Consider Oracle Support as part of your project team.

In a cloud implementation, you’ll be working with Oracle Support from Day 1. They’re still there to address issues, but they also provide help with new functionality and a path to cloud operations. For example, if you want to make a production-to-test copy or a test-to-test copy, you contact Oracle Support. Learn how to navigate the Support organization while you’re in the implementation phase because that will put you on a strong footing going forward.

6. Start the upgrade process right away.

As soon as you know your upgrade window, be sure to allow for a couple months in your test environment. You’ll want to look at new functionality and learn how to take advantage of it. At Alex Lee, the majority of training happens through testing.

7. Leverage your partnership with Oracle.

If you need advice or have a problem, find the right people in Oracle to help you. As Eure told the audience, “most people have never heard of Alex Lee. But we feel like Oracle listens to us and wants to partner with us.”

Switchover and Failover with Dbvisit 8

Yann Neuhaus - Wed, 2016-11-30 12:19

In this blog we will talk about how to do a switchover and how to do a failover. We assume that Dbvisit is already installed and that a standby database is already configured. Our instance is named DBPRIMA.

Switchover
A SWITCHOVER changes the database roles: the primary becomes the standby and the standby becomes the primary. This can be useful for many reasons.
Before performing a switchover, we first have to send any archived logs from the primary server that have not already been sent. For this we use the magic command dbvctl:

[oracle@dbvisit2 DBPRIMA]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 9201)
dbvctl started on dbvisit2: Tue Nov 29 14:46:15 2016
=============================================================
>>> Applying Log file(s) from dbvisit1 to DBPRIMA on dbvisit2:
thread 1 sequence 30 (1_30_926978008.arc)
=============================================================
dbvctl ended on dbvisit2: Tue Nov 29 14:46:17 2016
=============================================================
[oracle@dbvisit2 DBPRIMA]$

Then let’s apply all archived logs on the standby:

[oracle@dbvisit2 DBPRIMA]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 9201)
dbvctl started on dbvisit2: Tue Nov 29 14:46:15 2016
=============================================================
>>> Applying Log file(s) from dbvisit1 to DBPRIMA on dbvisit2:
thread 1 sequence 30 (1_30_926978008.arc)
=============================================================
dbvctl ended on dbvisit2: Tue Nov 29 14:46:17 2016
=============================================================
[oracle@dbvisit2 DBPRIMA]$

Note that the commands above can be scheduled on both servers, using crontab for example on a Linux system.
On the Primary Server:

00,10,20,30,40,50 * * * * cd /u01/app/dbvisit/standby; ./dbvctl -d DBPRIMA >>/dev/null 2>&1

On the Standby Server:

00,10,20,30,40,50 * * * * cd /u01/app/dbvisit/standby; ./dbvctl -d DBPRIMA >>/dev/null 2>&1

After sending and applying archived logs, we can check the status of the synchronization

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -i
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4420)
dbvctl started on dbvisit1: Tue Nov 29 14:26:11 2016
=============================================================
Dbvisit Standby log gap report for DBPRIMA thread 1 at 201611291426:
-------------------------------------------------------------
Destination database on dbvisit2 is at sequence: 29.
Source database on dbvisit1 is at log sequence: 30.
Source database on dbvisit1 is at archived log sequence: 29.
Dbvisit Standby last transfer log sequence: 29.
Dbvisit Standby last transfer at: 2016-11-29 14:24:16.
Archive log gap for DBPRIMA: 0.
Transfer log gap for DBPRIMA: 0.
Standby database time lag (DAYS-HH:MI:SS): +0:01:37.
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:26:12 2016
=============================================================
[oracle@dbvisit1 ~]$

If all is OK, we can now go for the SWITCHOVER. We can do it either through the graphical console or by command line; here we use the command line method. The command should be run only on the primary server.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 5081)
dbvctl started on dbvisit1: Tue Nov 29 14:47:32 2016
=============================================================
>>> Starting Switchover between dbvisit1 and dbvisit2
Running pre-checks ... done
Pre processing ... done
Processing primary ... done
Processing standby ... done
Converting standby ... done
Converting primary ... done
Completing ... done
Synchronizing ... done
Post processing ... done
>>> Graceful switchover completed.
Primary Database Server: dbvisit2
Standby Database Server: dbvisit1
>>> Dbvisit Standby can be run as per normal:
dbvctl -d DBPRIMA
PID:5081
TRACE:5081_dbvctl_switchover_DBPRIMA_201611291447.trc
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:50:23 2016
=============================================================
[oracle@dbvisit1 ~]$

Failover
The FAILOVER process happens when the primary database is no longer working. In this case the standby should be activated and will become the primary. Unlike a SWITCHOVER, the FAILOVER process is NOT reversible.
A good practice before activating the standby database is to run a quick test to ensure that the standby database is in a consistent state (datafile headers and controlfile are in sync) and ready to be activated. This can be done by opening the standby database read-only.

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o read
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2542)
dbvctl started on dbvisit2: Wed Nov 30 09:40:50 2016
=============================================================
Open Standby Database DBPRIMA in READ ONLY mode...
Standby Database DBPRIMA opened in READ ONLY mode.
Log files cannot be applied to Database while in READ ONLY mode.
Database tempfile(s) may need to be added to this database.
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:40:55 2016
=============================================================
[oracle@dbvisit2 ~]$

As we don’t get any errors and we know the standby can be opened read-only, let’s start it back in recovery mode (mount state).

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o restart
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2667)
dbvctl started on dbvisit2: Wed Nov 30 09:45:45 2016
=============================================================
Stopping instance DBPRIMA...
Standby Instance DBPRIMA shutdown successfully on dbvisit2.
Starting instance DBPRIMA...
Standby Instance DBPRIMA started
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:45:57 2016
=============================================================
[oracle@dbvisit2 ~]$

And proceed with the FAILOVER. Dbvisit will ask for confirmation; we can use the --force option to avoid this.

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o activate
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2796)
dbvctl started on dbvisit2: Wed Nov 30 09:47:12 2016
=============================================================
Activating means this database will become a Primary Database.
It will no longer be a Standby Database for DBPRIMA on dbvisit1.
Activation cannot be reversed.
=>Activate Standby Database on dbvisit2? [No]: yes
Are you sure? [No]: yes
>>> Activating now...
File /u01/app/dbvisit/standby/conf/dbv_DBPRIMA.env copied to
/u01/app/dbvisit/standby/conf/dbv_DBPRIMA.env.201611300947.
>>> Activation complete. Please ensure a backup of this Database is made
>>> Old archives from before the activation should be removed to avoid mix-up between new
and old archive logs
>>> If the Dbvisit Standby process is to be reversed, then database on dbvisit1 will need
to be rebuilt as a Standby Database
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:48:07 2016
=============================================================
[oracle@dbvisit2 ~]$

In this article we have seen how easy it is to do a switchover or failover with Dbvisit 8.

 

Cet article Switchover and Failover with Dbvisit 8 est apparu en premier sur Blog dbi services.

Creating standby database with Dbvisit 8

Yann Neuhaus - Wed, 2016-11-30 12:13

The new version of Dbvisit Standby is now released. In this blog we are going to see how to install Dbvisit 8 and how to create a standby database.
For the installation, after downloading the software, just uncompress it and then run the install-dbvisit command under the installer directory. The installation process is divided into two parts:
1- Core Components (Dbvisit Standby CLI, Dbvnet, Dbvagent) - on both servers
2- Dbvserver console installation. Just note that the console now needs to be installed on only one server.

We will not show all the output; more info about the installation can be found in my previous blog about upgrading Dbvisit 7 to Dbvisit 8.

[oracle@dbvisit1 installer]$ pwd
/home/oracle/dbvisit/dbvisit/dbvisit/installer
[oracle@dbvisit1 installer]$ ls
install-dbvisit
[oracle@dbvisit1 installer]$
[oracle@dbvisit1 installer]$ ./install-dbvisit
-----------------------------------------------------------
Welcome to the Dbvisit software installer.
-----------------------------------------------------------
It is recommended to make a backup of our current Dbvisit software
location (Dbvisit Base location) for rollback purposes.
Installer Directory /home/oracle/dbvisit/dbvisit/dbvisit
>>> Please specify the Dbvisit installation directory (Dbvisit Base).
The various Dbvisit products and components - such as Dbvisit Standby,
Dbvisit Dbvnet will be installed in the appropriate subdirectories of
this path.
Enter a custom value or press ENTER to accept default [/usr/dbvisit]:
> /u01/app/dbvisit
DBVISIT_BASE = /u01/app/dbvisit
-----------------------------------------------------------
Component Installer Version Installed Version
-----------------------------------------------------------
standby 8.0.04.18184 not installed
dbvnet 2.0.04.18184 not installed
dbvagent 2.0.04.18184 not installed
dbvserver 2.0.04.18184 not installed
-----------------------------------------------------------
What action would you like to perform?
1 - Install component(s)
2 - Uninstall component(s)
3 - Terminate
Your choice: 1
Which component do you want to install?
1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
2 - Dbvisit Standby Core (Command Line Interface)
3 - Dbvnet (Dbvisit Network Communication)
4 - Dbvagent (Dbvisit Agent)
5 - Dbvserver (Dbvisit Central Console)
6 - Exit Installer
Your choice: 1

And then follow instructions.
At the end of the process we can start the different components: dbvagent and dbvnet.


[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvagent/dbvagent -d start


[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvnet/dbvnet -d start


[oracle@dbvisit1 installer]$ ps -ef|egrep 'dbvagent|dbvnet' | grep -v grep
oracle 4064 1 0 10:46 ? 00:00:00 /u01/app/dbvisit/dbvagent/dbvagent -d start
oracle 4140 1 0 10:47 ? 00:00:00 /u01/app/dbvisit/dbvnet/dbvnet -d start
[oracle@dbvisit1 installer]$

After the core components installation, we can proceed with the Dbvisit Standby central console installation (dbvserver). Just launch the install-dbvisit command again and follow the instructions.

[oracle@dbvisit1 installer]$ ./install-dbvisit
-----------------------------------------------------------
Welcome to the Dbvisit software installer.
-----------------------------------------------------------
It is recommended to make a backup of our current Dbvisit software
location (Dbvisit Base location) for rollback purposes.
Installer Directory /home/oracle/dbvisit/dbvisit/dbvisit
.....
.....
What action would you like to perform?
1 - Install component(s)
2 - Uninstall component(s)
3 - Terminate
Your choice: 1
Which component do you want to install?
1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
2 - Dbvisit Standby Core (Command Line Interface)
3 - Dbvnet (Dbvisit Network Communication)
4 - Dbvagent (Dbvisit Agent)
5 - Dbvserver (Dbvisit Central Console)
6 - Exit Installer
Your choice: 5

At the end of the installation, we can start dbvserver:

[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvserver/dbvserver -d start


[root@dbvisit1 ~]# netstat -taupen | grep dbvser
tcp 0 0 0.0.0.0:4433 0.0.0.0:* LISTEN 1000 37848 5348/dbvserver
[root@dbvisit1 ~]#

We should now be able to connect to the dbvserver console on port 4433 with the admin user. The default password is admin.
For example, using MANAGE USERS, we can change the default admin password.
Now that Dbvisit Standby is installed on both servers, the first thing to do is to test connectivity. For this we will use the command dbvctl, which is the main command of Dbvisit 8. In the Dbvisit documentation you can find: “The first command you need to be familiar with is the ‘dbvctl -h’ command which will display the syntax and usage options for the Dbvisit Standby command line interface”, and it’s really true. The command dbvctl -f system_readiness will be used.

[oracle@dbvisit1 installer]$ /u01/app/dbvisit/standby/dbvctl -f system_readiness
Please supply the following information to complete the test.
Default values are in [].
Enter Dbvisit Standby location on local server: [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Enter the name of the remote server: []: dbvisit2
Your input: dbvisit2
Is this correct? <Yes/No> [Yes]:
Enter Dbvisit Standby location on remote server: [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Enter the name of a file to transfer relative to local install directory
/u01/app/dbvisit: [standby/doc/README.txt]:
Your input: standby/doc/README.txt
Is this correct? <Yes/No> [Yes]:
Choose copy method:
1) /u01/app/dbvisit/dbvnet/dbvnet
2) /usr/bin/scp
Please enter choice [1] : 1
Is this correct? <Yes/No> [Yes]:
Enter port for method /u01/app/dbvisit/dbvnet/dbvnet: [7890]:
Your input: 7890
Is this correct? <Yes/No> [Yes]:
-------------------------------------------------------------
Testing the network connection between local server and remote server dbvisit2.
-------------------------------------------------------------
Settings
========
Remote server =dbvisit2
Dbvisit Standby location on local server =/u01/app/dbvisit
Dbvisit Standby location on remote server =/u01/app/dbvisit
Test file to copy =/u01/app/dbvisit/standby/doc/README.txt
Transfer method =/u01/app/dbvisit/dbvnet/dbvnet
port =7890
-------------------------------------------------------------
Checking network connection by copying file to remote server dbvisit2...
-------------------------------------------------------------
Trace file /u01/app/dbvisit/standby/trace/5589_dbvctl_system_readiness_201611291139.trc
File copied successfully. Network connection between local and dbvisit2
correctly configured.
[oracle@dbvisit1 installer]$

If everything is fine with the installation, we can now create a standby database using dbvctl. The configuration we use is the following:
Primary Server : dbvisit1
Standby Server: dbvisit2
Database Instance : DBPRIMA (we assume the database is in archivelog mode and force logging mode)
First, let’s create the Dbvisit Standby Database Configuration (DDC) file. The name of the DDC is generally the name of the database instance. The command dbvctl -o setup is used.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -o setup
========================================================
Dbvisit Standby Database Technology (8.0.04.18184)
=========================================================
=>dbvctl only needs to be run on the primary server.
Is this the primary server? <Yes/No> [Yes]:
The following Dbvisit Database configuration (DDC) file(s) found on this
server:
DDC
===
1) Create New DDC
2) Cancel
Please enter choice [] : 1
Is this correct? <Yes/No> [Yes]:
END USER LICENSE AGREEMENT
PLEASE READ THIS END USER LICENSE AGREEMENT (AGREEMENT) CAREFULLY BEFORE
DOWNLOADING THE LICENSED PRODUCT. BY CLICKING I AGREE BELOW, YOU ARE
REPRESENTING THAT YOU HAVE THE RIGHT AND AUTHORITY TO LEGALLY BIND YOURSELF
…….
…….
of it on future occasions. Headings are for reference purposes only and
have no substantive effect.
Continue ? <Yes/No> [No]: yes
=========================================================
Dbvisit Standby setup begins.
=========================================================
The following Oracle instance(s) have been found on this server:
SID ORACLE_HOME
=== ===========
1) DBPRIMA /u01/app/oracle/product/12.1.0.2/dbhome_1
2) Enter own ORACLE_SID and ORACLE_HOME
Please enter choice [] : 1
Is this correct? <Yes/No> [Yes]:
=>ORACLE_SID will be: DBPRIMA
=>ORACLE_HOME will be: /u01/app/oracle/product/12.1.0.2/dbhome_1
------------------------------------------------------------------------------
Enter the primary server name.
Note: The hostname is case sensitive on UNIX based systems.
=>SOURCE is [dbvisit1]:
Your input: dbvisit1
Is this correct? <Yes/No> [Yes]:
Choice is dbvisit1
------------------------------------------------------------------------------
Please enter a filesystem directory that Dbvisit Standby use to store (archive) log files. This directory
is not the same as the database recovery area or archive destinations and should not be located in these
areas.
....
Please ensure that this directory exists on
=>ARCHSOURCE is [/u01/app/oracle/dbvisit_arch/DBPRIMA]: /u01/app/archivedbvisit/DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/archivedbvisit/DBPRIMA
------------------------------------------------------------------------------
Do you want to use SSH to connect to the standby server? Note that if you are using SSH, passwordless SSH
authentication between the hosts must already be configured. By default Dbvnet will be used.
=>USE_SSH is [N]:
Your input: N
Is this correct? <Yes/No> [Yes]:
Choice is N
------------------------------------------------------------------------------
Enter the standby database hostname.
If the standby database will be Oracle RAC enabled:
Enter the the Virtual Hostname (linked to a Virtual IP) for standby database. This virtual hostname is
....
For non-RAC configurations specify the standby database server name here.
=>DESTINATION is []: dbvisit2
Your input: dbvisit2
Is this correct? <Yes/No> [Yes]:
Choice is dbvisit2
------------------------------------------------------------------------------
Specify the DBVNET or SSH port number on the standby server. The default value supplied is the dbvnet port
7890. If you specified the use of SSH, please specify the SSH port here.
=>NETPORT is [7890]:
Your input: 7890
Is this correct? <Yes/No> [Yes]:
Choice is 7890
------------------------------------------------------------------------------
Enter Dbvisit Standby installation directory on the standby server
=>DBVISIT_BASE_DR is [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/dbvisit
------------------------------------------------------------------------------
Enter ORACLE_HOME directory on the standby server
=>ORACLE_HOME_DR is [/u01/app/oracle/product/12.1.0.2/dbhome_1]:
Your input: /u01/app/oracle/product/12.1.0.2/dbhome_1
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/oracle/product/12.1.0.2/dbhome_1
------------------------------------------------------------------------------
Enter DB_UNIQUE_NAME on the standby server
=>DB_UNIQUE_NAME_DR is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Please enter the directory where Dbvisit Standby will transfer the (archive) log files to on standby
server.
...
Please ensure that this directory exists on the standby server
=>ARCHDEST is [/u01/app/archivedbvisit/DBPRIMA]:
Your input: /u01/app/archivedbvisit/DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/archivedbvisit/DBPRIMA
------------------------------------------------------------------------------
Enter ORACLE_SID on the standby server
=>ORACLE_SID_DR is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Enter ASM instance name on the standby server, if your standby is using ASM. If you are not using ASM on
the standby leave the value blank.
=>ORACLE_SID_ASM_DR is []:
Is this correct? <Yes/No> [Yes]:
Choice is null
------------------------------------------------------------------------------
Please specify the name of the Dbvisit Database configuration (DDC) file.
The DDC file is a plain text file that contains all the Dbvisit Standby settings.
=>ENV_FILE is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Below are the list of configuration variables provided during the setup process:
Configuration Variable Value Provided
====================== ==============
ORACLE_SID DBPRIMA
ORACLE_HOME /u01/app/oracle/product/12.1.0.2/dbhome_1
SOURCE dbvisit1
ARCHSOURCE /u01/app/archivedbvisit/DBPRIMA
RAC_DR N
USE_SSH N
DESTINATION dbvisit2
NETPORT 7890
DBVISIT_BASE_DR /u01/app/dbvisit
ORACLE_HOME_DR /u01/app/oracle/product/12.1.0.2/dbhome_1
DB_UNIQUE_NAME_DR DBPRIMA
ARCHDEST /u01/app/archivedbvisit/DBPRIMA
ORACLE_SID_DR DBPRIMA
ENV_FILE DBPRIMA
Are these variables correct? <Yes/No> [Yes]:
>>> Dbvisit Database configuration (DDC) file DBPRIMA created.
>>> Dbvisit Database repository (DDR) DBPRIMA created.
Repository Version 8.0
Software Version 8.0
Repository Status VALID
PID:2330
TRACE:dbvisit_install.log
[oracle@dbvisit1 ~]$

The DDC file we created will be used to create the standby database. But before that we must enter the license key, otherwise we will get an error. The following command is used: dbvctl -d DDC -l License_Key

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -l 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 3835)
dbvctl started on dbvisit1: Tue Nov 29 14:12:08 2016
=============================================================
=>Update with license key: 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq? <Yes/No> [Yes]:
>>> Dbvisit Standby License
License Key : 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq
customer_number : 1
dbname :
expiry_date : 2016-12-29
product_id : 8
sequence : 1
status : VALID
updated : YES
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:12:17 2016
=============================================================
[oracle@dbvisit1 ~]$

And then we can proceed with the standby database creation using the command dbvctl -d DDC --csd

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA --csd
>>> Running pre-checks please wait... done
What would you like to do:
1 - Create standby database (and optionally save settings in template)
2 - Help
3 - Terminate processing
Please enter your choice [1]: 1
-------------------------------------------------------------------------------
Answer Yes to transfer the datafiles directly to the standby server (not
recommended for large database or slow networks)
Note tablespaces will be placed in backup mode for direct copy.
Answer No (recommended) to create an RMAN backup of primary datafiles in a
temporary location and transfer the backup to the standby server
=>Do you want to copy database files directly to the standby server? [N]: y
Your input: Y
Is this correct? <Yes/No> [Yes]: yes
The following oracle database parameters will be set in the standby database pfile or spfile:
-------------------------------------------------------------------------------
SID NAME VALUE
* audit_file_dest /u01/app/oracle/admin/DBPRIMA/adump
* compatible 12.1.0.2.0
* control_files /u01/app/oracle/oradata/DBPRIMA/control01.ctl,/u01/app/oracle/fast_recovery_area/DBPRIMA/control02.ctl
* db_block_size 8192
* db_domain
* db_name DBPRIMA
* db_recovery_file_dest /u01/app/oracle/fast_recovery_area
* db_recovery_file_dest_size 10485760000
* db_unique_name DBPRIMA
* diagnostic_dest /u01/app/oracle
* dispatchers (PROTOCOL=TCP) (SERVICE=DBPRIMAXDB)
* log_archive_format %t_%s_%r.dbf
* open_cursors 300
* pga_aggregate_target 304087040
* processes 300
* remote_login_passwordfile EXCLUSIVE
* sga_target 912261120
* spfile OS default
* undo_tablespace UNDOTBS1
-------------------------------------------------------------------------------
What would you like to do:
1 - Proceed with creating the standby database
2 - Edit oracle database parameters for the standby database pfile/spfile
3 - Terminate processing
Please enter your choice [1]: 1
......
=>Create standby database template for DBPRIMA using provided answers? [Y]:
Your input: 1
Is this correct? <Yes/No> [Yes]:
-------------------------------------------------------------------------------
=>Continue with creating a standby database? (If No processing will terminate,
the saved template will be available for future use) [Y]:
Is this correct? <Yes/No> [Yes]:
>>> dbvctl will now run a pre-flight check for standby database creation. An attempt will
be made to create a standby (s)pfile using oracle standby database parameters, followed
by trying to start the standby instance. If this step fails, then please double-check
the following items before re-running dbvctl again:
1) Review the standby database parameters you have supplied and provide valid values
unless a template is used.
2) Recreate the template to provide valid values for standby database parameters if a
template is used.
>>> Running pre-flight check for standby creation, please wait... done
>>> Total database size for DBPRIMA is 1.57GB
>>> Creating standby control file... done
>>> Transferring datafiles from dbvisit1 to dbvisit2...
Transferring /u01/app/oracle/oradata/DBPRIMA/system01.dbf...
Transferring /u01/app/oracle/oradata/DBPRIMA/sysaux01.dbf... done
Transferring /u01/app/oracle/oradata/DBPRIMA/undotbs01.dbf... done
Transferring /u01/app/oracle/oradata/DBPRIMA/users01.dbf... done
>>> Restoring standby control files... done
>>> Starting standby database DBPRIMA on dbvisit2 mount... done
>>> Restoring datafiles on dbvisit2...
>>> Renaming standby redo logs and tempfiles on dbvisit2... done
>>> Performing checkpoint and archiving logs... done
>>> Finishing standby database creation... done
>>> Standby database created.
To complete creating standby database please run dbvctl on the primary server first,
then on the standby server, to ensure the standby database is in sync with the primary
database.
PID:3915
TRACE:3915_dbvctl_csd_DBPRIMA_201611291413.trc

Let’s now run dbvctl -d DDC on the primary server to send all archived logs and on the standby server to apply all archived logs.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4354)
dbvctl started on dbvisit1: Tue Nov 29 14:24:11 2016
=============================================================
>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
>>> Sending heartbeat message... skipped
Performing a log switch...
>>> Transferring Log file(s) from DBPRIMA on dbvisit1 to dbvisit2 for thread 1:
thread 1 sequence 29 (o1_mf_1_29_d3v083x6_.arc)
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:24:16 2016
=============================================================
[oracle@dbvisit1 ~]$


[oracle@dbvisit2 DBPRIMA]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 8665)
dbvctl started on dbvisit2: Tue Nov 29 14:25:07 2016
=============================================================
>>> Sending heartbeat message... skipped
>>> Applying Log file(s) from dbvisit1 to DBPRIMA on dbvisit2:
thread 1 sequence 29 (1_29_926978008.arc)
=============================================================
dbvctl ended on dbvisit2: Tue Nov 29 14:25:09 2016
=============================================================
[oracle@dbvisit2 DBPRIMA]$

We can check the sync status with dbvctl -d DDC -i

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -i
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4420)
dbvctl started on dbvisit1: Tue Nov 29 14:26:11 2016
=============================================================
Dbvisit Standby log gap report for DBPRIMA thread 1 at 201611291426:
-------------------------------------------------------------
Destination database on dbvisit2 is at sequence: 29.
Source database on dbvisit1 is at log sequence: 30.
Source database on dbvisit1 is at archived log sequence: 29.
Dbvisit Standby last transfer log sequence: 29.
Dbvisit Standby last transfer at: 2016-11-29 14:24:16.
Archive log gap for DBPRIMA: 0.
Transfer log gap for DBPRIMA: 0.
Standby database time lag (DAYS-HH:MI:SS): +0:01:37.
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:26:12 2016
=============================================================
[oracle@dbvisit1 ~]$

Now it’s time to import our configuration into the graphical console. For this, let’s use MANAGE HOSTS.

And then add hosts using the option NEW

Fill in the info and click on CREATE NEW HOST.
Then we add all the hosts.

We can now import our standby configuration using the MANAGE CONFIGURATIONS tab.

Using IMPORT

Now we can manage our configuration either by command line using dbvctl or through the graphical console. In the next blog we will talk about how to do a SWITCHOVER and a FAILOVER.

 

Cet article Creating standby database with Dbvisit 8 est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 5 – Generating DDL commands

Yann Neuhaus - Wed, 2016-11-30 11:17

From time to time it is very useful to be able to generate the DDL commands for existing objects (tables, indexes, a whole schema, …). In Oracle you can either use the dbms_metadata PL/SQL package for this or use expdp/impdp to generate the statements out of a dump file. What options do you have in PostgreSQL? Note: we’ll not look at any third-party tools you could use for that, only plain PostgreSQL.

As always we’ll need some objects to test with, so here we go:

\c postgres
drop database if exists ddl;
create database ddl;
\c ddl
create table t1 ( a int, b int );
insert into t1 (a,b)
       values (generate_series(1,1000000)
              ,generate_series(1,1000000));
select count(*) from t1;
create index i1 on t1(a);
create unique index i2 on t1(b);
create view v1 as select a from t1;
alter table t1 add constraint con1 check ( a < 2000000 );
\d t1
CREATE FUNCTION add(integer, integer) RETURNS integer
    AS 'select $1 + $2;'
    LANGUAGE SQL
    IMMUTABLE
    RETURNS NULL ON NULL INPUT;

PostgreSQL comes with a set of administration functions that can be used to query various things; some of them return the definitions of your objects.

You can get the definition of a view:

(postgres@[local]:5439) [ddl] > select pg_get_viewdef('v1'::regclass, true);
 pg_get_viewdef 
----------------
  SELECT t1.a  +
    FROM t1;
(1 row)

You can get the definition of a constraint:

(postgres@[local]:5439) [ddl] > SELECT conname
                                     , pg_get_constraintdef(r.oid, true) as definition
                                  FROM pg_constraint r
                                 WHERE r.conrelid = 't1'::regclass;
 conname |     definition      
---------+---------------------
 con1    | CHECK (a < 2000000)

You can get the definition of a function:

(postgres@[local]:5439) [ddl] > SELECT proname
     , pg_get_functiondef(a.oid)
  FROM pg_proc a
 WHERE a.proname = 'add';
 proname |                   pg_get_functiondef                    
---------+---------------------------------------------------------
 add     | CREATE OR REPLACE FUNCTION public.add(integer, integer)+
         |  RETURNS integer                                       +
         |  LANGUAGE sql                                          +
         |  IMMUTABLE STRICT                                      +
         | AS $function$select $1 + $2;$function$                 +
         | 
--OR
(postgres@[local]:5439) [ddl] > SELECT pg_get_functiondef(to_regproc('add'));
                   pg_get_functiondef                    
---------------------------------------------------------
 CREATE OR REPLACE FUNCTION public.add(integer, integer)+
  RETURNS integer                                       +
  LANGUAGE sql                                          +
  IMMUTABLE STRICT                                      +
 AS $function$select $1 + $2;$function$                 +

You can get the definition of an index:

(postgres@[local]:5439) [ddl] > select pg_get_indexdef('i1'::regclass);
            pg_get_indexdef            
---------------------------------------
 CREATE INDEX i1 ON t1 USING btree (a)
(1 row)

But, surprisingly, you cannot get the DDL for a table: there is simply no function available for this. How can you do it without concatenating the definitions yourself from the PostgreSQL catalog? The only option I am aware of is pg_dump:

postgres@pgbox:/home/postgres/ [PG961] pg_dump -s -t t1 ddl | egrep -v "^--|^$"
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
CREATE TABLE t1 (
    a integer,
    b integer,
    CONSTRAINT con1 CHECK ((a < 2000000))
);
ALTER TABLE t1 OWNER TO postgres;
CREATE INDEX i1 ON t1 USING btree (a);
CREATE UNIQUE INDEX i2 ON t1 USING btree (b);

Using the “-s” (schema only) and “-t” (table) options you get the DDL for the complete table. It is not as handy as in Oracle, where you can do this directly from sqlplus, but it works and produces a result you can work with.
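For comparison, this is roughly what the sqlplus approach looks like on the Oracle side (a minimal sketch; the schema and table names are placeholders, not objects from this post):

-- make sure the CLOB returned by dbms_metadata is not truncated in sqlplus
set long 100000 longchunksize 100000 pagesize 0
-- CREATE TABLE statement for a given table
select dbms_metadata.get_ddl('TABLE', 'T1', 'SCOTT') from dual;
-- DDL of the dependent indexes of the same table
select dbms_metadata.get_dependent_ddl('INDEX', 'T1', 'SCOTT') from dual;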

Of course you can always build the DDL on your own by querying the catalog, e.g. pg_attribute, which holds all the column definitions for the tables (a small sketch follows after the table description below):

    Table "pg_catalog.pg_attribute"
    Column     |   Type    | Modifiers 
---------------+-----------+-----------
 attrelid      | oid       | not null
 attname       | name      | not null
 atttypid      | oid       | not null
 attstattarget | integer   | not null
 attlen        | smallint  | not null
 attnum        | smallint  | not null
 attndims      | integer   | not null
 attcacheoff   | integer   | not null
 atttypmod     | integer   | not null
 attbyval      | boolean   | not null
 attstorage    | "char"    | not null
 attalign      | "char"    | not null
 attnotnull    | boolean   | not null
 atthasdef     | boolean   | not null
 attisdropped  | boolean   | not null
 attislocal    | boolean   | not null
 attinhcount   | integer   | not null
 attcollation  | oid       | not null
 attacl        | aclitem[] | 
 attoptions    | text[]    | 
 attfdwoptions | text[]    | 
Indexes:
    "pg_attribute_relid_attnam_index" UNIQUE, btree (attrelid, attname)
    "pg_attribute_relid_attnum_index" UNIQUE, btree (attrelid, attnum)

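As a minimal sketch (by no means a complete DDL generator), the column list of t1 can be reconstructed from these catalogs like this:

-- rebuild the column list of t1 from the catalog
select a.attname
     , format_type(a.atttypid, a.atttypmod) as data_type
     , case when a.attnotnull then 'not null' else '' end as nullable
  from pg_attribute a
  join pg_class c on c.oid = a.attrelid
 where c.relname = 't1'
   and a.attnum > 0          -- skip system columns
   and not a.attisdropped    -- skip dropped columns
 order by a.attnum;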
One nasty way, which is even documented on the PostgreSQL wiki, is this:

(postgres@[local]:5439) [ddl] > create extension plperlu;
CREATE EXTENSION
Time: 90.074 ms
(postgres@[local]:5439) [ddl] > \dx
                      List of installed extensions
  Name   | Version |   Schema   |              Description               
---------+---------+------------+----------------------------------------
 plperlu | 1.0     | pg_catalog | PL/PerlU untrusted procedural language
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language

(postgres@[local]:5439) [ddl] > CREATE OR REPLACE FUNCTION system(text) RETURNS text 
AS 'my $cmd=shift; return `cd /tmp;$cmd`;' LANGUAGE plperlu;
CREATE FUNCTION

(postgres@[local]:5439) [ddl] > select system('pg_dump -s -t t1 ddl | egrep -v "^--|^$"');
                    system                     
-----------------------------------------------
 SET statement_timeout = 0;                   +
 SET lock_timeout = 0;                        +
 SET idle_in_transaction_session_timeout = 0; +
 SET client_encoding = 'UTF8';                +
 SET standard_conforming_strings = on;        +
 SET check_function_bodies = false;           +
 SET client_min_messages = warning;           +
 SET row_security = off;                      +
 SET search_path = public, pg_catalog;        +
 SET default_tablespace = '';                 +
 SET default_with_oids = false;               +
 CREATE TABLE t1 (                            +
     a integer,                               +
     b integer,                               +
     CONSTRAINT con1 CHECK ((a < 2000000))    +
 );                                           +
 ALTER TABLE t1 OWNER TO postgres;            +
 CREATE INDEX i1 ON t1 USING btree (a);       +
 CREATE UNIQUE INDEX i2 ON t1 USING btree (b);+
 

This can be a workaround. Hope this helps…

 

The article Can I do it with PostgreSQL? – 5 – Generating DDL commands appeared first on Blog dbi services.

Fishbowl Solutions Webinar: Quality, Safety, and Knowledge Management with Oracle WebCenter Content and ControlCenter

WebCenter Team - Wed, 2016-11-30 08:32
Topic:
Quality, Safety, and Knowledge Management with Oracle WebCenter Content and ControlCenter
Description:
Join Ryan Companies Vice President of Construction Operations, Mike Ernst, and Fishbowl Solutions Product Manager, Kim Negaard, to learn how Ryan Companies, a leading national construction firm, found knowledge management success with ControlCenter for Oracle WebCenter Content.

In this webinar, you'll hear first-hand how ControlCenter has been implemented as part of Ryan’s Integrated Project Delivery Process, helping them create a robust knowledge management system that promotes consistent and effective operations across multiple regional offices. You’ll also learn how ControlCenter's intuitive, modern user experience enabled Ryan to easily find documents across devices, implement recurring review cycles, and control both company-wide and project-specific documents throughout their lifecycle.

Register today.

New to Zoom? Go to https://www.zoom.us/test to be sure you will be able to access the webinar.
Time:

Dec 8, 2016 12:00 PM in (GMT-6:00) Central Time (US and Canada)


Oracle Data Guard Broker and Static Service Registration (Doc ID 1387859.1)

Michael Dinh - Wed, 2016-11-30 07:37

This is a quick and dirty note to self about Doc ID 1387859.1, which may be incorrect for a Single Instance Database with Oracle Restart.

Please share if you have a configuration for a Single Instance Database with Oracle Restart using Data Guard Broker and a listener registered from the Grid Infrastructure home – TIA.

Oracle Data Guard Broker and Static Service Registration (Doc ID 1387859.1)	

+++ DO NOT BELIEVE THIS TO BE CORRECT - FAILED FOR TEST CASE.

Single Instance Database with Oracle Restart

Here there is no cluster, but clusterware has been installed to enable the Oracle Restart feature. 

The local listener LISTENER has its LISTENER.ORA located in the /network/admin directory of the Oracle Grid Infrastructure home. 

In this case the static service registration is:
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
     (GLOBAL_DBNAME=db_unique_name_DGMGRL.db_domain)
     (ORACLE_HOME=oracle_home)
     (SID_NAME=sid_name)
    )
  )

As with Single Instance databases, the SID_NAME value sid_name will default to the db_unique_name.
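For illustration, this is what such a static entry would look like in the listener.ora of the LISTENER11G listener used in the demo below (the values are assumptions derived from that test environment, not taken from the note):

SID_LIST_LISTENER11G =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = HAWKB_DGMGRL.localdomain)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0.4/db_1)
      (SID_NAME = HAWKB)
    )
  )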

+++ SUCCESS
Data Guard & Oracle Restart in 11gR2 https://uhesse.com/2010/09/06/data-guard-oracle-restart-in-11gr2/

DEMO: Listener configured from DB Home and NOT GI Home which failed.

oracle@arrow2:HAWKB:/u01/app/oracle/diag/rdbms/hawkb/HAWKB/trace
$ ll
total 0
-rw-r-----. 1 oracle oinstall 0 Nov 30 05:13 alert_HAWKB.log
-rw-r-----. 1 oracle oinstall 0 Nov 30 05:13 drcHAWKB.log
oracle@arrow2:HAWKB:/u01/app/oracle/diag/rdbms/hawkb/HAWKB/trace
$ srvctl config listener -l listener11g
Name: LISTENER11G
Home: /u01/app/oracle/product/11.2.0.4/db_1
End points: TCP:1551

oracle@arrow2:HAWKB:/u01/app/oracle/diag/rdbms/hawkb/HAWKB/trace
$ lsnrctl status listener11g

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 30-NOV-2016 05:14:12

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1551)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER11G
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                30-NOV-2016 05:03:54
Uptime                    0 days 0 hr. 10 min. 17 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0.4/db_1/network/admin/listener.ora
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1551)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow2.localdomain)(PORT=1551)))
Services Summary...
Service "HAWKB.localdomain" has 2 instance(s).
  Instance "HAWKB", status UNKNOWN, has 1 handler(s) for this service...
  Instance "HAWKB", status READY, has 1 handler(s) for this service...
Service "HAWKB_DGB.localdomain" has 1 instance(s).
  Instance "HAWKB", status READY, has 1 handler(s) for this service...
Service "HAWKB_DGMGRL.localdomain" has 1 instance(s).
  Instance "HAWKB", status UNKNOWN, has 1 handler(s) for this service...
Service "HAWKB_SVC.localdomain" has 1 instance(s).
  Instance "HAWKB", status READY, has 1 handler(s) for this service...
Service "HAWK_SVC.localdomain" has 1 instance(s).
  Instance "HAWKB", status READY, has 1 handler(s) for this service...
The command completed successfully

oracle@arrow2:HAWKB:/u01/app/oracle/diag/rdbms/hawkb/HAWKB/trace
$ crs_stat
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER11G.lsnr
               ONLINE  ONLINE       arrow2                   STABLE
ora.ons
               OFFLINE OFFLINE      arrow2                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       arrow2                   STABLE
ora.hawkb.db
      1        ONLINE  ONLINE       arrow2                   Open,STABLE
ora.hawkb.hawk_svc.svc
      1        ONLINE  ONLINE       arrow2                   STABLE
--------------------------------------------------------------------------------

oracle@arrow2:HAWKB:/u01/app/oracle/diag/rdbms/hawkb/HAWKB/trace
$ dgmgrl sys/oracle@hawka
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawkb - Primary database
    hawka - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database hawkb

Database - hawkb

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    HAWKB

Database Status:
SUCCESS

DGMGRL> show database hawka

Database - hawka

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      0 Byte/s
  Real Time Query: ON
  Instance(s):
    HAWKA

Database Status:
SUCCESS

DGMGRL> switchover to hawka
Performing switchover NOW, please wait...
New primary database "hawka" is opening...
Operation requires startup of instance "HAWKB" on database "hawkb"
Starting instance "HAWKB"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "hawka"
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawka - Primary database
    hawkb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database hawka

Database - hawka

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    HAWKA

Database Status:
SUCCESS

DGMGRL> show database hawkb

Database - hawkb

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       (unknown)
  Apply Rate:      (unknown)
  Real Time Query: ON
  Instance(s):
    HAWKB

Database Status:
SUCCESS

DGMGRL> exit

oracle@arrow2:HAWKB:/u01/app/oracle/diag/rdbms/hawkb/HAWKB/trace
$ crs_stat
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER11G.lsnr
               ONLINE  ONLINE       arrow2                   STABLE
ora.ons
               OFFLINE OFFLINE      arrow2                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       arrow2                   STABLE
ora.hawkb.db
      1        ONLINE  ONLINE       arrow2                   Open,Readonly,STABLE
ora.hawkb.hawk_svc.svc
      1        ONLINE  ONLINE       arrow2                   STABLE
--------------------------------------------------------------------------------
oracle@arrow2:HAWKB:/u01/app/oracle/diag/rdbms/hawkb/HAWKB/trace
$

++++++++++

oracle@arrow1:HAWKA:/u01/app/oracle/diag/rdbms/hawka/HAWKA/trace
$ ll
total 0
-rw-r--r--. 1 oracle oinstall 0 Nov 30 05:12 alert_HAWKA.log
-rw-r-----. 1 oracle oinstall 0 Nov 30 05:12 drcHAWKA.log

oracle@arrow1:HAWKA:/u01/app/oracle/diag/rdbms/hawka/HAWKA/trace
$ srvctl config listener -l listener11g
Name: LISTENER11G
Home: /u01/app/oracle/product/11.2.0.4/db_1
End points: TCP:1551

oracle@arrow1:HAWKA:/u01/app/oracle/diag/rdbms/hawka/HAWKA/trace
$ lsnrctl status listener11g

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 30-NOV-2016 05:14:39

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1551)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER11G
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                30-NOV-2016 05:04:43
Uptime                    0 days 0 hr. 9 min. 55 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0.4/db_1/network/admin/listener.ora
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1551)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow1.localdomain)(PORT=1551)))
Services Summary...
Service "HAWKA.localdomain" has 2 instance(s).
  Instance "HAWKA", status UNKNOWN, has 1 handler(s) for this service...
  Instance "HAWKA", status READY, has 1 handler(s) for this service...
Service "HAWKA_DGB.localdomain" has 1 instance(s).
  Instance "HAWKA", status READY, has 1 handler(s) for this service...
Service "HAWKA_DGMGRL.localdomain" has 1 instance(s).
  Instance "HAWKA", status UNKNOWN, has 1 handler(s) for this service...
Service "HAWKA_SVC.localdomain" has 1 instance(s).
  Instance "HAWKA", status READY, has 1 handler(s) for this service...
Service "HAWK_SVC.localdomain" has 1 instance(s).
  Instance "HAWKA", status READY, has 1 handler(s) for this service...
The command completed successfully

oracle@arrow1:HAWKA:/u01/app/oracle/diag/rdbms/hawka/HAWKA/trace
$ crs_stat
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER11G.lsnr
               ONLINE  ONLINE       arrow1                   STABLE
ora.ons
               OFFLINE OFFLINE      arrow1                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       arrow1                   STABLE
ora.hawka.db
      1        ONLINE  ONLINE       arrow1                   Open,Readonly,STABLE
ora.hawka.hawk_svc.svc
      1        ONLINE  ONLINE       arrow1                   STABLE
--------------------------------------------------------------------------------
oracle@arrow1:HAWKA:/u01/app/oracle/diag/rdbms/hawka/HAWKA/trace
$ crs_stat
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER11G.lsnr
               ONLINE  ONLINE       arrow1                   STABLE
ora.ons
               OFFLINE OFFLINE      arrow1                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       arrow1                   STABLE
ora.hawka.db
      1        ONLINE  ONLINE       arrow1                   Open,STABLE
ora.hawka.hawk_svc.svc
      1        ONLINE  ONLINE       arrow1                   STABLE
--------------------------------------------------------------------------------
oracle@arrow1:HAWKA:/u01/app/oracle/diag/rdbms/hawka/HAWKA/trace
$


Oracle 12c DataGuard – Insufficient SRLs reported by DGMGRL VALIDATE DATABASE VERBOSE

Yann Neuhaus - Wed, 2016-11-30 07:03

I have set up a Data Guard environment and followed the instructions from Oracle to create the Standby Redo Logs. The Standby Redo Logs have to be the same size as the Online Redo Logs (if not, the RFS process won't attach to them), and you should have at least one more Standby Redo Log group than Online Redo Log groups per thread.
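A quick way to check both rules on the primary and the standby is to count the groups per thread and size in the standard views (just a sketch):

-- online redo log groups per thread and size
select thread#, bytes, count(*) as cnt from v$log group by thread#, bytes;
-- standby redo log groups per thread and size (expect one group more per thread)
select thread#, bytes, count(*) as cnt from v$standby_log group by thread#, bytes;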

For my single instance this should be quite straightforward, so I issued the following commands on the primary and the standby.

alter database add standby logfile group 4 size 1073741824;
alter database add standby logfile group 5 size 1073741824;
alter database add standby logfile group 6 size 1073741824;
alter database add standby logfile group 7 size 1073741824;

After setting everything up, I started the cool new Broker command “DGMGRL> VALIDATE DATABASE VERBOSE ‘<DB>';” and was surprised to find that the validation complains that I have insufficient Standby Redo Logs.

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE2)          (OCM12C_SITE1)
    1         3                       3                       Insufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE1)          (OCM12C_SITE2)
    1         3                       3                       Insufficient SRLs

After checking everything on the primary and the standby, the number of log groups and their sizes looked OK: I have 3 Online Redo Log groups of 1G each and 4 Standby Redo Log groups of 1G each.

-- Standby

SQL> select thread#, group#, sequence#, status, bytes from v$log;

   THREAD#     GROUP#  SEQUENCE# STATUS                BYTES
---------- ---------- ---------- ---------------- ----------
         1          1          0 UNUSED           1073741824
         1          3          0 UNUSED           1073741824
         1          2          0 UNUSED           1073741824

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4          0 UNASSIGNED 1073741824
         1          5        552 ACTIVE     1073741824
         1          6          0 UNASSIGNED 1073741824
         0          7          0 UNASSIGNED 1073741824

-- Primary

SQL> select thread#, group#, sequence#, status, bytes from v$log;

   THREAD#     GROUP#  SEQUENCE# STATUS                BYTES
---------- ---------- ---------- ---------------- ----------
         1          1        550 INACTIVE         1073741824
         1          2        551 INACTIVE         1073741824
         1          3        552 CURRENT          1073741824

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4          0 UNASSIGNED 1073741824
         1          5          0 UNASSIGNED 1073741824
         1          6          0 UNASSIGNED 1073741824
         0          7          0 UNASSIGNED 1073741824

 

The only strange thing is that Standby Redo Log group 7 shows up with thread 0 instead of thread 1.
I did not even know that a thread 0 exists. Thread numbering always starts with 1, and in case of RAC you might see thread 2, 3 or more. But if you want to, you can create a standby log on thread 0 without any error; for what reason that is allowed, I don't know.

SQL> alter database add standby logfile thread 0 group 8 size 1073741824;

Database altered.

OK. Let's correct the thread 0 issue, and then let's see what “DGMGRL> VALIDATE DATABASE VERBOSE ‘<DB>';” shows.

-- On Standby
		 
DGMGRL> EDIT DATABASE 'OCM12C_SITE1' SET STATE = 'APPLY-OFF';
Succeeded.

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database add standby logfile thread 1 group 7 size 1073741824;

Database altered.

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4        553 ACTIVE     1073741824
         1          5          0 UNASSIGNED 1073741824
         1          6          0 UNASSIGNED 1073741824
         1          7          0 UNASSIGNED 1073741824
		 
DGMGRL> EDIT DATABASE 'OCM12C_SITE1' SET STATE = 'APPLY-ON';
Succeeded.
		 
-- On Primary

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database add standby logfile thread 1 group 7 size 1073741824;

Database altered.

And here we go. Now I have sufficient Standby Redo Logs.

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE2)          (OCM12C_SITE1)
    1         3                       4                       Sufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE1)          (OCM12C_SITE2)
    1         3                       4                       Sufficient SRLs

 

Conclusion

Even on a single instance, always specify the thread number when you create your Standby Redo Logs:

alter database add standby logfile thread 1 group 4 size 1073741824;
alter database add standby logfile thread 1 group 5 size 1073741824;
alter database add standby logfile thread 1 group 6 size 1073741824;
alter database add standby logfile thread 1 group 7 size 1073741824;
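Afterwards, a quick sanity check (just a sketch) confirms that no standby group is left on thread 0:

-- should return no rows once every standby redo log group belongs to a real thread
select group#, thread#, bytes from v$standby_log where thread# = 0;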

Cheers,
William

 

The article Oracle 12c DataGuard – Insufficient SRLs reported by DGMGRL VALIDATE DATABASE VERBOSE appeared first on Blog dbi services.

Database Resource Manager, v$active_session_history p1text=location p2text=consumer group id

Tom Kyte - Wed, 2016-11-30 02:46
Hi Tom, Wish you all the best. 1] I have a 12c database 2] Not created as a CDB SQL> select cdb from v$database; CDB --- NO 3] No resource manager is configured SQL> SHOW PARAMETER RESOURCE_MANAGER_PLAN NAME ...
Categories: DBA Blogs

Combining WITH clauses

Tom Kyte - Wed, 2016-11-30 02:46
I've tried to combine a SQL statement defined in a WITH clause with an inline PL/SQL function in 12c: with function is_number(p_str varchar2) return number is l_number number; begin l_number := to_number(p_str); return l_number; exception when v...
Categories: DBA Blogs
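For reference, the 12c inline-function syntax the question refers to looks roughly like this (a sketch completing the truncated snippet above; the exception handler and the final query are assumptions):

with
  function is_number(p_str varchar2) return number is
    l_number number;
  begin
    l_number := to_number(p_str);
    return l_number;
  exception
    when value_error then
      return null;
  end;
select is_number('42') as is_num, is_number('abc') as not_a_num
  from dual
/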
