Yann Neuhaus

dbi services technical blog
Updated: 11 hours 38 min ago

UKOUG 2016 DAY 4

12 hours 34 min ago


Today is the last day at UKOUG in Birmingham; the first session I attended this morning was presented by Julian Dyke about installing and upgrading Oracle 12c release 2 Grid infrastructure and RAC.

He had the opportunity to test the installation and upgrade phases during five days at Oracle last spring. The following tests were done:

single instance: install 12.2.0.1, create database with dbca, upgrade 12.1.0.2 to 12.2.0.1 with dbua

RAC: install 12.2.0.1 grid infrastructure, install 12.2.0.1 RDBMS software, create ASM disk groups (ASMCA), create 12.2.0.1 RAC database (DBCA), upgrade 12.1.0.2 Grid infrastructure to 12.2.0.1 (gridSetup.sh), upgrade 12.1.0.2 RAC database to 12.2.0.1.

He showed us the main screenshots describing the installation phases and told us that they did not encounter many problems during their installation or upgrade phases. Before upgrading the Grid Infrastructure, it is important to run the CVU connected as the grid user, for example:

runcluvfy.sh -src_crshome=/u00/app/12.1.0.2 -dest_crshome=/u00/app/12.2.0.1
 -dest_version=12.2.0.1 -fixupnoexec

Afterwards, you have the possibility to resolve any detected issues using the generated fixup script.

In his opinion, DBUA is sufficiently robust for most upgrades, especially when the upgrade concerns non-critical databases, databases with fast recovery times, or databases on virtual machines. He also mentioned that Oracle still recommends using scripts for upgrades of large or business-critical databases.

He encountered some issues during the upgrade phase for Grid Infrastructure, in particular with the memory_target parameter setting: because the ASM and GIMR instances use more memory than in 12.1.0.2, he received the classical ORA-00845 error message. He also encountered problems with invalid objects and had to extend the root file system of his virtual machine.

Then I attended Franck Pachot’s session about Statistics Gathering, Best Practices and the Statistics Advisor:


His session presented his findings and recommendations about how to gather statistics, with a lot of technical demonstrations done on the Cloud. Many cases were shown, for example volatile tables, preferences for partitioned tables, and index statistics gathering.

He showed us the Oracle 12c Release 2 Statistics Advisor, which might be a useful tool; I will check whether it is available in Enterprise Manager 13.2.

He finished his session by giving us his own recommendations: use the automatic job for most of the tables, customize the statistics gathering for volatile tables, gather statistics for tables that you load, and, importantly, customize the maintenance window for the statistics gathering job.
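As a hedged illustration of those recommendations (the schema, table and window names below are placeholders, not taken from the session), such customization with DBMS_STATS and DBMS_SCHEDULER could look like this:

-- volatile table: gather statistics once when the content is representative, then lock them
exec dbms_stats.gather_table_stats('APP','VOLATILE_TABLE');
exec dbms_stats.lock_table_stats('APP','VOLATILE_TABLE');
-- customize a per-table preference instead of changing the global defaults
exec dbms_stats.set_table_prefs('APP','BIG_TABLE','STALE_PERCENT','5');
-- extend a maintenance window so the automatic job has time to complete
exec dbms_scheduler.set_attribute('SYS.MONDAY_WINDOW','DURATION',numtodsinterval(6,'hour'));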

Finally I wanted to attend the OEM round table, but unfortunately the session was cancelled :=((


Well, this was a very interesting week with a lot of exchanges and shared experiences with other Oracle DBAs. I hope to come back to UKOUG next year!

 

 

 


Oracle 12cR2: Statistics Advisor

17 hours 15 min ago

Today at #ukoug_tech16 12:30 in hall 11A I’ll talk about Statistics Gathering Best Practice & 12cR2 Statistics Advisor
Rather than taking the Optimizer Statistics Advisor rules one by one, I’ll show the things to take care of (some people may call them best practices) when gathering statistics, and mention the corresponding Statistics Advisor rule for each.
If you need a reference about all rules, you can get it from V$STATS_ADVISOR_RULES:

09:41:19 SQL> select to_char(rule_id,99)||' '||description||' ('||name||')' from V$STATS_ADVISOR_RULES where rule_id>0 order by rule_id;
TO_CHAR(RULE_ID,99)||''||DESCRIPTION||'('||NAME||')'
----------------------------------------------------------------------------------------------------

I’ll explain them briefly here.

You should always enable the automatic statistics gathering job. You may manage special cases manually, but do not disable it.
1 Use Auto Job for Statistics Collection (UseAutoJob)
For sure if the job does not complete successfully, the advisor detects the probable cause.
2 Auto Statistics Gather Job should complete successfully (CompleteAutoJob)
Statistics history may save your life in case of a regression. But be sure the retention is not too large and purge occurs or SYSAUX will grow
3 Maintain Statistics History (MaintainStatsHistory)
The faster the statistics gathering runs, the more statistics you can gather. Use all your server resources for it.
4 Use Concurrent preference for Statistics Collection (UseConcurrent)
Default options for global preferences are what the optimizer developers think are the best for most cases.
5 Use Default Preference for Stats Collection (UseDefaultPreference)
Humm.. this one was introduced before the decision not to activate SPD by default
(see http://blog.dbi-services.com/oracle-12cr2-optimizer-adaptive-statistics/)
6 SQL Plan Directives should not be disabled (TurnOnSQLPlanDirective)
Setting statistics manually may be used as a workaround but not the general case
7 Avoid Set Statistics Procedures (AvoidSetProcedures)
When you run dbms_stats.gather_…_stats manually, default options are what the optimizer developers think are the best for most cases.
8 Use Default Parameters in Statistics Collection Procedures (UseDefaultParams)
And in those cases, better to run it for a schema so that you are sure to include newly created tables
9 Use gather_schema_stats procedure (UseGatherSchemaStats)
You waste time and resources if you gather statistics in addition to what is done with online statistics gathering
10 Avoid inefficient statistics operation sequences (AvoidInefficientStatsOprSeq)
You waste time and resources if you gather statistics when nothing has changed
11 Avoid unnecessary statistics collection (AvoidUnnecessaryStatsCollection)
You need statistics for all tables
12 Avoid objects with stale or no statistics (AvoidStaleStats)
Statistics gathered before bulk inserts will be immediately stale
13 Do not gather statistics right before bulk DML (GatherStatsAfterBulkDML)
You don’t want the automatic statistics gathering to run on a table between a truncate and an insert
14 Statistics for objects with volatile data should be locked (LockVolatileTable)
But let it run for tables with no massive change
15 Statistics for objects with non-volatile should not be locked (UnlockNonVolatileTable)
16 Statistics of dependent objects should be consistent (MaintainStatsConsistency)

Better truncate, make indexes unusable, and insert /*+ append */ than drop and recreate the table (which removes statistics).
17 Avoid drop and recreate object sequences (AvoidDropRecreate)
Statistics advisor may detect when incremental statistics gathering is more efficient for partitioned tables
18 Statistics should be maintained incrementally when it is beneficial (UseIncremental)
19 Statistics should not be maintained incrementally when it is not beneficial (NotUseIncremental)

Stale statistics may lead to under-estimation because of linear decay
20 Avoid Out of Range Histogram endpoints (AvoidOutOfRange)
Large tables can be scanned in parallel, recommendation is default degree
21 Use Auto Degree for statistics collection (UseAutoDegree)
As we have seen about global preferences, table preference should be default for most cases (rolling invalidation, auto sample size, auto histogram size)
22 Use Default Object Preference for statistics collection (UseDefaultObjectPreference)
And for sure dbms_stats is the way to gather statistics for the optimizer. ANALYZE is deprecated for that since 9i
23 Avoid using analyze table commands for statistics collection (AvoidAnalyzeTable)
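Several of these rules map to global or per-table preferences, or to the statistics history retention. A quick (hedged) way to check the current settings, for example:

SQL> select dbms_stats.get_prefs('CONCURRENT') from dual;
SQL> select dbms_stats.get_stats_history_retention from dual;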

Those are only my interpretations. 12.2 is new (and cloud first) and I’ve not observed all those recommendations yet. But they are properly described by the advisor.
This is the kind of output we can get:

----------------------------------------------------------------------------------------------------
GENERAL INFORMATION
-------------------------------------------------------------------------------
 
Task Name : MY_TASK
Execution Name : EXEC_52
Created : 12-07-16 11:31:40
Last Modified : 12-07-16 11:32:37
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_52 of task MY_TASK, the Statistics Advisor has 6
finding(s). The findings are related to the following rules: USECONCURRENT,
AVOIDSETPROCEDURES, USEDEFAULTPARAMS, USEGATHERSCHEMASTATS, AVOIDSTALESTATS,
UNLOCKNONVOLATILETABLE. Please refer to the finding section for detailed
information.
-------------------------------------------------------------------------------
FINDINGS
-------------------------------------------------------------------------------
Rule Name: UseConcurrent
Rule Description: Use Concurrent preference for Statistics Collection
Finding: The CONCURRENT preference is not used.
 
Recommendation: Set the CONCURRENT preference.
Example:
dbms_stats.set_global_prefs('CONCURRENT', 'ALL');
Rationale: The system's condition satisfies the use of concurrent statistics
gathering. Using CONCURRENT increases the efficiency of statistics
gathering.
----------------------------------------------------
...
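For reference, such a report can be produced with the 12.2 DBMS_STATS advisor task API – a minimal sketch (the task name matches the output above; all other parameters are left at their defaults):

SQL> variable t varchar2(128)
SQL> variable e varchar2(128)
SQL> exec :t := dbms_stats.create_advisor_task('MY_TASK');
SQL> exec :e := dbms_stats.execute_advisor_task('MY_TASK');
SQL> select dbms_stats.report_advisor_task('MY_TASK') from dual;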

 


OEL 7 – How to disable IPv6 on Oracle Linux 7

19 hours 28 min ago

In case you are not interested in IPv6, you can use the following HowTo to disable it on Oracle Linux 7. Unless you have something very, very special on your system, these 10 steps should do it.

  1. First of all, check if IPv6 is active at all
  2. Add the disable_ipv6 = 1 entries to the /etc/sysctl.conf file
  3. Disable IPv6 in all /etc/sysconfig/network-scripts/ifcfg-* files, e.g.
  4. Disable IPv6 in /etc/sysconfig/network
  5. Remove the "::1" line from the /etc/hosts file
  6. Remove the "restrict -6" line from the /etc/ntp.conf
  7. Add ipv6.disable=1 to the GRUB_CMDLINE_LINUX entry in the /etc/default/grub file
  8. Regenerate a GRUB configuration file and overwrite the existing one
  9. Reboot the server
  10. Confirm if IPV6 is disabled

 

First of all, check if IPv6 is active at all

[root@dbidg01 ~]# /sbin/ip -6 addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 fe80::ad02:9b6a:bf40:5a3a/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 fe80::a00:27ff:feb8:3544/64 scope link
       valid_lft forever preferred_lft forever

 

Add the disable_ipv6 = 1 entries to the /etc/sysctl.conf file

#-- Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

 

Disable IPv6 in all /etc/sysconfig/network-scripts/ifcfg-* files, e.g.

cat /etc/sysconfig/network-scripts/ifcfg-enp0s3 | grep IPV6INIT
IPV6INIT=no

 

Disable IPv6 in /etc/sysconfig/network

cat /etc/sysconfig/network | grep NETWORKING_IPV6
NETWORKING_IPV6=no

 

Remove the following line from the /etc/hosts file

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

 

Remove the following line from the /etc/ntp.conf

cat /etc/ntp.conf | egrep ' -6'
restrict -6 default kod nomodify notrap nopeer noquery
restrict -6 ::1

 

Add ipv6.disable=1 to the GRUB_CMDLINE_LINUX entry in the /etc/default/grub file

[root@dbidg01 /]# cat /etc/default/grub | grep GRUB_CMDLINE_LINUX
GRUB_CMDLINE_LINUX="ipv6.disable=1 crashkernel=auto rd.lvm.lv=ol/root rd.lvm.lv=ol/swap rhgb quiet numa=off transparent_hugepage=never"

 

Regenerate a GRUB configuration file and overwrite the existing one

[root@dbidg01 /]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.1.12-61.1.19.el7uek.x86_64
Found initrd image: /boot/initramfs-4.1.12-61.1.19.el7uek.x86_64.img
Found linux image: /boot/vmlinuz-4.1.12-61.1.18.el7uek.x86_64
Found initrd image: /boot/initramfs-4.1.12-61.1.18.el7uek.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-514.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-514.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-547c48bd53614a2ca2d16909b3c14419
Found initrd image: /boot/initramfs-0-rescue-547c48bd53614a2ca2d16909b3c14419.img
done

 

Reboot the server

init 6

 

Confirm if IPV6 is disabled

[root@dbidg01 ~]# /sbin/ip -6 addr
[root@dbidg01 ~]# lsmod | grep -i v6

 

If the ip and the lsmod commands do not return anything, then you have successfully disabled IPv6.

Cheers, William

 

 

 

 


UKOUG 2016 – Third day

Tue, 2016-12-06 11:41

Birmingham

Today, it’s the third day in Birmingham for the UKOUG Tech16 event. We had a good time yesterday evening in English pubs.

Today, I attended several sessions. The sessions I was most interested in were “Application Express 5.1 New Features” part 1 and part 2.

The 1st session was presented by David Peake from Oracle. He presented the main new features that will come with APEX 5.1 and demonstrated new capabilities that improve developer productivity. In APEX 5.1, we will be able to arrange the page designer as we want, customising the tab order and displaying the page designer in different pane layouts and page renderings. He also presented the Interactive Grid and quickly created a master-detail-detail-detail view. The number of detail levels is unlimited, but he strongly advises to carefully minimise it.

The 2nd session, APEX 5.1 part 2, was presented by Patrick Wolf. He concentrated his session on the improvements made to the Universal Theme that was already rolled out with APEX 5.0 and that continues to be improved in APEX 5.1. In my opinion, the important information is the upgrade of the Universal Theme for any existing 5.0 application: you will have to refresh the theme in order to use the improvements made in APEX 5.1. This is done by going to the theme in the shared components and clicking on “refresh theme”, which upgrades the already used Universal Theme with the 5.1 capabilities. There are lots of new capabilities and I will wait until the final release date to do some tests on my side.

Another session I followed was “APEX, Meet the Rest of the Web – Modern Web Technologies in Your APEX Application”. It was good to learn how to create a Google-like search with auto-complete support depending on the value entered in the search field. The presenter also showed us how to quickly integrate Google Charts using the APEX JSON packages, how to integrate Google Maps, and how to call the Facebook and Twitter APIs in our application in order to follow tweets and so on. It’s quite easy to integrate modern web technologies in any APEX application.

See you tomorrow for the last day in Birmingham.

 


UKOUG 2016 DAY 3

Tue, 2016-12-06 11:25


Today at UKOUG 2016, the Cloud has won against the sun :=)

The first session I attended this morning was presented by Kamil Stawiarski from the ORA-600 company: “Securing the Database against Unauthorized Attacks”, but the real title was “Oracle Hacking Session”.

The session was amazing, as usual with Kamil: no slides, only technical demos :=))

He first showed us that, after creating a standard user in an Oracle database with the classical privileges CONNECT, RESOURCE and CREATE ANY INDEX, and using a simple function he created, the standard user could obtain the DBA privilege.

The second demonstration was about DirtyCow (a computer vulnerability under Linux that allows remote execution of non-privileged code to achieve remote root access on a computer). He showed us how easy it is to get connected as root under Linux.

In the last demo he showed us how it is possible to read the data of a particular table directly from the data file, only by using one of his C programs and the data_object_id of the table.
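As a hedged illustration of the inputs such a demo needs (the owner and table names below are placeholders), the data_object_id and the datafile(s) of a table can be found with:

SQL> select data_object_id from dba_objects
     where owner='SCOTT' and object_name='EMP';
SQL> select distinct f.file_name
     from dba_extents e join dba_data_files f on f.file_id = e.file_id
     where e.owner='SCOTT' and e.segment_name='EMP';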

He finished his session by asking why so much money is wasted on protecting data, and whether it would not be more intelligent to spend less money and write correct applications with correct privileges.

The second session was more corporate: Oracle Database 12cR2, the overview, by Dominic Giles from Oracle. He told us about Oracle 12cR2 on the cloud – what is available now: Exadata Express Cloud Server and Database Cloud Service; coming soon: Exadata Cloud Machine.

Then he talked about the new features of Oracle database 12cR2:

Performance: the main idea for 12cR2 is to go faster. He gave us some examples: a high compression rate for indexes (subject to a licensing option of course), which might result in I/O improvements and significant space savings.

Security: Oracle 12cR2 introduces online encryption of existing data files. There is also the possibility of full encryption of internal database structures such as SYSTEM, SYSAUX or UNDO, as well as a Database Vault simulation mode which defines and tests security protection profiles through the application lifecycle.

Developers: AL32UTF8 is the default character set for databases. Object names for tables or columns can now be up to 128 bytes long.

Manageability: the number of PDBs per container increased from 252 to 4096. PDBs are optimized for RAC. Interestingly, it will be possible to perform PDB hot clones, PDB refreshes and PDB relocations without downtime.

Availability: a lot of improvements for RAC: RAC reader nodes, ASM flex disk groups, Autonomous Health Framework (identifies issues and notifies with corrective actions). For Active Data Guard, diagnostic tuning and the SQL plan advisor will be available on the standby side, there is no user disconnection on failover, and there is high-speed block comparison between the primary and standby database. And finally there will be the possibility to use SSL redo transport to be more secure.

Finally, I attended the last session of the day, but one of the most lively, essentially because of the speaker’s talent and of course the subject: upgrade to the next generation of Oracle Database, live and uncensored!

He told us about the different ways to upgrade to 12.1.0.2 or 12.2.0.1, covering subjects like extended support, direct upgrade and DBUA.

A new upgrade tool is available: preupgrade.jar executes checks in the source environment, generates detailed recommendations, also generates fixup scripts and, last but not least, is rerunnable :=))

He showed us that the upgrade process is faster and has less downtime, and that we have the possibility to run the database upgrade in parallel (by using catctl.pl with the -n 8 option, for example). It deals with both non-CDBs and CDBs. During his upgrade from 11.2.0.4 to 12.1.0.2 he interrupted the upgrade process by typing CTRL-C … and he proved that the upgrade process is rerunnable by running catctl.pl with the -R option :=)

He is not a great fan of DBUA for multiple reasons: for him it is hard to debug, the parallel option defaults to cpu_count, the progress bar is unpredictable and sometimes we have to wait a long time without knowing what is happening in the source database, and we have to be careful with datapatch in version 12.1. For me the only advantage is the automatic timezone upgrade when using DBUA.

Well this was another exciting day at UKOUG 2016, tomorrow is the last day with other interesting sessions and an OEM round table :=)

 


From MySQL (Oracle) to Postgres using the EDB Migration Toolkit

Tue, 2016-12-06 11:24

Why should you migrate?
If your current MySQL database does not offer some functionalities needed by your business, such as:
– more security
– more high availability options (hot standby)
– Strong Data Warehouse capabilities
If you want to consolidate the number of different instances (Postgres, MySQL, MS-SQL,…)
If you want to reduce administrative costs by using fewer database platforms
Which tool should you use?
The Migration Toolkit command-line tool from EnterpriseDB, which can be found below:
http://www.enterprisedb.com/products-services-training/products-overview/postgres-plus-solution-pack/migration-toolkit
Why ?
Really easy to use
Which MySQL Objects are supported for the migration?
– Schemas
– Tables
– Constraints
– Indexes
– Table Data
What about partitioned tables?
You have to remove the partitions before the migration
mysql> ALTER TABLE Table_name REMOVE PARTITIONING;
My environment:
MySQL: 5.7.14 on Oracle Linux Server 7.1
PostgreSQL: 9.6.1.4 on Oracle Linux Server 7.1
What are the prerequisites?
– download the migration toolkit from EnterpriseDB
Note that it can only be installed by registered users, but the registration is free and can be done directly on the EnterpriseDB website.
– Install it and follow the instructions
./edb-migrationtoolkit-50.0.1-2-linux-x64.run
– download the MySQL JDBC driver: mysql-connector-java-5.1.40-bin.jar
http://www.enterprisedb.com/downloads/third-party-jdbc-drivers
– Install the driver by moving it to the right directory:
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64/jre/lib/ext
– To facilitate the migration, you have to prepare the configuration file toolkit.properties, located in your installation directory.
The most important point is to associate the right JDBC URL with the SRC_DB_URL parameter:
SRC_DB_URL=jdbc:mysql://hostname[:port]/database
Following is the content of the config file
SRC_DB_URL=jdbc:mysql://192.168.56.200:33001/employees
SRC_DB_USER=root
SRC_DB_PASSWORD=manager
TARGET_DB_URL=jdbc:edb://192.168.56.200:5433/employees # the database must be created in Postgres before
TARGET_DB_USER=postgres
TARGET_DB_PASSWORD=manager

In case you get MySQL connection problems (SSL), modify the parameter SRC_DB_URL
SRC_DB_URL=jdbc:mysql://192.168.56.200:33001/employees?autoReconnect=true&useSSL=false
This will disable SSL and also suppress SSL errors.
Before starting the Migration, it is mandatory to create a blank target database in the Postgres instance
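For example (a minimal sketch, connected to the EDB Postgres instance as a superuser and using the “employees” database from this walk-through):

CREATE DATABASE employees;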
What options for the migration ?
-sourcedbtype is mysql
-targetdbtype is enterprisedb
-fetchsize is 1  to avoid  an ‘out of heap space’ error and force the toolkit to load data one row at a time
How to start the migration?
[root@pg_essentials_p1 mtk]# bin/runMTK.sh -sourcedbtype mysql -targetdbtype enterprisedb -fetchSize 1 employees
Running EnterpriseDB Migration Toolkit (Build 50.0.1) ...
Source database connectivity info...
conn =jdbc:mysql://192.168.56.200:33001/employees?autoReconnect=true&useSSL=false
user =root
password=******
Target database connectivity info...
conn =jdbc:edb://192.168.56.200:5433/employees
user =postgres
password=******
Connecting with source MySQL database server...
Connected to MySQL, version '5.7.14-enterprise-commercial-advanced-log'
Connecting with target EDB Postgres database server...
Connected to EnterpriseDB, version '9.6.1.4'
Importing mysql schema employees...
Creating Schema...employees
Creating Tables...
Creating Table: departments
..........................
Created 6 tables.
Loading Table Data in 8 MB batches...
Loading Table: departments ...
[departments] Migrated 9 rows.
..............................
Loading Table: salaries ...
[salaries] Migrated 246480 rows.
................................
[salaries] Migrated 2844047 rows.
[salaries] Table Data Load Summary: Total Time(s): 20.143 Total Rows: 2844047 Total Size(MB): 94.1943359375
Loading Table: titles ...
[titles] Migrated 211577 rows.
[titles] Migrated 419928 rows.
[titles] Migrated 443308 rows.
[titles] Table Data Load Summary: Total Time(s): 3.898 Total Rows: 443308 Total Size(MB): 16.8955078125
Data Load Summary: Total Time (sec): 33.393 Total Rows: 3919015 Total Size(MB): 138.165
Creating Constraint: PRIMARY
Creating Constraint: dept_name
................................
Creating Index: dept_no1
Schema employees imported successfully.
Migration process completed successfully.
Migration logs have been saved to /root/.enterprisedb/migration-toolkit/logs
******************** Migration Summary ********************
Tables: 6 out of 6
Constraints: 11 out of 11
Indexes: 2 out of 2
Total objects: 19
Successful count: 19
Failed count: 0
Invalid count: 0
************************************************************

So as you can see, this migration process is really easy and you can immediately take advantage of all the standard features.

 


UKOUG 2016 – Second day

Mon, 2016-12-05 13:51


This second day at UKOUG was quite good. I slept well at the Jurys Inn hotel and this morning I enjoyed one more time a real English breakfast with beans, bacon, eggs and sausages. I like that to be fit for the whole day ;)

Today, I attended the general Keynote and several sessions around integration, APEX & Database Development and Database. My colleague Franck Pachot also presented today and I attended his session “12c Multitenant: Not a Revolution, Just an Evolution”. His session reminded me of the article I wrote some years ago about the Oracle Multitenant architecture and APEX.

Early in the morning, I followed “Application Container Cloud Service: Backend Integration Using Node.js”. The presenter described what Node.js is and mentioned JavaScript frameworks that can be easily integrated with Node.js, such as Express.js to create an HTTP server and retrieve data. He also presented an architecture where Node.js is hosted in Docker on the cloud.

After that, I attended the session “APEX Version Control & Team Working”. During that session, I learned more about APEX version control best practices and which nice commands can be run through the SQL command line, the APEX Java utility and so on. I was quite happy to learn that for internal development we were not so bad: we already properly version control and back up APEX workspaces, applications and themes. I now have information to improve our internal work around APEX development activities, such as ATAF, the APEX Test Automation Framework.

The next session was “Interactive Grids in Application Express 5.1”. This was a demonstration-oriented session in which the presenter showed us amazing new features that will be incorporated in APEX 5.1. Most of the demonstration was based on the sample package application.

The following session was “Real Time Vehicle Tracking with APEX 5”. For me it was great to see the power of APEX and the Oracle Database to store and display data in real time through the APEX 5 MapViewer. The application uses Oracle Spatial, getting data from each vehicle’s GPS, with PL/SQL converting the data into geospatial information.

During the last session, “A RESTful MicroService for JSON Processing in the Database”, I learned how to execute JavaScript directly from the database. In fact, with Java 8 and the Nashorn project it is now possible to execute JavaScript code from the JVM and so directly in the database, avoiding data shipping.

This is all for today and see you tomorrow, we will now take time with my blog reviewer to drink some pints in an English pub.

 


SQL Server 2016: distributed availability groups and cascaded replicas

Mon, 2016-12-05 12:28

During the last MVP summit, we had some interesting discussions about availability groups with the SQL Server team, and I remember someone asked about managing scenarios like Oracle cascaded destinations. The good news is that SQL Server 2016 already addresses this kind of scenario with distributed availability groups. For example, let’s say you have to manage heavy reporting activity on your application, and a solution would be to offload this activity across several secondary read-only replicas. So, a typical architecture would be as follows:

[Figure: initial availability group architecture]

We basically want to achieve high availability in the primary datacenter (DATACENTER1), use the secondary datacenter as DR, and at the same time offload reporting activity to secondary replicas. But let’s say you have low network bandwidth (a WAN with roughly 150–200 Mbps) between your two datacenters, which are geographically dispersed. Depending on your current workload against the availability group, you may experience high network traffic when the number of secondary replicas increases on the DR site. Indeed, the number of log blocks to replicate is directly proportional to the number of secondary replicas for the same payload.

I decided to simulate this scenario on my lab environment, which reflects the above scenario (2 replicas in the first datacenter and four other replicas in the second datacenter). I used two Lenovo T530 laptops with Hyper-V to simulate the whole environment, with the cross-datacenter network connection handled by two RRAS servers.

In addition, for the sake of precision, let’s describe the test protocol:

  • I used a script which inserts a bunch of data from the primary replica (~ 900MB of data)
  • I ran the same workload test after adding one asynchronous read-only replica at a time for each test, up to 4 replicas.
  • I collected performance data from various perfmon counters focused on the availability group network stack (both primary site and DR site)

Here is the output of the whole test.

[Figure: availability group network usage trend]

The picture above is pretty clear. We notice that network bandwidth usage grows when adding secondary replicas. In the last test, the network bandwidth reached 400 Mbps (received traffic) on the remote datacenter, while it reached 600 Mbps (send traffic) on the primary replica. Why is there a difference in network bandwidth consumption between the primary replica and the remote datacenter? Well, the answer is simple: the network bandwidth consumption on the remote datacenter doesn’t include the network traffic to the secondary located in the first datacenter for high availability.

We may also notice that the third iteration of the test (1 primary + 1 secondary sync + 2 secondaries async) shows a critical point if we have to face a scenario with a WAN connection between the two datacenters limited to 200 Mbps. Indeed, in this case the network bandwidth could quickly be saturated by the replication traffic between all the replicas, and here are probably the first symptoms you may encounter:

[Figure: availability group DMV monitoring]

A continuously high log send queue size for each concerned secondary replica on the remote datacenter (250 MB on average in my case)…

[Figure: availability group wait statistics]

You may minimize the network overhead by isolating the replication traffic to its own network but in some cases if you’re unlucky it will not be enough. This is a situation which may be solved by introducing distributed availability groups and the cascaded destinations principle as shown below:

[Figure: distributed availability group architecture]

The distributed availability group feature allows offloading the replication traffic towards the read-only secondaries to a replica in the second datacenter. Thus, we are able to reduce the cross-datacenter replication traffic drastically, from 4 replicas to only one. In addition, adding one or several other replicas may be considered, because this new architecture is more scalable and will only impact the local network bandwidth in the second datacenter.

Here is my new lab configuration after applying distributed availability groups to the previous architecture.

  • In the first datacenter, one availability group AdvGrp that includes two replicas in synchronous replication and automatic failover for HA purposes
  • In the second datacenter, one availability group AdvGrpDR that includes four replicas enrolled as read-only.
  • One distributed availability group AdvDistGrp which makes the cascade between the two aforementioned availability groups

[Figure: distributed availability group configuration]
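As a hedged sketch (not the exact script used in this lab; the listener URLs are placeholders), creating such a distributed availability group between the two existing groups could look like this on the primary replica of AdvGrp, followed by a matching ALTER AVAILABILITY GROUP … JOIN AVAILABILITY GROUP statement on the primary replica of AdvGrpDR:

CREATE AVAILABILITY GROUP [AdvDistGrp]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AdvGrp' WITH
      (
         LISTENER_URL = 'tcp://advgrp-listener.dbi.lab:5022',    -- placeholder
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      ),
      'AdvGrpDR' WITH
      (
         LISTENER_URL = 'tcp://advgrpdr-listener.dbi.lab:5022',  -- placeholder
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      );
GO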

Let’s run the same workload test on the new architecture and let’s have a look at the new output:

The log send queue size got back to normal at the primary replica level in the first datacenter, since all the replication traffic towards the read-only replicas is now cascaded through the primary replica of the AdvGrpDR availability group in the second datacenter.

[Figure: availability group DMV monitoring with the distributed configuration]

From a wait statistics perspective, we got rid of HADR_DATABASE_FLOW_CONTROL, meaning we did not saturate the network link between the two datacenters.

[Figure: availability group wait statistics with the distributed configuration]

The picture below confirms the replication traffic dropped drastically with this new configuration (150 Mbps vs 400 Mbps from the first architecture).

[Figure: perfmon network counters with the distributed configuration]

Bottom line

In this blog post I tried to demonstrate that using distributed availability groups to cascade the replication traffic to another replica may be a good idea in order to address scenarios that include many secondary replicas in a remote location with low network bandwidth. However, introducing distributed availability groups has a cost in terms of management, because we have to deal with an additional layer of complexity. But if the rewards make the effort worthwhile, we should consider this kind of architecture.

 

 

 

 

 


UKOUG 2016 Day 2

Mon, 2016-12-05 12:07


Today I attended a first session about one of my favorite tools: “Upgrade to EM 13c now”. The session was presented by Phil Gric from Red Stack Tech.

At the beginning he described the most common mistakes made while implementing Enterprise Manager:

- EM 13c is an enterprise application

- It is a critical part of your infrastructure

- it is designed to help you

- EM 13c is not a glorified db console

- IT managers should not see EM as a job for DBAs

He described the main prerequisites before performing an EM 13c upgrade (for example, disabling optimizer_adaptive_features). He also talked about issues such as the fact that the upgrade will create users with the sysman password, so we should ensure that the repository password policy accepts such a password.

There is also an issue when upgrading the agent to version 13.2 on AIX: there is a problem securing the agent due to SHA encryption (Metalink Note 1965676.1).

To complete his presentation, he described the main new features in EM 13c: export and import of incident rules, incident compression, always-on monitoring, more than 300 new verbs in emcli and generally improved functionality, system broadcast, comparison and drift management.

He finally explained why, for him, it is important to regularly upgrade to the latest EM 13c version: it is easy to upgrade, and the longer you wait, the closer you are to the next upgrade :=))

The second presentation was about the 12c upgrade: the good, the bad and the ugly, presented by Niall Litchfield. He talked about his experience of upgrading to 12c a very large infrastructure composed of more than 100 servers, with database versions from 10.1 to 11.2.0.3, with RAC or single instances.

His first advice was to read the Mike Dietrich documentation (Upgrade, Migrate, Consolidate to 12c), and to have a look at the Oracle recommended patch list.

A good reason to upgrade is that support for 11g ends at the end of the year, and extended support is expensive.

The good news after this huge upgrade was that there were no upgrade failures (tens of clusters, hundreds of servers and databases), and a performance benchmark showed a 50% improvement.

The bad and ugly news concerns the number of patches. It also concerns the JSON bundle patches, which require database bundle patches. He also advised us to turn off optimizer_adaptive_features (also recommended to be disabled with EM 13c, PeopleSoft and EBS). Finally, a last ugly point is the documentation: there is no single place to read the documentation, but many. He also recommended allowing significant time for testing the database and the applications after the upgrade to 12c.
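As a hedged reminder of what that advice translates to in 12.1 (check the implications for your own applications before changing it):

SQL> alter system set optimizer_adaptive_features = false scope=both;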

Then I attended a session about Oracle Database 12c on Windows, presented by Christian Shay of Oracle.

He showed us the database certification matrix on 64-bit Windows. In short, Oracle 12.2 is certified on Windows Server 2012, Windows Server 2012 R2, Windows 10 and Windows Server 2016, while Oracle 12.1 is certified on the same versions except Windows Server 2016.

In Windows 8 and Windows Server 2012, Microsoft introduced the Group Managed Service Account (GMSA), i.e. a domain-level account which can be used by multiple servers in that domain to run their services under this account. A GMSA can be the Oracle Home user for Oracle Database Real Application Clusters (Oracle RAC), single instance, and client installations. It has similarities with the ‘oracle’ user on Linux, as you are able to log on to Windows with this user and perform administrative tasks like creating databases, installing Oracle or upgrading databases.

In Windows 7 and Windows Server 2008 R2, Microsoft introduced virtual accounts. A virtual account can be the Oracle home user for Oracle Database single instance and client installations.

The recommendations are the following: for a DB server (single instance), use a virtual account to avoid password management (12.2); for 12.1, specify a Windows user account during installation. For RAC databases and Grid Infrastructure, use a domain user or a group managed service account; with a GMSA you do not need to provide the password for any database operation.

He also talked about large page support for Windows. When large page support is enabled, the CPU is able to access the Oracle database buffers in RAM more quickly, because it addresses the buffers in 2 MB pages instead of 4 KB increments.

Large pages can be used in two modes: regular or mixed. Regular mode means that Oracle attempts to allocate the whole SGA in large pages; if the required amount of large pages is not available, the database will not come up. That is the reason why using mixed mode is perhaps better: if the whole SGA cannot be allocated in large pages, the remaining pages will be allocated as regular pages and the instance will still come up.

I finished my UKOUG day by attending Franck Pachot’s session about 12c Multitenant (not a revolution but an evolution). He clearly explained that we do not have to fear 12c Multitenant: since the beginning of Oracle there have been a lot of new features that many people feared, but now they are implemented and work correctly. By the way, the patching and upgrade optimization is only partially implemented; we will see how 12c Multitenant evolves in the next years.

 

 

 

 

 


UKOUG – Tech16 – Super Sunday

Mon, 2016-12-05 04:38


 

This year, I had the opportunity to attend UKOUG 2016, which took place in Birmingham. This event normally begins on Monday, but each year there is a complimentary afternoon with highly technical presentations for those who are registered for Tech16, called Super Sunday.

For this first Super Sunday afternoon at UKOUG 2016, I followed 2 sessions and participated in a hands-on lab around the cloud.

The 1st session was very interesting, with lots of useful information about APEX and Node.js.

This session was called “NodeJS & Oracle – A Match Made in Heaven” and the presenter, Mr Trond Enstad, focused the session on demonstrating the power of using Node.js.

He installed Node.js and an Oracle Database client, and created a Node.js configuration file extracting sar command information and storing it in a remote Oracle Database. After that, he quickly created D3 charts in APEX showing real-time monitoring of that stored information. I’m really enthusiastic to do some tests.

The 2nd session “Auto REST Enabling Your Tables, Views, & More With SQL Developer & REST Data Services” from Mr Jeff Smith was also interesting providing useful information about the ORDS product from Oracle.

After these 2 interesting sessions, I followed an Oracle Cloud Platform Hands On Lab called “Cloud Native Stack on Oracle Bare Metal Compute”.

In this lab, we created a Virtual Cloud Network (VCN) in which we were able to create a bare metal instance with Oracle Linux 7.2. Once it was launched, we installed MongoDB and set up Node.js and MEAN.js. At the end, we were able to access the MEAN home page.

It was interesting to see how fast we were able to provision a bare metal instance and install application components on it.

See you tomorrow for other interesting sessions.

 


Multitenant internals – Summary

Mon, 2016-12-05 02:02

Today at the UKOUG TECH16 conference I’m presenting the internals of the new multitenant architecture: 12c Multitenant: Not a Revolution, Just an Evolution. My goal is to show how it works, and that metadata links and object links are not blind magic.
Here are the links to the blog posts I’ve published about multitenant internals.

The dictionary separation, METADATA LINK and OBJECT LINK (now called DATA LINK):
http://blog.dbi-services.com/multitenant-dictionary-what-is-stored-only-in-cdbroot/
http://blog.dbi-services.com/oracle-12c-cdb-metadata-a-object-links-internals/
http://blog.dbi-services.com/oracle-multitenant-dictionary-metadata-links/
http://blog.dbi-services.com/oracle-multitenant-dictionary-object-links/
http://blog.dbi-services.com/multitenant-internals-how-object-links-are-parsedexecuted/
http://blog.dbi-services.com/multitenant-internals-object-links-on-fixed-tables/
An exemple with the AWR views:
http://blog.dbi-services.com/12c-multitenant-internals-awr-tables-and-views/
How the upgrades should work:
http://blog.dbi-services.com/oracle-multitenant-dictionary-upgrade/
What about shared pool rowcache and library cache:
http://blog.dbi-services.com/oracle-multitenant-dictionary-rowcache/
http://blog.dbi-services.com/12c-multitenant-cursor-sharing-in-cdb/
And how to see when session switches to CDB$ROOT:
http://blog.dbi-services.com/oracle-12cr2-multitenant-containers-in-sql_trace/

If you are in Birmingham, I’m speaking on Monday and Wednesday.


 


UKOUG Super Sunday

Sun, 2016-12-04 16:30


Today at the UKOUG Super Sunday in Birmingham, I had the opportunity to attend interesting sessions.

The first presentation was about Oracle RAC internals and its new features in version 12.2.0.1 on Oracle Cloud. The main new features concern the cache fusion, the undo header hash table, the leaf nodes, and the hang manager.

In 12c Release 2 in a RAC environment, Cache Fusion automatically chooses an optimal path: it collects and maintains statistics on the private network and uses this information to find the optimal path (network or disk) to serve blocks. We can consider that flash storage will provide better access time to data than the private network in case of high load.

In order to reduce remote lookups, each instance maintains a hash table of recent transactions (active and committed). So the undo header hash table will improve scalability by eliminating remote lookups.

Flex Cluster and leaf nodes were implemented in 12cR1. With 12cR2, it is now possible to run read-only workload on instances running on leaf nodes.

Hang Manager has been introduced with 12cR2. It determines sessions holding resources on which sessions are waiting.  Hang Manager has the possibility to detect hangs across layers.

The second session was about the use of strace, perf and gdb. This was a very fun presentation with no slides, only technical demos, about how to use strace, perf or gdb without being an expert. The speaker showed us the different analyses we can perform with strace, gdb or perf when running a SQL query against a table in a file system tablespace or an ASM tablespace.

Using those tools allowed us to understand the mechanisms of physical reads and asynchronous I/O, and showed us the differences in asynchronous I/O and direct path reads between ASM and file system storage.

It showed us that the use of strace and gdb is very simple but not recommended in a production environment.

The last session was about dba_feature_usage_statistics, and the speaker described the components behind the scenes.

As its name indicates, this view displays information about database feature usage statistics. It gives an overview of each option and pack that has been used in the database and is currently in use. It also provides information about when the feature was first used and when it was used for the last time.
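A simple (hedged) example of how it can be queried:

SQL> select name, version, detected_usages, currently_used,
            first_usage_date, last_usage_date
     from   dba_feature_usage_statistics
     where  detected_usages > 0
     order  by name;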

It is not very easy to find information in the Oracle documentation about how this view is populated. But the speaker gave us important information about wri$_dbu_usage_sample, wri$_dbu_feature_usage and wri$_dbu_feature_metadata, which are the underlying tables of the dba_feature_usage_statistics view.

He also showed us a method to manually refresh the dba_feature_usage_statistics view.

Tomorrow another day of interesting sessions is waiting for us !

 

 


Can I do it with PostgreSQL? – 6 – Server programming

Sat, 2016-12-03 05:43

Today we’ll continue this series with another topic: What does PostgreSQL provide when it comes to server programming, that is: Writing functions and triggers to support your application? In Oracle you can either use PL/SQL or Java, in MariaDB you can use stored procedures written in SQL, MS SQL Server provides Transact SQL and with DB2 you can write stored procedures in a host language or SQL.

We’ll use the same sample data as in the last post:

\c postgres
drop database if exists ddl;
create database ddl;
\c ddl
create table t1 ( a int, b int );
insert into t1 (a,b)
       values (generate_series(1,1000000)
              ,generate_series(1,1000000));
select count(*) from t1;
create index i1 on t1(a);
create unique index i2 on t1(b);
create view v1 as select a from t1;
alter table t1 add constraint con1 check ( a < 2000000 );
\d t1

So, what can you do? To begin with you can create functions containing pure SQL commands. These are called “query language functions”. You can for example do things like this (although this function is not very useful, as you can do the same by just selecting the whole table):

CREATE FUNCTION select_all_from_t1() RETURNS SETOF t1 AS '
  SELECT * 
    FROM t1;
' LANGUAGE SQL;

There are two important points here: The “LANGUAGE” part which means that the function is written in pure SQL. The keyword “SETOF” which means that we want to return a whole set of the rows of t1. Once the function is created you can use it in SQL:

(postgres@[local]:5439) [ddl] > select select_all_from_t1();
 select_all_from_t1 
--------------------
 (1,1)
 (2,2)
 (3,3)
...

When you want to do something where it does not make sense to return anything you can do it by using the “VOID” keyword:

CREATE FUNCTION update_t1() RETURNS VOID AS '
  UPDATE t1
     SET a = 5
   WHERE a < 10
' LANGUAGE SQL;

When you execute this you do not get a result:

(postgres@[local]:5439) [ddl] > select update_t1();
 update_t1 
-----------
 NULL
(1 row)
(postgres@[local]:5439) [ddl] > select count(*) from t1 where a = 5;
 count 
-------
     9
(1 row)

What about parameters? You can do this as well:

CREATE FUNCTION do_the_math(anumber1 numeric, anumber2 numeric ) RETURNS numeric AS '
  SELECT do_the_math.anumber1 * do_the_math.anumber2;
' LANGUAGE SQL;

Execute it:

(postgres@[local]:5439) [ddl] > select do_the_math(1.1,1.2);
 do_the_math 
-------------
        1.32

Another great feature is that you can have a variable/dynamic amount of input parameters when you specify the input parameter as an array:

CREATE FUNCTION dynamic_input(VARIADIC arr numeric[]) RETURNS int AS $$
    SELECT array_length($1,1);
$$ LANGUAGE SQL;

(postgres@[local]:5439) [ddl] > select dynamic_input( 1,2,3,4 );
 dynamic_input 
---------------
             4

So far for the SQL functions. What can you do when you need more than SQL? Then you can use the so called “procedural language functions”. One of these which is available by default is PL/pgSQL:

(postgres@[local]:5439) [ddl] > \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(1 row)

By using PL/pgSQL you can add control structures around your SQL very much as you can do it in PL/SQL (except that you cannot create packages).

CREATE FUNCTION f1(int,int) RETURNS text AS $$
DECLARE
    t_row t1%ROWTYPE;
    result text;
BEGIN
    SELECT * 
      INTO t_row
      FROM t1
     WHERE a = 99;
    IF t_row.b > 0
    THEN
        result := 'aaaaaa';
    ELSE
        result := 'bbbbbb';
    END IF;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
(postgres@[local]:5439) [ddl] > select f1(1,1);
   f1   
--------
 aaaaaa

You can also use anonymous blocks:

(postgres@[local]:5439) [ddl] > DO $$
BEGIN
  FOR i IN 1..10
  LOOP
    raise notice 'blubb';
  END LOOP;
END$$ LANGUAGE plpgsql;
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
DO

Of course there is more than IF-THEN-ELSE which is documented here.
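Since the introduction mentioned triggers as well, here is a minimal sketch (not part of the original series of examples) of a PL/pgSQL trigger function on the t1 table from the sample data:

CREATE FUNCTION t1_audit() RETURNS trigger AS $$
BEGIN
  -- just report what is being inserted
  RAISE NOTICE 'row inserted with a=%', NEW.a;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_t1_audit
  BEFORE INSERT ON t1
  FOR EACH ROW EXECUTE PROCEDURE t1_audit();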

So by now we know two options to write functions in PostgreSQL. Is there more we can do? Of course: You prefer to write your functions in Perl?

(postgres@[local]:5439) [ddl] > create extension plperl;
CREATE EXTENSION
(postgres@[local]:5439) [ddl] > \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plperl  | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language


CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS $$
    my ($x, $y) = @_;
    if (not defined $x) {
        return undef if not defined $y;
        return $y;
    }
    return $x if not defined $y;
    return $x if $x > $y;
    return $y;
$$ LANGUAGE plperl;

(postgres@[local]:5439) [ddl] > select perl_max(1,2);
 perl_max 
----------
        2

You prefer python?

(postgres@[local]:5439) [ddl] > create extension plpythonu;
CREATE EXTENSION
Time: 327.434 ms
(postgres@[local]:5439) [ddl] > \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description                
-----------+---------+------------+------------------------------------------
 plperl    | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpythonu | 1.0     | pg_catalog | PL/PythonU untrusted procedural language

CREATE FUNCTION pymax (a integer, b integer)
  RETURNS integer
AS $$
  if a > b:
    return a
  return b
$$ LANGUAGE plpythonu;

(postgres@[local]:5439) [ddl] > select pymax(1,1);
 pymax 
-------
     1

… or better TcL?

(postgres@[local]:5439) [ddl] > create extension pltclu;
CREATE EXTENSION
Time: 382.982 ms
(postgres@[local]:5439) [ddl] > \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description                
-----------+---------+------------+------------------------------------------
 plperl    | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpythonu | 1.0     | pg_catalog | PL/PythonU untrusted procedural language
 pltclu    | 1.0     | pg_catalog | PL/TclU untrusted procedural language

And these are only the default extensions. There is much more you can do:

  • Java
  • PHP
  • R
  • Ruby
  • Scheme
  • Unix shell

You see: PostgreSQL gives you the maximum flexibility :)

 


Histograms on character strings between 11.2.0.3 and 11.2.0.4

Wed, 2016-11-30 15:33

In short, when you have statistics gathered in 11.2.0.3 but the query is now running with the 11.2.0.4 optimizer, you may get wrong cardinality estimates from histograms, leading to sub-optimal plans.

I had a table with a flag that has two values, ‘Y’ and ‘N’, with an even distribution between them. It’s a good case for frequency histograms. I had frequency histograms and expected exact cardinality estimation for a WHERE FLAG=’Y’ predicate. But that was not the case: a very low estimation leading to a very bad execution plan. Because the cardinality estimation was far from what we have in the histograms and far from what we would have without histograms, I checked a 10053 trace and this is what I found:
Using prorated density: 0.004557 of col #97 as selectivity of out-of-range/non-existent value pred

This is linear decay because of out-of-range predicate (I’ll show that next Wednesday at UKOUG TECH16) but I don’t expect an out-of-range condition when I provide one of the two values that are in the frequency histogram.

Here is my testcase


SQL> create table DEMO ( flag char);
Table created.
 
SQL> select count(*) from DEMO where flag='Y';
 
COUNT(*)
----------
0
 
SQL> insert into DEMO select 'Y' from xmltable('1 to 100000');
100000 rows created.
 
SQL> insert into DEMO select 'N' from xmltable('1 to 1000');
1000 rows created.
 
SQL> select flag,to_char(ascii(flag),'XX'),count(*) from DEMO group by flag;
 
FLAG TO_CHAR(ASCII(FLAG),'XX') COUNT(*)
---- ------------------------- --------
Y 59 100000
N 4E 1000

100000 rows with ‘Y’ and 1000 rows with ‘N’.

11.2.0.3

I gather statistics in 11.2.0.3


SQL> alter session set optimizer_features_enable='11.2.0.3';
Session altered.
 
SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

And run a query looking for rows where flag is ‘Y’


SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 100K|
-------------------------------------------

The estimate is perfect thanks to the histograms.

11.2.0.4

Now the CBO will running in 11.2.0.4 but statistics have not been gathered since 11.2.0.3


SQL> alter session set optimizer_features_enable='11.2.0.4';
Session altered.
 
SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 500 |
-------------------------------------------

Bad estimation here. Where do those 500 come from?
It’s not from the histogram, which knows that 100K rows have the value ‘Y’.
Without histograms, the estimation would be based on 2 distinct values among 101000 rows, so that would be 50500.
CBO trace says:

Access path analysis for DEMO
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for DEMO[DEMO] SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
 
kkecdn: Single Table Predicate:"DEMO"."FLAG"='Y'
Column (#1):
NewDensity:0.004950, OldDensity:0.000005 BktCnt:101000.000000, PopBktCnt:101000.000000, PopValCnt:2, NDV:2
Column (#1): FLAG(CHARACTER)
AvgLen: 2 NDV: 2 Nulls: 0 Density: 0.004950
Histogram: Freq #Bkts: 2 UncompBkts: 101000 EndPtVals: 2 ActualVal: no
Using density: 0.004950 of col #1 as selectivity of pred having unreasonably low value
Table: DEMO Alias: DEMO
Card: Original: 101000.000000 Rounded: 500 Computed: 500.000000 Non Adjusted: 500.000000

Predicate having unreasonably low value…

Time to look at the histograms.

ENDPOINT_VALUE

You can get the histogram entries from USER_HISTOGRAMS. For the CHAR datatype (my case here) the endpoint value is built from the ASCII representation of the value, converted to decimal – or at least from its first characters.

SQL> select endpoint_number,endpoint_value
2 ,to_char(endpoint_value,rpad('FM',65,'X'))
3 ,utl_raw.cast_to_varchar2(hextoraw(to_char(endpoint_value,rpad('FM',65,'X'))))
4 from user_histograms h where table_name='DEMO';
ENDPOINT_NUMBER ENDPOINT_VALUE TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X')) UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW(TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X'))))
--------------- ------------------------------------ ----------------------------------------- ------------------------------------------------------------------------------
1000 405650737316592000000000000000000000 4E20202020203A7BB119D5F6000000 N :{��
101000 462766002760475000000000000000000000 59202020202034D998FF0B5AE00000 Y 4٘�
Z�

Looks good. I can see the ‘N’ and ‘Y’ values here. But obviously the CBO sees the predicate value as something different from ‘Y’.

Let’s gather statistics again (I’m now with 11.2.0.4 CBO):

SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

And look at the difference:

SQL> select endpoint_number,endpoint_value
2 ,to_char(endpoint_value,rpad('FM',65,'X'))
3 ,utl_raw.cast_to_varchar2(hextoraw(to_char(endpoint_value,rpad('FM',65,'X'))))
4 from user_histograms h where table_name='DEMO';
ENDPOINT_NUMBER ENDPOINT_VALUE TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X')) UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW(TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X'))))
--------------- ------------------------------------ ----------------------------------------- ------------------------------------------------------------------------------
1000 404999154965717000000000000000000000 4E0000000000181F436C7BBB200000 NCl{�
101000 462114420409600000000000000000000000 590000000000127D2B51B120000000 Y}+Q�

In 11.2.0.3 the characters (‘Y’ is ASCII 0x59) were padded with spaces (ASCII 0x20). But in 11.2.0.4 they are padded with nulls (ASCII 0x00).
This is the reason why it was considered different: the value computed for ‘Y’ at parse time by the 11.2.0.4 CBO is different from the ENDPOINT_VALUE stored by the 11.2.0.3 dbms_stats.
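A rough way to visualize just the padding difference (this only shows the raw bytes; the real endpoint calculation also converts to a number and rounds it):

SQL> select utl_raw.cast_to_raw(rpad('Y',15,' '))    as padded_with_spaces,
  2         utl_raw.cast_to_raw(rpad('Y',15,chr(0))) as padded_with_nulls
  3  from dual;

The first value starts with 59 followed by 0x20 bytes, the second with 59 followed by 0x00 bytes, which matches the two sets of histogram endpoints above.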

Now, the estimation is good again:


SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 100K|
-------------------------------------------

char_value

The ENDPOINT_VALUE calculation has been described by Jonathan Lewis in Cost-Based Oracle Fundamentals and the script to encode a character string is downloadable from http://www.jlcomp.demon.co.uk/cbo_book/book_cbo_scripts.zip. I use it (as an inline function because I’m running that in 12c) to show the values calculated from ‘Y’ and ‘N’ as well as the values calculated from the same strings right-padded with spaces:


SQL> with function char_value(i_vc varchar2) return number
-- function coming from Jonathan Lewis char_value.sql http://www.jlcomp.demon.co.uk/cbo_book/book_cbo_scripts.zip
is
m_vc varchar2(15) := substr(rpad(i_vc,15,chr(0)),1,15);
m_n number := 0;
begin
for i in 1..15 loop
m_n := m_n + power(256,15-i) * ascii(substr(m_vc,i,1));
end loop;
m_n := round(m_n, -21);
return m_n;
end;
select flag,char_value(flag),cast(flag as char(32)),char_value(cast(flag as char(32))),count(*) from DEMO.DEMO group by flag
/
 
FLAG CHAR_VALUE(FLAG) CAST(FLAGASCHAR(32)) CHAR_VALUE(CAST(FLAGASCHAR(32))) COUNT(*)
---- ------------------------------------ -------------------------------- ------------------------------------ --------
Y 462114420409600000000000000000000000 Y 462766002760475000000000000000000000 100000
N 404999154965717000000000000000000000 N 405650737316592000000000000000000000 1000

Comparing the calculated values with the ENDPOINT_VALUE I had above, we can see that the function has not changed, but the input string was padded with spaces before, which is not the case anymore.

So what?

Be careful when upgrading from <= 11.2.0.3 to a higher version. You should re-gather the statistics (but then read http://blog.dbi-services.com/a-migration-pitfall-with-all-column-size-auto/ as there’s another pitfall). Don’t set optimizer_features_enable to a lower version as the scope is very wide and many things may be different. It is always better to take the time to focus on the features you want to disable. And never give up: the weirdest situations can be troubleshot. Oracle Database has lots of ways to understand what happens, from the debugging tools provided with the software to the excellent literature about it.
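If you want a starting point, re-gathering a whole schema after the upgrade could be as simple as this (a minimal sketch; adapt the preferences, method_opt and scheduling to your own statistics strategy):

SQL> exec dbms_stats.gather_schema_stats(ownname=>user, no_invalidate=>false);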

 

Cet article Histograms on character strings between 11.2.0.3 and 11.2.0.4 est apparu en premier sur Blog dbi services.

Switchover and Failover with Dbvisit 8

Wed, 2016-11-30 12:19

In this blog we will talk about how to do a switchover and how to do a failover. We suppose that dbvisit is already installed and that a standby database is already configured. Our instance is named DBPRIMA.

Switchover
A SWITCHOVER changes the database roles: the primary becomes the standby and the standby becomes the primary. This can be useful for many reasons.
Before performing a switchover, we first have to send any archived logs that have not yet been shipped from the primary server. For this we use the magic command dbvctl

[oracle@dbvisit2 DBPRIMA]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 9201)
dbvctl started on dbvisit2: Tue Nov 29 14:46:15 2016
=============================================================
>>> Applying Log file(s) from dbvisit1 to DBPRIMA on dbvisit2:
thread 1 sequence 30 (1_30_926978008.arc)
=============================================================
dbvctl ended on dbvisit2: Tue Nov 29 14:46:17 2016
=============================================================
[oracle@dbvisit2 DBPRIMA]$

Then let’s apply all archived logs on the standby

[oracle@dbvisit2 DBPRIMA]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 9201)
dbvctl started on dbvisit2: Tue Nov 29 14:46:15 2016
=============================================================
>>> Applying Log file(s) from dbvisit1 to DBPRIMA on dbvisit2:
thread 1 sequence 30 (1_30_926978008.arc)
=============================================================
dbvctl ended on dbvisit2: Tue Nov 29 14:46:17 2016
=============================================================
[oracle@dbvisit2 DBPRIMA]$

Note that the commands above can be scheduled on both servers, using crontab for example on a Linux system
On the Primary Server:

00,10,20,30,40,50 * * * * cd /u01/app/dbvisit/standby; ./dbvctl -d DBPRIMA >>/dev/null 2>&1

On the Standby Server:

00,10,20,30,40,50 * * * * cd /u01/app/dbvisit/standby; ./dbvctl -d DBPRIMA >>/dev/null 2>&1

After sending and applying archived logs, we can check the status of the synchronization

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -i
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4420)
dbvctl started on dbvisit1: Tue Nov 29 14:26:11 2016
=============================================================
Dbvisit Standby log gap report for DBPRIMA thread 1 at 201611291426:
-------------------------------------------------------------
Destination database on dbvisit2 is at sequence: 29.
Source database on dbvisit1 is at log sequence: 30.
Source database on dbvisit1 is at archived log sequence: 29.
Dbvisit Standby last transfer log sequence: 29.
Dbvisit Standby last transfer at: 2016-11-29 14:24:16.
Archive log gap for DBPRIMA: 0.
Transfer log gap for DBPRIMA: 0.
Standby database time lag (DAYS-HH:MI:SS): +0:01:37.
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:26:12 2016
=============================================================
[oracle@dbvisit1 ~]$

If all is ok, we can now go for the SWITCHOVER. We can do it either via the graphical console or by command line. We are using the command line method. The command should be run only on the primary server.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 5081)
dbvctl started on dbvisit1: Tue Nov 29 14:47:32 2016
=============================================================
>>> Starting Switchover between dbvisit1 and dbvisit2
Running pre-checks ... done
Pre processing ... done
Processing primary ... done
Processing standby ... done
Converting standby ... done
Converting primary ... done
Completing ... done
Synchronizing ... done
Post processing ... done
>>> Graceful switchover completed.
Primary Database Server: dbvisit2
Standby Database Server: dbvisit1
>>> Dbvisit Standby can be run as per normal:
dbvctl -d DBPRIMA
PID:5081
TRACE:5081_dbvctl_switchover_DBPRIMA_201611291447.trc
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:50:23 2016
=============================================================
[oracle@dbvisit1 ~]$

Failover
The FAILOVER process happens when the primary database is no longer working. In this case the standby should be activated and will become the primary one. This FAILOVER process is NOT reversible, unlike the SWITCHOVER process.
A good practice before activating the standby database is to run a quick test to ensure that the standby database is in a consistent state (datafile headers and controlfile are in sync) and ready to be activated. This can be done by opening the standby database read-only.

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o read
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2542)
dbvctl started on dbvisit2: Wed Nov 30 09:40:50 2016
=============================================================
Open Standby Database DBPRIMA in READ ONLY mode...
Standby Database DBPRIMA opened in READ ONLY mode.
Log files cannot be applied to Database while in READ ONLY mode.
Database tempfile(s) may need to be added to this database.
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:40:55 2016
=============================================================
[oracle@dbvisit2 ~]$

As we don’t get any error and that we know we can open the standby read-only, let’s start it back into recovery mode (mount state).

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o restart
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2667)
dbvctl started on dbvisit2: Wed Nov 30 09:45:45 2016
=============================================================
Stopping instance DBPRIMA...
Standby Instance DBPRIMA shutdown successfully on dbvisit2.
Starting instance DBPRIMA...
Standby Instance DBPRIMA started
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:45:57 2016
=============================================================
[oracle@dbvisit2 ~]$

And proceed with the FAILOVER. Dbvisit will ask for confirmation. We can use the --force option to avoid this.

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o activate
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2796)
dbvctl started on dbvisit2: Wed Nov 30 09:47:12 2016
=============================================================
Activating means this database will become a Primary Database.
It will no longer be a Standby Database for DBPRIMA on dbvisit1.
Activation cannot be reversed.
=>Activate Standby Database on dbvisit2? [No]: yes
Are you sure? [No]: yes
>>> Activating now...
File /u01/app/dbvisit/standby/conf/dbv_DBPRIMA.env copied to
/u01/app/dbvisit/standby/conf/dbv_DBPRIMA.env.201611300947.
>>> Activation complete. Please ensure a backup of this Database is made
>>> Old archives from before the activation should be removed to avoid mix-up between new
and old archive logs
>>> If the Dbvisit Standby process is to be reversed, then database on dbvisit1 will need
to be rebuilt as a Standby Database
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:48:07 2016
=============================================================
[oracle@dbvisit2 ~]$

In this article we have seen how easy it is to do a switchover or a failover with Dbvisit 8.

 

Cet article Switchover and Failover with Dbvisit 8 est apparu en premier sur Blog dbi services.

Creating standby database with Dbvisit 8

Wed, 2016-11-30 12:13

The new version of Dbvisit standby is now released. In this blog we are going to see how to install dbvisit 8 and how to create a standby database.
For the installation, after downloading the software, just uncompress it and then run the install-dbvisit command under the installer directory. The installation process is divided into 2 parts.
1- Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent) –On both servers
2- Dbvserver console installation. Just note that the console now needs to be installed only on one server.

We will not show all the outputs; more info about the installation can be found in my previous blog about upgrading dbvisit 7 to dbvisit 8

[oracle@dbvisit1 installer]$ pwd
/home/oracle/dbvisit/dbvisit/dbvisit/installer
[oracle@dbvisit1 installer]$ ls
install-dbvisit
[oracle@dbvisit1 installer]$
[oracle@dbvisit1 installer]$ ./install-dbvisit
-----------------------------------------------------------
Welcome to the Dbvisit software installer.
-----------------------------------------------------------
It is recommended to make a backup of our current Dbvisit software
location (Dbvisit Base location) for rollback purposes.
Installer Directory /home/oracle/dbvisit/dbvisit/dbvisit
>>> Please specify the Dbvisit installation directory (Dbvisit Base).
The various Dbvisit products and components - such as Dbvisit Standby,
Dbvisit Dbvnet will be installed in the appropriate subdirectories of
this path.
Enter a custom value or press ENTER to accept default [/usr/dbvisit]:
> /u01/app/dbvisit
DBVISIT_BASE = /u01/app/dbvisit
-----------------------------------------------------------
Component Installer Version Installed Version
-----------------------------------------------------------
standby 8.0.04.18184 not installed
dbvnet 2.0.04.18184 not installed
dbvagent 2.0.04.18184 not installed
dbvserver 2.0.04.18184 not installed
-----------------------------------------------------------
What action would you like to perform?
1 - Install component(s)
2 - Uninstall component(s)
3 - Terminate
Your choice: 1
Which component do you want to install?
1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
2 - Dbvisit Standby Core (Command Line Interface)
3 - Dbvnet (Dbvisit Network Communication)
4 - Dbvagent (Dbvisit Agent)
5 - Dbvserver (Dbvisit Central Console)
6 - Exit Installer
Your choice: 1

And then follow instructions.
At the end of the process we can start the different components: dbvagent and dbvnet


[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvagent/dbvagent -d start


[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvnet/dbvnet -d start


[oracle@dbvisit1 installer]$ ps -ef|egrep 'dbvagent|dbvnet' | grep -v grep
oracle 4064 1 0 10:46 ? 00:00:00 /u01/app/dbvisit/dbvagent/dbvagent -d start
oracle 4140 1 0 10:47 ? 00:00:00 /u01/app/dbvisit/dbvnet/dbvnet -d start
[oracle@dbvisit1 installer]$

After the core components installation, we can proceed with the Dbvisit Standby Central console installation (dbvserver). Just launch the install-dbvisit command again and follow the instructions.

[oracle@dbvisit1 installer]$ ./install-dbvisit
-----------------------------------------------------------
Welcome to the Dbvisit software installer.
-----------------------------------------------------------
It is recommended to make a backup of our current Dbvisit software
location (Dbvisit Base location) for rollback purposes.
Installer Directory /home/oracle/dbvisit/dbvisit/dbvisit
.....
.....
What action would you like to perform?
1 - Install component(s)
2 - Uninstall component(s)
3 - Terminate
Your choice: 1
Which component do you want to install?
1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
2 - Dbvisit Standby Core (Command Line Interface)
3 - Dbvnet (Dbvisit Network Communication)
4 - Dbvagent (Dbvisit Agent)
5 - Dbvserver (Dbvisit Central Console)
6 - Exit Installer
Your choice: 5

At the end of the installation, we can now start dbvserver

[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvserver/dbvserver -d start


[root@dbvisit1 ~]# netstat -taupen | grep dbvser
tcp 0 0 0.0.0.0:4433 0.0.0.0:* LISTEN 1000 37848 5348/dbvserver
[root@dbvisit1 ~]#

We should be able to connect to the dbvserver console on port 4433 with the admin user. The default password is admin
dbvserverconsole1
dbvserverconsole2
For example, using MANAGE USERS, we can change the default admin password.
Now that dbvisit standby is installed on both servers, the first thing is to test connectivity. For this we will use the dbvctl command, which is the main command of dbvisit 8. In the dbvisit documentation you can read: “The first command you need to be familiar with is the dbvctl -h command which will display the syntax and usage options for the Dbvisit Standby command line interface”, and it’s really true. Here the command dbvctl -f system_readiness will be used.

[oracle@dbvisit1 installer]$ /u01/app/dbvisit/standby/dbvctl -f system_readiness
Please supply the following information to complete the test.
Default values are in [].
Enter Dbvisit Standby location on local server: [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Enter the name of the remote server: []: dbvisit2
Your input: dbvisit2
Is this correct? <Yes/No> [Yes]:
Enter Dbvisit Standby location on remote server: [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Enter the name of a file to transfer relative to local install directory
/u01/app/dbvisit: [standby/doc/README.txt]:
Your input: standby/doc/README.txt
Is this correct? <Yes/No> [Yes]:
Choose copy method:
1) /u01/app/dbvisit/dbvnet/dbvnet
2) /usr/bin/scp
Please enter choice [1] : 1
Is this correct? <Yes/No> [Yes]:
Enter port for method /u01/app/dbvisit/dbvnet/dbvnet: [7890]:
Your input: 7890
Is this correct? <Yes/No> [Yes]:
-------------------------------------------------------------
Testing the network connection between local server and remote server dbvisit2.
-------------------------------------------------------------
Settings
========
Remote server =dbvisit2
Dbvisit Standby location on local server =/u01/app/dbvisit
Dbvisit Standby location on remote server =/u01/app/dbvisit
Test file to copy =/u01/app/dbvisit/standby/doc/README.txt
Transfer method =/u01/app/dbvisit/dbvnet/dbvnet
port =7890
-------------------------------------------------------------
Checking network connection by copying file to remote server dbvisit2...
-------------------------------------------------------------
Trace file /u01/app/dbvisit/standby/trace/5589_dbvctl_system_readiness_201611291139.trc
File copied successfully. Network connection between local and dbvisit2
correctly configured.
[oracle@dbvisit1 installer]$

If everything is fine with the installation, now we can create a standby database using dbvctl. The configuration we use is the following:
Primary Server : dbvisit1
Standby Server: dbvisit2
Database Instance : DBPRIMA (we suppose that the database is in archivelog mode and force logging mode; a quick check is shown below)
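A quick check of those prerequisites (a minimal sketch; the remediation commands are only needed if the database is not yet in archivelog or force logging mode):

SQL> select log_mode, force_logging from v$database;
SQL> -- if needed:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database force logging;
SQL> alter database open;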
First let’s create the Dbvisit Standby Database Configuration (DDC) file. The name of the DDC is generally the name of the database instance. The command dbvctl -o setup is used.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -o setup
========================================================
Dbvisit Standby Database Technology (8.0.04.18184)
=========================================================
=>dbvctl only needs to be run on the primary server.
Is this the primary server? <Yes/No> [Yes]:
The following Dbvisit Database configuration (DDC) file(s) found on this
server:
DDC
===
1) Create New DDC
2) Cancel
Please enter choice [] : 1
Is this correct? <Yes/No> [Yes]:
END USER LICENSE AGREEMENT
PLEASE READ THIS END USER LICENSE AGREEMENT (AGREEMENT) CAREFULLY BEFORE
DOWNLOADING THE LICENSED PRODUCT. BY CLICKING I AGREE BELOW, YOU ARE
REPRESENTING THAT YOU HAVE THE RIGHT AND AUTHORITY TO LEGALLY BIND YOURSELF
…….
…….
of it on future occasions. Headings are for reference purposes only and
have no substantive effect.
Continue ? <Yes/No> [No]: yes
=========================================================
Dbvisit Standby setup begins.
=========================================================
The following Oracle instance(s) have been found on this server:
SID ORACLE_HOME
=== ===========
1) DBPRIMA /u01/app/oracle/product/12.1.0.2/dbhome_1
2) Enter own ORACLE_SID and ORACLE_HOME
Please enter choice [] : 1
Is this correct? <Yes/No> [Yes]:
=>ORACLE_SID will be: DBPRIMA
=>ORACLE_HOME will be: /u01/app/oracle/product/12.1.0.2/dbhome_1
------------------------------------------------------------------------------
Enter the primary server name.
Note: The hostname is case sensitive on UNIX based systems.
=>SOURCE is [dbvisit1]:
Your input: dbvisit1
Is this correct? <Yes/No> [Yes]:
Choice is dbvisit1
------------------------------------------------------------------------------
Please enter a filesystem directory that Dbvisit Standby use to store (archive) log files. This directory
is not the same as the database recovery area or archive destinations and should not be located in these
areas.
....
Please ensure that this directory exists on
=>ARCHSOURCE is [/u01/app/oracle/dbvisit_arch/DBPRIMA]: /u01/app/archivedbvisit/DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/archivedbvisit/DBPRIMA
------------------------------------------------------------------------------
Do you want to use SSH to connect to the standby server? Note that if you are using SSH, passwordless SSH
authentication between the hosts must already be configured. By default Dbvnet will be used.
=>USE_SSH is [N]:
Your input: N
Is this correct? <Yes/No> [Yes]:
Choice is N
------------------------------------------------------------------------------
Enter the standby database hostname.
If the standby database will be Oracle RAC enabled:
Enter the the Virtual Hostname (linked to a Virtual IP) for standby database. This virtual hostname is
....
For non-RAC configurations specify the standby database server name here.
=>DESTINATION is []: dbvisit2
Your input: dbvisit2
Is this correct? <Yes/No> [Yes]:
Choice is dbvisit2
------------------------------------------------------------------------------
Specify the DBVNET or SSH port number on the standby server. The default value supplied is the dbvnet port
7890. If you specified the use of SSH, please specify the SSH port here.
=>NETPORT is [7890]:
Your input: 7890
Is this correct? <Yes/No> [Yes]:
Choice is 7890
------------------------------------------------------------------------------
Enter Dbvisit Standby installation directory on the standby server
=>DBVISIT_BASE_DR is [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/dbvisit
------------------------------------------------------------------------------
Enter ORACLE_HOME directory on the standby server
=>ORACLE_HOME_DR is [/u01/app/oracle/product/12.1.0.2/dbhome_1]:
Your input: /u01/app/oracle/product/12.1.0.2/dbhome_1
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/oracle/product/12.1.0.2/dbhome_1
------------------------------------------------------------------------------
Enter DB_UNIQUE_NAME on the standby server
=>DB_UNIQUE_NAME_DR is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Please enter the directory where Dbvisit Standby will transfer the (archive) log files to on standby
server.
...
Please ensure that this directory exists on the standby server
=>ARCHDEST is [/u01/app/archivedbvisit/DBPRIMA]:
Your input: /u01/app/archivedbvisit/DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/archivedbvisit/DBPRIMA
------------------------------------------------------------------------------
Enter ORACLE_SID on the standby server
=>ORACLE_SID_DR is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Enter ASM instance name on the standby server, if your standby is using ASM. If you are not using ASM on
the standby leave the value blank.
=>ORACLE_SID_ASM_DR is []:
Is this correct? <Yes/No> [Yes]:
Choice is null
------------------------------------------------------------------------------
Please specify the name of the Dbvisit Database configuration (DDC) file.
The DDC file is a plain text file that contains all the Dbvisit Standby settings.
=>ENV_FILE is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Below are the list of configuration variables provided during the setup process:
Configuration Variable Value Provided
====================== ==============
ORACLE_SID DBPRIMA
ORACLE_HOME /u01/app/oracle/product/12.1.0.2/dbhome_1
SOURCE dbvisit1
ARCHSOURCE /u01/app/archivedbvisit/DBPRIMA
RAC_DR N
USE_SSH N
DESTINATION dbvisit2
NETPORT 7890
DBVISIT_BASE_DR /u01/app/dbvisit
ORACLE_HOME_DR /u01/app/oracle/product/12.1.0.2/dbhome_1
DB_UNIQUE_NAME_DR DBPRIMA
ARCHDEST /u01/app/archivedbvisit/DBPRIMA
ORACLE_SID_DR DBPRIMA
ENV_FILE DBPRIMA
Are these variables correct? <Yes/No> [Yes]:
>>> Dbvisit Database configuration (DDC) file DBPRIMA created.
>>> Dbvisit Database repository (DDR) DBPRIMA created.
Repository Version 8.0
Software Version 8.0
Repository Status VALID
PID:2330
TRACE:dbvisit_install.log
[oracle@dbvisit1 ~]$

The DDC file we created will be used to create the standby database. But before that we must enter the license key, otherwise we will get an error. The following command is used: dbvctl -d DDC -l License_Key

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -l 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 3835)
dbvctl started on dbvisit1: Tue Nov 29 14:12:08 2016
=============================================================
=>Update with license key: 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq? <Yes/No> [Yes]:
>>> Dbvisit Standby License
License Key : 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq
customer_number : 1
dbname :
expiry_date : 2016-12-29
product_id : 8
sequence : 1
status : VALID
updated : YES
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:12:17 2016
=============================================================
[oracle@dbvisit1 ~]$

And then we can proceed with the standby database creation using the command dbvctl -d DDC --csd

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA --csd
>>> Running pre-checks please wait... done
What would you like to do:
1 - Create standby database (and optionally save settings in template)
2 - Help
3 - Terminate processing
Please enter your choice [1]: 1
-------------------------------------------------------------------------------
Answer Yes to transfer the datafiles directly to the standby server (not
recommended for large database or slow networks)
Note tablespaces will be placed in backup mode for direct copy.
Answer No (recommended) to create an RMAN backup of primary datafiles in a
temporary location and transfer the backup to the standby server
=>Do you want to copy database files directly to the standby server? [N]: y
Your input: Y
Is this correct? <Yes/No> [Yes]: yes
The following oracle database parameters will be set in the standby database pfile or spfile:
-------------------------------------------------------------------------------
SID NAME VALUE
* audit_file_dest /u01/app/oracle/admin/DBPRIMA/adump
* compatible 12.1.0.2.0
* control_files /u01/app/oracle/oradata/DBPRIMA/control01.ctl,/u01/app/oracle/fast_recovery_area/DBPRIMA/control02.ctl
* db_block_size 8192
* db_domain
* db_name DBPRIMA
* db_recovery_file_dest /u01/app/oracle/fast_recovery_area
* db_recovery_file_dest_size 10485760000
* db_unique_name DBPRIMA
* diagnostic_dest /u01/app/oracle
* dispatchers (PROTOCOL=TCP) (SERVICE=DBPRIMAXDB)
* log_archive_format %t_%s_%r.dbf
* open_cursors 300
* pga_aggregate_target 304087040
* processes 300
* remote_login_passwordfile EXCLUSIVE
* sga_target 912261120
* spfile OS default
* undo_tablespace UNDOTBS1
-------------------------------------------------------------------------------
What would you like to do:
1 - Proceed with creating the standby database
2 - Edit oracle database parameters for the standby database pfile/spfile
3 - Terminate processing
Please enter your choice [1]: 1
......
=>Create standby database template for DBPRIMA using provided answers? [Y]:
Your input: 1
Is this correct? <Yes/No> [Yes]:
-------------------------------------------------------------------------------
=>Continue with creating a standby database? (If No processing will terminate,
the saved template will be available for future use) [Y]:
Is this correct? <Yes/No> [Yes]:
>>> dbvctl will now run a pre-flight check for standby database creation. An attempt will
be made to create a standby (s)pfile using oracle standby database parameters, followed
by trying to start the standby instance. If this step fails, then please double-check
the following items before re-running dbvctl again:
1) Review the standby database parameters you have supplied and provide valid values
unless a template is used.
2) Recreate the template to provide valid values for standby database parameters if a
template is used.
>>> Running pre-flight check for standby creation, please wait... done
>>> Total database size for DBPRIMA is 1.57GB
>>> Creating standby control file... done
>>> Transferring datafiles from dbvisit1 to dbvisit2...
Transferring /u01/app/oracle/oradata/DBPRIMA/system01.dbf...
Transferring /u01/app/oracle/oradata/DBPRIMA/sysaux01.dbf... done
Transferring /u01/app/oracle/oradata/DBPRIMA/undotbs01.dbf... done
Transferring /u01/app/oracle/oradata/DBPRIMA/users01.dbf... done
>>> Restoring standby control files... done
>>> Starting standby database DBPRIMA on dbvisit2 mount... done
>>> Restoring datafiles on dbvisit2...
>>> Renaming standby redo logs and tempfiles on dbvisit2... done
>>> Performing checkpoint and archiving logs... done
>>> Finishing standby database creation... done
>>> Standby database created.
To complete creating standby database please run dbvctl on the primary server first,
then on the standby server, to ensure the standby database is in sync with the primary
database.
PID:3915
TRACE:3915_dbvctl_csd_DBPRIMA_201611291413.trc

Let’s now run dbvctl -d DDC on the primary server to send all archived logs and on the standby server to apply all archived logs.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4354)
dbvctl started on dbvisit1: Tue Nov 29 14:24:11 2016
=============================================================
>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
>>> Sending heartbeat message... skipped
Performing a log switch...
>>> Transferring Log file(s) from DBPRIMA on dbvisit1 to dbvisit2 for thread 1:
thread 1 sequence 29 (o1_mf_1_29_d3v083x6_.arc)
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:24:16 2016
=============================================================
[oracle@dbvisit1 ~]$


[oracle@dbvisit2 DBPRIMA]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 8665)
dbvctl started on dbvisit2: Tue Nov 29 14:25:07 2016
=============================================================
>>> Sending heartbeat message... skipped
>>> Applying Log file(s) from dbvisit1 to DBPRIMA on dbvisit2:
thread 1 sequence 29 (1_29_926978008.arc)
=============================================================
dbvctl ended on dbvisit2: Tue Nov 29 14:25:09 2016
=============================================================
[oracle@dbvisit2 DBPRIMA]$

We can check the sync status with dbvctl -d DDC -i

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -i
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4420)
dbvctl started on dbvisit1: Tue Nov 29 14:26:11 2016
=============================================================
Dbvisit Standby log gap report for DBPRIMA thread 1 at 201611291426:
-------------------------------------------------------------
Destination database on dbvisit2 is at sequence: 29.
Source database on dbvisit1 is at log sequence: 30.
Source database on dbvisit1 is at archived log sequence: 29.
Dbvisit Standby last transfer log sequence: 29.
Dbvisit Standby last transfer at: 2016-11-29 14:24:16.
Archive log gap for DBPRIMA: 0.
Transfer log gap for DBPRIMA: 0.
Standby database time lag (DAYS-HH:MI:SS): +0:01:37.
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:26:12 2016
=============================================================
[oracle@dbvisit1 ~]$

Now it’s time to import our configuration into our graphical console. For this let’s use MANAGE HOSTS
hosts

And then add hosts using the option NEW
hosts2

Fill in the info and click on CREATE NEW HOST
hosts3
After we add all hosts
hosts4

We can now import our standby configuration using the MANAGE CONFIGURATIONS tab.
config1

Using IMPORT
config2
config3

Now we can manage our configuration either by command line using dbvctl or by using the graphical console. In the next blog we will talk about how to do a SWITCHOVER and a FAILOVER

 

Cet article Creating standby database with Dbvisit 8 est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 5 – Generating DDL commands

Wed, 2016-11-30 11:17

From time to time it is very useful to be able to generate the DDL commands for existing objects (tables, indexes, a whole schema …). In Oracle you can either use the dbms_metadata PL/SQL package for this or use expdp/impdp to generate the statements out of a dump file. What options do you have in PostgreSQL? Note: we’ll not look at any third party tools you could use for that, only plain PostgreSQL.
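For reference, the Oracle side would be something like this (a minimal sketch using dbms_metadata; T1 is just a placeholder table name):

SQL> set long 1000000 pagesize 0
SQL> select dbms_metadata.get_ddl('TABLE','T1',user) from dual;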

As always we’ll need some objects to test with, so here we go:

\c postgres
drop database if exists ddl;
create database ddl;
\c ddl
create table t1 ( a int, b int );
insert into t1 (a,b)
       values (generate_series(1,1000000)
              ,generate_series(1,1000000));
select count(*) from t1;
create index i1 on t1(a);
create unique index i2 on t1(b);
create view v1 as select a from t1;
alter table t1 add constraint con1 check ( a < 2000000 );
\d t1
CREATE FUNCTION add(integer, integer) RETURNS integer
    AS 'select $1 + $2;'
    LANGUAGE SQL
    IMMUTABLE
    RETURNS NULL ON NULL INPUT;

PostgreSQL comes with a set of administration functions which can be used to query various stuff. Some are there to get the definitions for your objects.

You can get the definition of a view:

(postgres@[local]:5439) [ddl] > select pg_get_viewdef('v1'::regclass, true);
 pg_get_viewdef 
----------------
  SELECT t1.a  +
    FROM t1;
(1 row)

You can get the definition of a constraint:

(postgres@[local]:5439) [ddl] > SELECT conname
                                     , pg_get_constraintdef(r.oid, true) as definition
                                  FROM pg_constraint r
                                 WHERE r.conrelid = 't1'::regclass;
 conname |     definition      
---------+---------------------
 con1    | CHECK (a < 2000000)

You can get the definition of a function:

(postgres@[local]:5439) [ddl] > SELECT proname
     , pg_get_functiondef(a.oid)
  FROM pg_proc a
 WHERE a.proname = 'add';
 proname |                   pg_get_functiondef                    
---------+---------------------------------------------------------
 add     | CREATE OR REPLACE FUNCTION public.add(integer, integer)+
         |  RETURNS integer                                       +
         |  LANGUAGE sql                                          +
         |  IMMUTABLE STRICT                                      +
         | AS $function$select $1 + $2;$function$                 +
         | 
--OR
(postgres@[local]:5439) [ddl] > SELECT pg_get_functiondef(to_regproc('add'));
                   pg_get_functiondef                    
---------------------------------------------------------
 CREATE OR REPLACE FUNCTION public.add(integer, integer)+
  RETURNS integer                                       +
  LANGUAGE sql                                          +
  IMMUTABLE STRICT                                      +
 AS $function$select $1 + $2;$function$                 +

You can get the definition of an index:

(postgres@[local]:5439) [ddl] > select pg_get_indexdef('i1'::regclass);
            pg_get_indexdef            
---------------------------------------
 CREATE INDEX i1 ON t1 USING btree (a)
(1 row)

But surprisingly you can not get the DDL for a table. There is just no function available to do this. How can you do that without concatenating the definitions you can get out of the PostgreSQL catalog? The only option I am aware of is pg_dump:

postgres@pgbox:/home/postgres/ [PG961] pg_dump -s -t t1 ddl | egrep -v "^--|^$"
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
CREATE TABLE t1 (
    a integer,
    b integer,
    CONSTRAINT con1 CHECK ((a < 2000000))
);
ALTER TABLE t1 OWNER TO postgres;
CREATE INDEX i1 ON t1 USING btree (a);
CREATE UNIQUE INDEX i2 ON t1 USING btree (b);

Using the “-s” (schema only) and “-t” (tables) options you get the DDL for the complete table. Not as handy as in Oracle where you can do this in sqlplus but it works and produces a result you can work with.

Of course you can always create the DDLs on your own by querying the catalog, e.g. pg_attribute which holds all the column definitions for the tables:

    Table "pg_catalog.pg_attribute"
    Column     |   Type    | Modifiers 
---------------+-----------+-----------
 attrelid      | oid       | not null
 attname       | name      | not null
 atttypid      | oid       | not null
 attstattarget | integer   | not null
 attlen        | smallint  | not null
 attnum        | smallint  | not null
 attndims      | integer   | not null
 attcacheoff   | integer   | not null
 atttypmod     | integer   | not null
 attbyval      | boolean   | not null
 attstorage    | "char"    | not null
 attalign      | "char"    | not null
 attnotnull    | boolean   | not null
 atthasdef     | boolean   | not null
 attisdropped  | boolean   | not null
 attislocal    | boolean   | not null
 attinhcount   | integer   | not null
 attcollation  | oid       | not null
 attacl        | aclitem[] | 
 attoptions    | text[]    | 
 attfdwoptions | text[]    | 
Indexes:
    "pg_attribute_relid_attnam_index" UNIQUE, btree (attrelid, attname)
    "pg_attribute_relid_attnum_index" UNIQUE, btree (attrelid, attnum)

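A minimal sketch of that approach, only rebuilding the column list (constraints, defaults and indexes are ignored, so take it as an illustration rather than a complete generator):

(postgres@[local]:5439) [ddl] > select 'CREATE TABLE '||c.relname||' ('
       ||string_agg(a.attname||' '||format_type(a.atttypid,a.atttypmod), ', ' order by a.attnum)
       ||');'
  from pg_class c
  join pg_attribute a on a.attrelid = c.oid
 where c.relname = 't1'
   and a.attnum > 0
   and not a.attisdropped
 group by c.relname;

For the t1 table above this returns something like CREATE TABLE t1 (a integer, b integer); which is obviously far less complete than what pg_dump produces.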
One nasty way, which is even documented on the PostgreSQL wiki, is this:

(postgres@[local]:5439) [ddl] > create extension plperlu;
CREATE EXTENSION
Time: 90.074 ms
(postgres@[local]:5439) [ddl] > \dx
                      List of installed extensions
  Name   | Version |   Schema   |              Description               
---------+---------+------------+----------------------------------------
 plperlu | 1.0     | pg_catalog | PL/PerlU untrusted procedural language
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language

(postgres@[local]:5439) [ddl] > CREATE OR REPLACE FUNCTION system(text) RETURNS text 
AS 'my $cmd=shift; return `cd /tmp;$cmd`;' LANGUAGE plperlu;
CREATE FUNCTION

(postgres@[local]:5439) [ddl] > select system('pg_dump -s -t t1 ddl | egrep -v "^--|^$"');
                    system                     
-----------------------------------------------
 SET statement_timeout = 0;                   +
 SET lock_timeout = 0;                        +
 SET idle_in_transaction_session_timeout = 0; +
 SET client_encoding = 'UTF8';                +
 SET standard_conforming_strings = on;        +
 SET check_function_bodies = false;           +
 SET client_min_messages = warning;           +
 SET row_security = off;                      +
 SET search_path = public, pg_catalog;        +
 SET default_tablespace = '';                 +
 SET default_with_oids = false;               +
 CREATE TABLE t1 (                            +
     a integer,                               +
     b integer,                               +
     CONSTRAINT con1 CHECK ((a < 2000000))    +
 );                                           +
 ALTER TABLE t1 OWNER TO postgres;            +
 CREATE INDEX i1 ON t1 USING btree (a);       +
 CREATE UNIQUE INDEX i2 ON t1 USING btree (b);+
 

Can be a workaround. Hope this helps…

 

Cet article Can I do it with PostgreSQL? – 5 – Generating DDL commands est apparu en premier sur Blog dbi services.

Oracle 12c DataGuard – Insufficient SRLs reported by DGMGRL VALIDATE DATABASE VERBOSE

Wed, 2016-11-30 07:03

I have set up a DataGuard environment and followed the instructions from Oracle to create the Standby Redo Logs. The Standby Redo Logs have to be the same size as the Online Redo Logs; if not, the RFS process won’t attach Standby Redo Logs. And you should have at least one more Standby Redo Log group than you have Online Redo Log groups, per thread.
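A quick way to see what that rule implies is to count the online groups per thread (a minimal sketch against V$LOG):

SQL> select thread#, count(*) as online_groups, count(*)+1 as recommended_srl_groups
  2    from v$log
  3   group by thread#;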

For my single instance, this should be quite straightforward, so I issued the following commands on the primary and the standby.

alter database add standby logfile group 4 size 1073741824;
alter database add standby logfile group 5 size 1073741824;
alter database add standby logfile group 6 size 1073741824;
alter database add standby logfile group 7 size 1073741824;

After setting everything up, I started the new cool Broker command “DGMGRL> VALIDATE DATABASE VERBOSE ‘<DB>';” and surprisingly found that the validation complains about insufficient Standby Redo Logs.

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE2)          (OCM12C_SITE1)
    1         3                       3                       Insufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE1)          (OCM12C_SITE2)
    1         3                       3                       Insufficient SRLs

After looking everything up on Primary and Standby, the number of Log Groups and the sizes looked ok. I do have 3 Online Redo Log Groups with 1G each, and I have 4 Standby Redo Log Groups with 1G each.

-- Standby

SQL> select thread#, group#, sequence#, status, bytes from v$log;

   THREAD#     GROUP#  SEQUENCE# STATUS                BYTES
---------- ---------- ---------- ---------------- ----------
         1          1          0 UNUSED           1073741824
         1          3          0 UNUSED           1073741824
         1          2          0 UNUSED           1073741824

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4          0 UNASSIGNED 1073741824
         1          5        552 ACTIVE     1073741824
         1          6          0 UNASSIGNED 1073741824
         0          7          0 UNASSIGNED 1073741824

-- Primary

SQL> select thread#, group#, sequence#, status, bytes from v$log;

   THREAD#     GROUP#  SEQUENCE# STATUS                BYTES
---------- ---------- ---------- ---------------- ----------
         1          1        550 INACTIVE         1073741824
         1          2        551 INACTIVE         1073741824
         1          3        552 CURRENT          1073741824

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4          0 UNASSIGNED 1073741824
         1          5          0 UNASSIGNED 1073741824
         1          6          0 UNASSIGNED 1073741824
         0          7          0 UNASSIGNED 1073741824

 

The only strange thing is that Standby Redo Log group 7 shows up with thread 0 instead of thread 1.
I did not even know that a thread 0 exists. Threads always start with 1, and in case of RAC you might see thread 2, 3 or more. But if you want to, you can perfectly create thread 0 without any issue. For what reason, I don’t know.

SQL> alter database add standby logfile thread 0 group 8 size 1073741824;

Database altered.

Ok. Let’s correct the thread 0 thing, and then let’s see what the “DGMGRL> VALIDATE DATABASE VERBOSE ‘<DB>';” command shows.

-- On Standby
		 
DGMGRL> EDIT DATABASE 'OCM12C_SITE1' SET STATE = 'APPLY-OFF';
Succeeded.

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database add standby logfile thread 1 group 7 size 1073741824;

Database altered.

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4        553 ACTIVE     1073741824
         1          5          0 UNASSIGNED 1073741824
         1          6          0 UNASSIGNED 1073741824
         1          7          0 UNASSIGNED 1073741824
		 
DGMGRL> EDIT DATABASE 'OCM12C_SITE1' SET STATE = 'APPLY-ON';
Succeeded.
		 
-- On Primary

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database add standby logfile thread 1 group 7 size 1073741824;

Database altered.

And here we go. Now I have sufficient Standby Redo Logs.

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE2)          (OCM12C_SITE1)
    1         3                       4                       Sufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE1)          (OCM12C_SITE2)
    1         3                       4                       Sufficient SRLs

 

Conclusion

Even on a single instance, use the thread number in your create Standby Redo Log statement.

alter database add standby logfile thread 1 group 4 size 1073741824;
alter database add standby logfile thread 1 group 5 size 1073741824;
alter database add standby logfile thread 1 group 6 size 1073741824;
alter database add standby logfile thread 1 group 7 size 1073741824;

Cheers,
William

 

Cet article Oracle 12c DataGuard – Insufficient SRLs reported by DGMGRL VALIDATE DATABASE VERBOSE est apparu en premier sur Blog dbi services.

Encryption in Oracle Public Cloud

Tue, 2016-11-29 12:04

Oracle Transparent Data Encryption is available without any extra option on the Oracle Public Cloud: Standard Edition as well as Enterprise Edition (EE, EE-HP, EE-EP, ECS). More than that, the DBaaS enforces TDE for any user tablespace, even when it is not specified in the CREATE TABLESPACE statement. If you are not familiar with TDE key management (wallets) then you have probably encountered ORA-28374: typed master key not found in wallet.
Rather than another tutorial on TDE I’ll try to explain it from the errors you may encounter when simply creating a tablespace.

I have created a new pluggable database PDB2 from the command line:

SQL> create pluggable database PDB2 admin user admin identified by "admin";
Pluggable database PDB2 created.
 
SQL> alter pluggable database PDB2 open read write;
Pluggable database PDB2 altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------- --------- ----------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
7 PDB2 READ WRITE NO

I go to the PDB2 container and try to create a tablespace:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> create tablespace mytablespace;
 
Error starting at line 1 in command -
create tablespace mytablespace
Error report -
ORA-28374: typed master key not found in wallet
28374. 0000 - "typed master key not found in wallet"
*Cause: You attempted to access encrypted tablespace or redo logs with
a typed master key not existing in the wallet.
*Action: Copy the correct Oracle Wallet from the instance where the tablespace
was created.

So, this message is related with TDE wallet.

encrypt_new_tablespaces

I didn’t specify any encryption clause in the CREATE TABLESPACE command but it is activated by default by the following parameter:

SQL> show parameter encrypt_new_tablespaces
 
NAME TYPE VALUE
----------------------- ------ ----------
encrypt_new_tablespaces string CLOUD_ONLY

The values can be DDL (the old behavior where encryption must be defined in the CREATE TABLESPACE statement), ALWAYS (AES128 encryption by default), or CLOUD_ONLY which is the same as ALWAYS when the instance is on the Cloud, or as DDL if the instance is on-premises. The default is CLOUD_ONLY.
This parameter has been introduced in 12.2 and has been backported to 11.2.0.4 and 12.1.0.2 with the fix for bug 21281607, which is applied on any Oracle Public Cloud DBaaS instance.

So, one solution to create our tablespace is to set encrypt_new_tablespaces to DDL (see the sketch below), but as it is recommended to encrypt all user tablespaces, let’s continue with encryption.
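For completeness, that alternative would look something like this (a sketch only, assuming the parameter can be changed dynamically in your version; the tablespace would then be created unencrypted, which is not what we want here):

SQL> alter system set encrypt_new_tablespaces=DDL;
SQL> create tablespace mytablespace;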

ORA-28374: typed master key not found in wallet

So the error message means that I don’t have a master key in the wallet for my newly created PDB, because in multitenant each PDB has its own master key (but there’s only one wallet for the CDB).
The wallet is opened:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

But empty (I’m still in the PDB2 container)

SQL> select * from v$encryption_keys order by creation_time;
no rows selected

SET KEY

So the idea is to set a key:

SQL> administer key management set key identified by "Ach1z0#d";

but:

Error starting at line 1 in command -
administer key management set key identified by "Ach1z0#d"
Error report -
ORA-28417: password-based keystore is not open
28417. 0000 - "password-based keystore is not open"
*Cause: Password-based keystore was not opened.
*Action: Close the auto login keystore, if required, and open a
password-based keystore.

Ok. An error because the wallet is not opened. Let’s try to open it:

SQL> administer key management set keystore open identified by "Ach1z0#d";
 
Error starting at line 1 in command -
administer key management set keystore open identified by "Ach1z0#d"
Error report -
ORA-28354: Encryption wallet, auto login wallet, or HSM is already open
28354. 0000 - "Encryption wallet, auto login wallet, or HSM is already open"
*Cause: Encryption wallet, auto login wallet, or HSM was already opened.
*Action: None.

Actually, the wallet is opened. We have seen that the opened wallet is AUTOLOGIN:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

On DBaaS an AUTOLOGIN wallet is used in order to have the database automatically restarted without manual intervention. Without an AUTOLOGIN wallet you have to provide the password.

But the AUTOLOGIN wallet is limited to accessing the tablespaces.
When administering the wallet, we need to provide the password manually.

We need to close the AUTOLOGIN one:

SQL> administer key management set keystore close;
Key MANAGEMENT succeeded.

Now that it is closed, we can try to open it and open it with the password:

SQL> administer key management set keystore open identified by "Ach1z0#d";
 
Error starting at line : 1 in command -
administer key management set keystore open identified by "Ach1z0#d"
Error report -
ORA-28417: password-based keystore is not open
28417. 0000 - "password-based keystore is not open"
*Cause: Password-based keystore was not opened.
*Action: Close the auto login keystore, if required, and open a
password-based keystore.

Oh… it is opened AUTOLOGIN once again:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

CDB$ROOT

You need to open the wallet with the password from CDB$ROOT:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> administer key management set keystore close;
Key MANAGEMENT succeeded.
 
SQL> administer key management set keystore open identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

So here is the right way to start: in CDB$ROOT close the AUTOLOGIN wallet and open it with the password.

PDB

Now ready to go further in the PDB2.


SQL> alter session set container=PDB2;
Session altered.

The wallet is now closed for the PDB:

SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- -------------- ------- ------------ ------------- ---------------- ------
FILE CLOSED UNKNOWN SINGLE UNDEFINED 7

Let’s open it manually:

SQL> administer key management set keystore open identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

We have no encryption key:

SQL> select * from v$encryption_keys order by creation_time;
no rows selected

Let’s do what we want to do from the get-go: create an encryption key for our PDB:

SQL> administer key management set key identified by "Ach1z0#d";
 
Error starting at line 1 in command -
administer key management set key identified by "Ach1z0#d"
Error report -
ORA-46631: keystore needs to be backed up
46631. 00000 - "keystore needs to be backed up"
*Cause: The keystore was not backed up. For this operation to proceed, the
keystore must be backed up.
*Action: Backup the keystore and try again.

Oh yes. Any change must be backed up. That’s easy:


SQL> administer key management set key identified by "Ach1z0#d" with backup;
Key MANAGEMENT succeeded.

Here we are. The key is there:


SQL> select * from v$encryption_keys order by creation_time;
 
KEY_ID TAG CREATION_TIME ACTIVATION_TIME CREATOR CREATOR_ID USER USER_ID KEY_USE KEYSTORE_TYPE ORIGIN BACKED_UP CREATOR_DBNAME CREATOR_DBID CREATOR_INSTANCE_NAME CREATOR_INSTANCE_NUMBER CREATOR_INSTANCE_SERIAL CREATOR_PDBNAME CREATOR_PDBID CREATOR_PDBUID CREATOR_PDBGUID ACTIVATING_DBNAME ACTIVATING_DBID ACTIVATING_INSTANCE_NAME ACTIVATING_INSTANCE_NUMBER ACTIVATING_INSTANCE_SERIAL ACTIVATING_PDBNAME ACTIVATING_PDBID ACTIVATING_PDBUID ACTIVATING_PDBGUID CON_ID
----------------------------------------------------- ---- --------------------------------------- --------------------------------------- -------- ----------- ----- -------- ----------- ------------------ ------- ---------- --------------- ------------- ---------------------- ------------------------ ------------------------ ---------------- -------------- --------------- --------------------------------- ------------------ ---------------- ------------------------- --------------------------- --------------------------- ------------------- ----------------- ------------------ --------------------------------- ------
AXP3BIrVW0+Evwfx7okZtcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.41.20.629496000 PM +00:00 28-NOV-16 08.41.20.629498000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL NO CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 7

All is perfect, but the wallet is still open with the password:

SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------- ------------ ------------- --------------- -------
FILE OPEN PASSWORD SINGLE NO 7

In order to get back to the initial state, it is sufficient to close it (from the CDB$ROOT):


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> administer key management set keystore close;
 
Error starting at line 1 in command -
administer key management set keystore close
Error report -
ORA-28389: cannot close auto login wallet
28389. 00000 - "cannot close auto login wallet"
*Cause: Auto login wallet could not be closed because it was opened with
another wallet or HSM requiring a password.
*Action: Close the wallet or HSM with a password.

OK. The ‘close’ command needs the password because the wallet was not opened with the AUTOLOGIN one (it was opened with the password).


SQL> administer key management set keystore close identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

It is then immediately and automatically re-opened as an AUTOLOGIN wallet:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- --------------------------------------- ------- ------------ ------------- ---------------- ------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ OPEN AUTOLOGIN SINGLE NO 1

and from the CDB$ROOT I can see all of them:

SQL> select * from v$encryption_keys order by creation_time;
 
KEY_ID TAG CREATION_TIME ACTIVATION_TIME CREATOR CREATOR_ID USER USER_ID KEY_USE KEYSTORE_TYPE ORIGIN BACKED_UP CREATOR_DBNAME CREATOR_DBID CREATOR_INSTANCE_NAME CREATOR_INSTANCE_NUMBER CREATOR_INSTANCE_SERIAL CREATOR_PDBNAME CREATOR_PDBID CREATOR_PDBUID CREATOR_PDBGUID ACTIVATING_DBNAME ACTIVATING_DBID ACTIVATING_INSTANCE_NAME ACTIVATING_INSTANCE_NUMBER ACTIVATING_INSTANCE_SERIAL ACTIVATING_PDBNAME ACTIVATING_PDBID ACTIVATING_PDBUID ACTIVATING_PDBGUID CON_ID
----------------------------------------------------- ---- --------------------------------------- --------------------------------------- -------- ----------- ----- -------- ----------- ------------------ ------- ---------- --------------- ------------- ---------------------- ------------------------ ------------------------ ---------------- -------------- --------------- --------------------------------- ------------------ ---------------- ------------------------- --------------------------- --------------------------- ------------------- ----------------- ------------------ --------------------------------- ------
ATxUk1G7gU/0v3Ygk1MbZj8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA 27-NOV-16 09.02.18.050676000 PM +00:00 27-NOV-16 09.02.18.130705000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 CDB$ROOT 1 1 3D94C45E41CA19A9E05391E5E50AB8D8 CDB1 902797638 CDB1 1 4294967295 CDB$ROOT 1 1 3D94C45E41CA19A9E05391E5E50AB8D8 1
AWSs1Gr0WE86vyfWc123xccAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 27-NOV-16 09.02.18.089346000 PM +00:00 27-NOV-16 09.02.18.722365000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 PDB1 3 2687567370 424FA3D9C61927FFE053DA116A0A85F7 CDB1 902797638 CDB1 1 4294967295 PDB1 3 2687567370 424FA3D9C61927FFE053DA116A0A85F7 3
AfwqzZP/Rk+5v5WqiNK5nl0AAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.36.43.980717000 PM +00:00 28-NOV-16 08.36.43.980720000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 PDB2 5 2602763579 42636D1380072BE7E053DA116A0A8E2D CDB1 902797638 CDB1 1 4294967295 PDB2 5 2602763579 42636D1380072BE7E053DA116A0A8E2D 5
AXP3BIrVW0+Evwfx7okZtcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.41.20.629496000 PM +00:00 28-NOV-16 08.41.20.629498000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL NO CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 7

As you can see, I made two attempts with PDB2 while writing this blog post. The previous keys are all still in the wallet.
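
Since the view has a lot of columns, a narrower projection is easier to read. Here is a minimal sketch, selecting only a few of the columns shown above, so you can quickly see which key belongs to which PDB (output omitted):

SQL> select key_id, activating_pdbname, creation_time, backed_up from v$encryption_keys order by creation_time;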

I check that the AUTOLOGIN wallet is open in PDB2:


SQL> alter session set container=PDB2;
Session altered.
SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- -------------- ------- ------------ ------------- ---------------- ------
FILE OPEN AUTOLOGIN SINGLE NO 7

And finally I can create my tablespace:


SQL> create tablespace mytablespace;
Tablespace MYTABLESPACE created.

Easy, isn’t it?
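
If you want to double-check that the new tablespace really got encrypted, you can query the dictionary. This is a minimal sketch assuming the standard views: DBA_TABLESPACES exposes an ENCRYPTED column and V$ENCRYPTED_TABLESPACES shows the algorithm used.

SQL> select tablespace_name, encrypted from dba_tablespaces where tablespace_name='MYTABLESPACE';
SQL> select ts#, encryptionalg from v$encrypted_tablespaces;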

If you create your PDB with the DBaaS monitor interface, all of this is done automatically with the ‘create PDB’ button (a consolidated SQL sketch of these steps follows the list):

  • Close the AUTOLOGIN wallet (from CDB$ROOT)
  • Open the wallet with password
  • Create the pluggable database and open it
  • Open the wallet from the PDB, with password
  • Set the masterkey for the PDB
  • Close the wallet to get it opened with AUTOLOGIN
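
If you prefer to script these steps yourself, here is a minimal sketch of the same sequence. The PDB name MYPDB, the admin user pdbadmin and its password are placeholders to adapt to your environment, the keystore password is the one used throughout this post, and depending on your settings (OMF, FILE_NAME_CONVERT) the CREATE PLUGGABLE DATABASE statement may need additional clauses:

SQL> -- from CDB$ROOT: close the AUTOLOGIN wallet and re-open it with the password
SQL> administer key management set keystore close;
SQL> administer key management set keystore open identified by "Ach1z0#d";
SQL> -- create and open the new PDB (MYPDB and pdbadmin are placeholders)
SQL> create pluggable database MYPDB admin user pdbadmin identified by "Password1";
SQL> alter pluggable database MYPDB open;
SQL> -- from the PDB: open the wallet with the password and set its master key
SQL> alter session set container=MYPDB;
SQL> administer key management set keystore open identified by "Ach1z0#d";
SQL> administer key management set key identified by "Ach1z0#d" with backup;
SQL> -- back in CDB$ROOT: close the wallet so that it is re-opened AUTOLOGIN
SQL> alter session set container=CDB$ROOT;
SQL> administer key management set keystore close identified by "Ach1z0#d";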
 

Cet article Encryption in Oracle Public Cloud est apparu en premier sur Blog dbi services.

Documentum story – Authentication failed for Installation Owner with the correct password

Tue, 2016-11-29 01:00

When installing a new Remote Content Server (High Availability), everything was going according to plan until we tried to log in to DA using this new CS: the login with the Installation Owner (dmadmin) failed… Same result from dqMan or any other third-party tool, and only the iapi or idql sessions on the Content Server itself were still working, thanks to the local trust. When something strange happens with the authentication of the Installation Owner, I think the first thing to do is always to verify whether dm_check_password is able to recognize your username/password:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
Enter user name: dmadmin
Enter user password:
Enter user extra #1 (not used):
Enter user extra #2 (not used):
$DOCUMENTUM/dba/dm_check_password: Result = (245) = (DM_CHKPASS_BAD_LOGIN)

 

As you can see above, this is not working, so apparently the CS thinks that the password isn't the correct one… This might happen for several reasons:

 

1. Wrong permissions on the dm_check_password script

This script is one of the few scripts that need specific permissions to work properly… These permissions are set either by the Installer, if you provide the root password during the installation, or by running the $DOCUMENTUM/dba/dm_root_task script manually with the root account (using sudo/dzdo or asking your UNIX admin team, for example). These are the permissions needed for this script:

[dmadmin@content_server_01 ~]$ ls -l $DOCUMENTUM/dba/dm_check_password
-rwsr-s---. 1 root dmadmin 14328 Oct 10 12:57 $DOCUMENTUM/dba/dm_check_password

 

If the permissions aren't the right ones, or if you think this file has been corrupted somehow, you can re-execute dm_root_task as root. It will ask you whether you want to overwrite the current files and will, in the end, set the permissions properly.
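
Here is a minimal sketch of that re-execution, assuming $DOCUMENTUM is set in root's environment (the exact prompts may differ between Content Server versions):

[root@content_server_01 ~]$ cd $DOCUMENTUM/dba
[root@content_server_01 dba]$ ./dm_root_task
# answer 'y' when the script asks whether it can overwrite the existing files
[root@content_server_01 dba]$ ls -l dm_check_password
# expected ownership/permissions as shown earlier: -rwsr-s---. root dmadmin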

 

2. Expired password/account

If the OS password/account you are testing (the Installation Owner in my case) is expired, there are several possible behaviors. If the account is expired, dm_check_password will return a DM_CHKPASS_ACCOUNT_EXPIRED error. If it is the password that is expired, then on some OSes dm_check_password will fail with the bad login error shown at the beginning of this blog. This can be checked pretty easily on most OSes. On RedHat, for example, it would be something like:

[dmadmin@content_server_01 ~]$ chage -l dmadmin
Last password change                                    : Oct 10, 2016
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 4294967295
Number of days of warning before password expires       : 7

 

In our case, we set the password/account to never expire to avoid such issues, so that's not the problem here. By the way, an expired password will also prevent you from using the crontab, for example…

 

To change these parameters, you need root permissions. Here is how it is done (press Enter to keep the proposed value if it is fine for you). The value between brackets is the current/proposed value, and you can enter your desired value after the colon:

[root@content_server_01 ~]$ chage dmadmin
Changing the aging information for dmadmin
Enter the new value, or press ENTER for the default

        Minimum Password Age [0]: 0
        Maximum Password Age [4294967295]: 4294967295
        Last Password Change (YYYY-MM-DD) [2016-10-10]:
        Password Expiration Warning [7]:
        Password Inactive [-1]: -1
        Account Expiration Date (YYYY-MM-DD) [-1]: -1

 

3. Wrong OS password

Of course, if the OS password isn't the correct one, dm_check_password will return a BAD_LOGIN… Makes sense, doesn't it? What I wanted to explain in this section is that in our case we always use sudo/dzdo to switch to the Installation Owner, and therefore we never really use the Installation Owner's password at the OS level (only in DA, dqMan, and so on…). To check whether the password is correct at the OS level, you can of course open an ssh session as dmadmin directly, or run any other command that requires the password to be entered, like a su to the same user in our case:

[dmadmin@content_server_01 ~]$ su - dmadmin
Password:
[dmadmin@content_server_01 ~]$

 

As you can see, the OS isn’t complaining and therefore this is working properly: the OS password is the correct one.

 

4. Wrong mount options

Finally, the last thing that can prevent you from logging in to your docbases remotely is wrong options on your mount points… For this paragraph, I will assume that $DOCUMENTUM has been installed on the mount point /app. So let's check the current mount options, as root of course:

[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev,nosuid)
[root@content_server_01 ~]$
[root@content_server_01 ~]$ cat /etc/fstab | grep "/app"
/dev/mapper/VG01-LV00 /app                ext4        nodev,nosuid        1 2

 

That seems alright, but actually it isn't… For Documentum to work properly, the nosuid option must not be present on the mount point where it has been installed! Therefore we need to change this. First of all, update the file /etc/fstab so that the change survives a reboot of the Linux host. Just remove ",nosuid" to get something like this:

[root@content_server_01 ~]$ cat /etc/fstab | grep "/app"
/dev/mapper/VG01-LV00 /app                ext4        nodev               1 2

 

Now, the configuration in /etc/fstab isn't applied or reloaded while /app is already mounted. Therefore you need to remount it with the right options, and here is how you can do it:

[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev,nosuid)
[root@content_server_01 ~]$
[root@content_server_01 ~]$ mount -o remount,nodev /app
[root@content_server_01 ~]$
[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev)

 

Now that the mount options are correct, we can run dm_check_password again:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
Enter user name: dmadmin
Enter user password:
Enter user extra #1 (not used):
Enter user extra #2 (not used):
$DOCUMENTUM/dba/dm_check_password: Result = (0) = (DM_EXT_APP_SUCCESS)

 

As you can see, this is now working… That’s a miracle ;).

 

Cet article Documentum story – Authentication failed for Installation Owner with the correct password est apparu en premier sur Blog dbi services.
