Feed aggregator

Reminder: Fusion Middleware Proactive Patches are Certified with EBS

Steven Chan - Tue, 2017-08-01 17:43

The Fusion Middleware team recently released the following updates:

  • Oracle Identity Management Suite BP 11.1.2.3.170718, consisting of
    • Oracle Access Manager BP 11.1.2.3.170718
    • Oracle Unified Directory BP 11.1.2.3.170718
  • Oracle Internet Directory BP 11.1.1.9.170626
  • Oracle SOA Suite BP 12.1.3.0.170718 and 11.1.1.9.170629
  • Oracle Traffic Director SPU 11.1.1.9.0, 11.1.1.7.0
  • Oracle WebCenter Portal BP 11.1.1.9.170718

These updates are documented here:

The Fusion Middleware team calls these Proactive Patches, consistent with their advice that you apply these patches proactively before you inconveniently run into the fixed issues in your production environments.  These are sometimes also called Bundle Patches.  

All of the FMW components listed above have previously been certified with various E-Business Suite releases (see the Related Articles links below).

Our EBS certifications with these components are explicit up to the fourth digit and implicitly cover everything after it. For example, our certification with Oracle Internet Directory 11.1.1.9 applies to:

  • Oracle Internet Directory 11.1.1.9
  • Oracle Internet Directory 11.1.1.9.170626 (the July 2017 update)
  • Oracle Internet Directory 11.1.1.9.170327 (the April 2017 update)

In other words, you do not need to wait for a certification announcement before applying these fifth-level FMW Proactive Patches to your EBS environments.

Related Articles

 

Categories: APPS Blogs

What You Should Know About Docker Containers for Oracle Data Integrator

Pythian Group - Tue, 2017-08-01 14:12

Not long ago, Oracle adopted the Docker engine as one of the accepted platforms for its products and published a set of scripts and examples in the Oracle GitHub repository. This includes scripts for rapid deployment of Oracle Database, WebLogic and a number of other products. I tried some of the published Docker implementations, including the Oracle database, in my sandbox and they worked pretty well for me. Among the published containers I didn't find an example for Oracle Data Integrator (ODI), so I decided to make one for myself. I created it and found it useful when you just need to test one ODI scenario or another and wipe everything out after the tests.

I've decided to share my scripts with you and describe how they were made. Just before publishing the post, I checked the Oracle git repository for Docker and found a new "OracleDataIntegrator" template there. So now we have the proper version from Oracle, but even with the "official" deployment available, you may find my version useful and I hope it helps you understand the Docker engine.

To build my version, I used the Oracle-developed images for Oracle Linux, Oracle Java and Oracle Database as a basis for ODI, making only minor modifications to have the final product better suited to my needs. You may find that other solutions are better and perhaps more adapted to your needs, but please keep in mind that my example is purely educational and not designed for any production use.

First, we need the Docker engine on the machine. Depending on your platform, you need to download an appropriate engine from the Docker site and install it. The installation is pretty straightforward and easy. If you are on Linux, you may need to add your user to the "docker" group. For example, I am adding the user oracle to the group to be able to run docker containers and access the docker registry from the oracle account:

[root@vm129-132 ~]# gpasswd -a oracle docker
Adding user oracle to group docker
[root@vm129-132 ~]# 

And make sure the docker service is up and running:

[root@vm129-132 ~]# systemctl status  docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─docker-sysconfig.conf
   Active: active (running) since Tue 2017-06-06 12:29:35 EDT; 1 months 24 days ago

Then you need to clone or download the Oracle git repository for Docker containers to be able to build the necessary base images before starting with ODI. On your system with Docker, either run "git clone https://github.com/oracle/docker-images.git" or go to https://github.com/oracle/docker-images in your browser, then download and unzip the full archive. There you will find scripts to build Docker images for all the different sets of software. Here is my listing after cloning it from the Oracle git repository:

[oracle@vm129-132 docker-images]$ ls -l
total 68
drwxr-xr-x. 4 oracle oinstall  4096 May 11 16:42 ContainerCloud
drwxr-xr-x. 6 oracle oinstall  4096 May 11 16:42 GlassFish
drwx------. 2 root   root     16384 May 11 16:54 lost+found
drwxr-xr-x. 2 oracle oinstall  4096 May 11 16:42 MySQL
drwxr-xr-x. 7 oracle oinstall  4096 May 11 16:42 NoSQL
drwxr-xr-x. 5 oracle oinstall  4096 May 11 16:42 OpenJDK
drwxr-xr-x. 4 oracle oinstall  4096 May 11 16:42 OracleCoherence
drwxr-xr-x. 5 oracle oinstall  4096 Jul 18 14:40 OracleDatabase
drwxr-xr-x. 4 oracle oinstall  4096 May 11 16:42 OracleHTTPServer
drwxr-xr-x. 6 oracle oinstall  4096 May 11 16:42 OracleJava
drwxr-xr-x. 4 oracle oinstall  4096 May 11 16:42 OracleTSAM
drwxr-xr-x. 4 oracle oinstall  4096 May 11 16:42 OracleTuxedo
drwxr-xr-x. 5 oracle oinstall  4096 May 11 16:42 OracleWebLogic
-rw-r--r--. 1 oracle oinstall  1588 Jul 17 09:10 README.md

(The listing represents the state as of May 2017 and may look different now)

Let's continue and prepare our images step by step. To understand the process, let me recall what we need for ODI in standalone mode. We need a Linux box with a JDK or JRE installed and an Oracle database as a repository. Also, if we plan to use ODI Studio, it makes sense to have either X Window or a VNC server installed on the box. We start by building a Linux image with Java JRE or JDK for our ODI. Oracle provides the "OracleJava" docker configuration where we can build an image with Java 7 or 8:

[oracle@vm129-132 docker-images]$ ll OracleJava
total 20
drwxr-xr-x. 2 oracle oinstall 4096 May 11 16:42 java-7
drwxr-xr-x. 2 oracle oinstall 4096 May 12 10:56 java-8
-rw-r--r--. 1 oracle oinstall 1886 May 11 16:42 README.md
drwxr-xr-x. 4 oracle oinstall 4096 May 11 16:42 windows-java-7
drwxr-xr-x. 4 oracle oinstall 4096 May 11 16:42 windows-java-8

I used Oracle JDK 8 instead of a server JRE distribution. To make that happen, I slightly modified the Dockerfile in the "OracleJava/java-8" directory, replacing the server JRE distribution with the JDK. This is optional and you may choose to keep the JRE instead. In my case, the original string in the file was changed from:

ENV JAVA_PKG=server-jre-8u*-linux-x64.tar.gz

to:

ENV JAVA_PKG=jdk-8u*-linux-x64.tar.gz

After that, I downloaded the JDK from the Oracle OTN site, put it in the folder and ran the build.sh script. The script prepares an image with Oracle Linux 7 in minimal configuration with the Oracle JDK 8 installed.

[oracle@vm129-132 java-8]$ ll
total 181204
-rwxr-xr-x. 1 oracle oinstall        47 May 11 16:42 build.sh
-rw-r--r--. 1 oracle oinstall       644 May 11 16:42 Dockerfile
-rw-r--r--. 1 oracle oinstall 185540433 May 12 10:43 jdk-8u131-linux-x64.tar.gz
-rw-r--r--. 1 oracle oinstall       263 May 11 16:42 server-jre-8u101-linux-x64.tar.gz.download
[oracle@vm129-132 java-8]$ cp Dockerfile Dockerfile.orig
[oracle@vm129-132 java-8]$ vi Dockerfile
[oracle@vm129-132 java-8]$ ./build.sh 
Sending build context to Docker daemon 185.5 MB
Step 1 : FROM oraclelinux:7-slim
7-slim: Pulling from library/oraclelinux
...............
Successfully built 381e0684cea2
[oracle@vm129-132 java-8]$ 
[oracle@vm129-132 java-8]$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
oracle/serverjre      8                   381e0684cea2        10 weeks ago        490.6 MB
oraclelinux           7-slim              442ebf722584        3 months ago        114.4 MB

The next step is optional, but it makes life a bit easier. We need a number of tools installed on top of our minimal installation. In Docker, you can create a container, modify it and save it as another image using the "commit" command. The beauty of this is that you are not really doubling your space consumption with those incremental images. Behind the scenes, Docker just adds the extra changes you've made as a separate layer and uses the original image plus your changes for the newly created image. You may think of it as a set of "snapshots". So, I created a container named "serverjrevnc" from the "oracle/serverjre:8" image and installed additional packages, including some diagnostic, configuration and other useful packages such as a VNC server, the vi editor, and others.

[oracle@vm129-132 java-8]$ docker run --name serverjrevnc -p 5901:5901 -ti oracle/serverjre:8
bash-4.2# 
bash-4.2# yum -y install vim
bash-4.2# yum -y install net-tools
bash-4.2# yum -y install telnet
bash-4.2# yum -y install strace
bash-4.2# yum -y install gcc
bash-4.2# yum -y install xterm
.....

After that, I committed the container as a new image and saved it as "oracle/serverjrevnc:8".

[oracle@vm129-132 java-8]$ docker commit serverjrevnc oracle/serverjrevnc:8
sha256:ac5b4d85fccc5427c92e65a6c3b1c06e3e8d04ffbe7725bcca1759a2165353d7
[oracle@vm129-132 java-8]$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
oracle/serverjrevnc   8                   ac5b4d85fccc        3 minutes ago       1.661 GB
oracle/serverjre      8                   381e0684cea2        34 minutes ago      490.6 MB
oraclelinux           7-slim              442ebf722584        3 weeks ago         114.4 MB
[oracle@vm129-132 java-8]$

The container "serverjrevnc" can be deleted now:

[oracle@vm129-132 java-8]$ docker stop serverjrevnc
[oracle@vm129-132 java-8]$ docker rm serverjrevnc

Now we have a basic Oracle Linux image with Java installed and all the tools and utilities we need. The image can be used in the next step as the basis for our Oracle Database image. The database will serve as the repository for our ODI. We go to the folder "docker-images/OracleDatabase/dockerfiles/", where we replace the line "FROM oraclelinux:7-slim" with "FROM oracle/serverjrevnc:8", download the 12.2.0.1 EE database software from the Oracle site and build the image for Oracle Database 12.2.0.1.

[oracle@vm129-132 java-8]$ cd ../../OracleDatabase/dockerfiles/
[oracle@vm129-132 dockerfiles]$ ll
total 16
drwxr-xr-x. 2 oracle oinstall 4096 May 11 16:42 11.2.0.2
drwxr-xr-x. 2 oracle oinstall 4096 Jul 18 14:18 12.1.0.2
drwxr-xr-x. 2 oracle oinstall 4096 Jul 25 14:37 12.2.0.1
-rwxr-xr-x. 1 oracle oinstall 3975 May 11 16:42 buildDockerImage.sh
[oracle@vm129-132 dockerfiles]$ vi 12.2.0.1/Dockerfile.ee
[oracle@vm129-132 dockerfiles]$ ll 12.2.0.1/*.zip
total 3372836
-rw-r--r--. 1 oracle oinstall 3453696911 May 12 09:26 linuxx64_12201_database.zip

[oracle@vm129-132 dockerfiles]$ ./buildDockerImage.sh -v 12.2.0.1 -e 
Checking if required packages are present and valid...
......

  Build completed in 906 seconds.
  
[oracle@vm129-132 dockerfiles]$

Here is the new list of images we have after building the Oracle Database image.

[oracle@vm129-132 dockerfiles]$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
oracle/database       12.2.0.1-ee         91aaea30a651        28 minutes ago      16.18 GB
oracle/serverjrevnc   8                   2d7be7d163dc        50 minutes ago      1.543 GB
oracle/serverjre      8                   c2c247029798        About an hour ago   490.5 MB
oraclelinux           7-slim              08a01cc7be97        5 weeks ago         114.4 MB

Having all the necessary images prepared, we can start building our ODI. I have prepared the scripts to build the Docker image and published them on https://github.com/gotochkin/docker-images. You can either download or clone them using git. Let's have a look at the scripts and discuss what they do:

[oracle@vm129-132 ~]$ ls -l docker-images/ODI/dockerfiles/
total 8
drwxr-xr-x. 2 oracle oinstall 4096 Jul 27 14:18 12.2.1.2.6
-rwxr-xr-x. 1 oracle oinstall 3195 Jul 27 11:54 buildDockerImage.sh
[oracle@vm129-132 ~]$ ls -l docker-images/ODI/dockerfiles/12.2.1.2.6/
total 2390372
-rw-r--r--. 1 oracle oinstall        277 May  9 14:42 Checksum.standalone
-rw-r--r--. 1 oracle oinstall       8572 Jul 17 12:43 createOdiDomainForStandaloneAgent.py
-rw-r--r--. 1 oracle oinstall       4270 Jul 26 10:41 Dockerfile.standalone
-rw-r--r--. 1 oracle oinstall       1289 May  9 14:49 install.file
-rw-r--r--. 1 oracle oinstall       6477 Jul 14 13:03 oracledi.gz
-rw-r--r--. 1 oracle oinstall         55 May  8 13:57 oraInst.loc
-rw-rw-r--. 1 oracle oinstall       2695 May 15 15:08 rcuResponseFile.properties
-rw-r--r--. 1 oracle oinstall       7920 Jul 27 14:18 README.md
-rwxr-xr-x. 1 oracle oinstall       5332 Jul 14 15:51 runOracle.sh
-rwxr-xr-x. 1 oracle oinstall       4406 Jul 27 11:08 startAgent.sh
[oracle@vm129-132 ~]$ 

— buildDockerImage.sh – A script to build the ODI image. It takes the parameter -v for the ODI version (so far only 12.2.1.2.6) and -t to say that we are going to configure the ODI agent in standalone mode.
— Checksum.standalone – Used to verify checksums for the downloaded installation files, which you will need to put into the docker-images/ODI/dockerfiles/12.2.1.2.6 directory.
— createOdiDomainForStandaloneAgent.py – A Python script to create a domain for an ODI standalone agent. The original script was taken from an Oracle tutorial and slightly modified for our needs.
— Dockerfile.standalone – The instructions telling Docker how to build the image for the ODI standalone agent.
— install.file – A response file for the ODI silent installation.
— oracledi.gz – Gzipped files with ODI connection properties, which will be uncompressed into the $HOME/.odi directory.
— oraInst.loc – The configuration for the Oracle inventory.
— rcuResponseFile.properties – A parameter file for the Repository Creation Utility.
— README.md – Instructions on how to build the image.
— runOracle.sh – A startup script for the Oracle database that is going to be used as the repository.
— startAgent.sh – A script to configure and start the ODI agent.

We need to download the ODI installation files fmw_12.2.1.2.6_odi_Disk1_1of2.zip and fmw_12.2.1.2.6_odi_Disk1_2of2.zip from the Oracle OTN site and put them into the docker-images/ODI/dockerfiles/12.2.1.2.6 folder.

At last, everything is ready and we can build the image using the buildDockerImage.sh script. Of course, you can build it without the script, since it is only a wrapper for the "docker build" command. The script just makes it a bit easier.
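For reference, here is a rough sketch of what the direct command would look like; the tag and file names are assumed from the repository layout shown above, and the real script may add checksum verification and option handling on top of this:

# assumed equivalent of the wrapper, run from the ODI dockerfiles folder
cd docker-images/ODI/dockerfiles
docker build -t oracle/odi:12.2.1.2.6-standalone -f 12.2.1.2.6/Dockerfile.standalone 12.2.1.2.6/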

[oracle@vm129-132 dockerfiles]$ ./buildDockerImage.sh -v 12.2.1.2.6 -t

..............

Successfully built 239bdf178fbe

  ODI Docker Image for 'standalone' version 12.2.1.2.6 is ready to be extended: 
    
    --> oracle/odi:12.2.1.2.6-standalone

  Build completed in 1032 seconds.

We can see the built image on the list:

[oracle@vm129-132 dockerfiles]$ docker images
REPOSITORY            TAG                     IMAGE ID            CREATED             SIZE
oracle/odi            12.2.1.2.6-standalone   239bdf178fbe        24 seconds ago      23.73 GB
oracle/database       12.2.0.1-ee             91aaea30a651        About an hour ago   16.18 GB
oracle/serverjrevnc   8                       2d7be7d163dc        About an hour ago   1.543 GB
oracle/serverjre      8                       c2c247029798        About an hour ago   490.5 MB
oraclelinux           7-slim                  08a01cc7be97        5 weeks ago         114.4 MB

We are ready to create our container with ODI. When we create the container, it will perform several steps, which in general can be listed as:
— Create a container with Oracle Linux 7 with JDK and other supplemental packages.
— Create or start an Oracle database.
— Create an ODI repository, if it is not created already.
— Configure an ODI agent in the repository or adjust hostname for the agent if it has been already configured in the repository.
— Create a Weblogic domain for standalone ODI agent.
— Start the agent.

We have the option to create a fresh repository database every time we deploy a new container, using the command:

[oracle@vm129-132 dockerfiles]$ docker run --name oditest -p 1521:1521 -p 5500:5500 -p 5901:5901 -p 5902:5902 --env ORACLE_BASE=/opt/oracle --env ORACLE_HOME=/opt/oracle/product/12.2.0.1/dbhome_1 oracle/odi:12.2.1.2.6-standalone

In this case the database is created inside the Docker file system. This is convenient when you want a fresh repository every time, but it takes time to create a new database and, as a result, your deployment is delayed.

Alternatively, we can define a volume for the database files outside the Docker file system, in which case you can reuse the database for your containers again and again, which can save some time during deployment.

[oracle@vm129-132 dockerfiles]$ docker run --name oditest -p 1521:1521 -p 5500:5500 -p 5901:5901 -p 5902:5902 -v /home/oracle/docker-images/OracleDatabase/oradata:/opt/oracle/oradata --env ORACLE_BASE=/opt/oracle --env ORACLE_HOME=/opt/oracle/product/12.2.0.1/dbhome_1 oracle/odi:12.2.1.2.6-standalone

Just be aware that if you want to use more than one container with the same Oracle database, the scripts should be adapted; in the current implementation a new deployment will try to use the same repository.
After executing the command you will see the creation log and, at the end, you get a container with a running standalone ODI agent.

[oracle@vm129-132 ~]$ docker ps
CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS              PORTS                                                                                        NAMES
71969452f5b3        oracle/odi:12.2.1.2.6-standalone   "/u01/app/oracle/star"   4 days ago          Up 2 days           0.0.0.0:1521->1521/tcp, 0.0.0.0:5500->5500/tcp, 0.0.0.0:5901-5902->5901-5902/tcp, 5903/tcp   oditest
[oracle@vm129-132 ~]$ 

Inside the container you can start a VNC server and run ODI Studio.

[oracle@vm129-132 ~]$ docker exec -ti oditest bash
[oracle@71969452f5b3 Middleware]$ vncserver

You will require a password to access your desktops.

Password:

After starting ODI Studio you get the usual questions, such as whether you want to import any settings or allow usage data about ODI Studio to be sent to Oracle. You don't have anything to import from a previous installation since this is the first one, so you can say "no". The studio eventually comes up and you need to connect to the repository. When you push the button to connect to the repository, you will be asked whether you want to store the credentials in a wallet.

You have an option to refuse and use the pre-created connection and saved credentials.

Ideally this makes the usage a bit easier and more convenient. Of course, for any kind of production development it is strongly recommended to use a wallet and a proper password.
If you're someone making your first steps in Docker, I hope this article has been helpful. In my opinion, Docker can be extremely useful for test deployments. The persistent database files make the deployment easy and quick. I have some reservations about using Docker for any production deployments of Oracle databases, but that discussion deserves a dedicated post. Stay tuned.

Categories: DBA Blogs

Display Data Guard configuration in SQL Developer

Yann Neuhaus - Tue, 2017-08-01 13:34

The latest version of SQL Developer, the 17.2 one released after Q2 of 2017, has a new item in the DBA view showing the Data Guard configuration. This is the occasion to show how you can cascade the log shipping in Oracle 12c.

A quick note about this new versioning: this is the release for 2017 Q2, and the version number has more digits to indicate the exact build time. Here the version is labeled 17.2.0.188.1159 and we can see when it was built:

SQL> select to_date('17.x.0.188.1159','rr."x.0".ddd.hh24mi') build_time from dual;
 
BUILD_TIME
--------------------
07-JUL-2017 11:59:00

Non-Cascading Standby

Here is my configuration with two standby databases:

DGMGRL> show configuration
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orcla - Primary database
orclb - Physical standby database
orclc - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 9 seconds ago)

I have only the LogXptMode defined here, without any RedoRoutes:

DGMGRL> show database orcla LogXptMode
LogXptMode = 'SYNC'

With this configuration, the broker has set the following log destinations on orcla, orclb and orclc:

INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLA log_archive_dest_1 location=USE_DB_RECOVERY_FILE_DEST, valid_for=(ALL_LOGFILES, ALL_ROLES)
ORCLA log_archive_dest_2 service="ORCLB", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300
db_unique_name="orclb" net_timeout=30, valid_for=(online_logfile,all_roles)
ORCLA log_archive_dest_3 service="ORCLC", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300
db_unique_name="orclc" net_timeout=30, valid_for=(online_logfile,all_roles)
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLB log_archive_dest_1 location=/u01/fast_recovery_area
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLC log_archive_dest_1 location=/u01/fast_recovery_area

In the latest SQL Developer you have the graphical representation of it from the DBA view / Dataguard / console:

(Screenshot: SQL Developer Data Guard console)

Cascading Standby

In 12c we can define cascading standbys: instead of the primary shipping the redo to all standby databases, you can have the primary ship to one standby only, and this one can forward the redo to another one. You define that with the RedoRoutes property:


DGMGRL> edit database orcla set property redoroutes = '(local:orclb) (orclb:orclc async)';
Property "redoroutes" updated
DGMGRL> edit database orclb set property redoroutes = '(orcla:orclc async) (local:orcla)';
Property "redoroutes" updated

The first route defined in each property is applied when orcla is the primary database:

  • on orcla, (local:orclb) means that orcla sends redo to orclb when primary
  • on orclb, (orcla:orclc async) means that orclb sends redo to orclc when orcla is primary. LogXptMode is SYNC but is overridden here with ASYNC

The second route defined in each property is applied when orclb is the primary database:

  • on orcla, (orclb:orclc async) means that orcla forwards redo to orclc when orclb is primary. LogXptMode is SYNC but is overridden here with ASYNC
  • on orclb, (local:orcla) means that orclb sends redo to orcla when primary
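As a quick check (my own sketch, not from the original post), the property can be displayed per database in the same way as LogXptMode above; the expected output simply echoes the values set by the edit commands:

DGMGRL> show database orcla RedoRoutes
  RedoRoutes = '(local:orclb) (orclb:orclc async)'
DGMGRL> show database orclb RedoRoutes
  RedoRoutes = '(orcla:orclc async) (local:orcla)'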

With this configuration, and orcla still being the primary, the broker has set the following log destination on orcla, orclb and orclc:


INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLA log_archive_dest_1 location=USE_DB_RECOVERY_FILE_DEST, valid_for=(ALL_LOGFILES, ALL_ROLES)
ORCLA log_archive_dest_2 service="ORCLB", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300
db_unique_name="orclb" net_timeout=30, valid_for=(online_logfile,all_roles)
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLB log_archive_dest_1 location=/u01/fast_recovery_area
ORCLB log_archive_dest_2 service="ORCLC", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=3
00 db_unique_name="orclc" net_timeout=30, valid_for=(standby_logfile,all_roles)
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLC log_archive_dest_1 location=/u01/fast_recovery_area

The show configuration from DGMGRL displays them indented to see the cascading redo shipping:

DGMGRL> show configuration
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orcla - Primary database
orclb - Physical standby database
orclc - Physical standby database (receiving current redo)
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 27 seconds ago)

And SQL Developer Data Guard console shows:
(Screenshot: SQL Developer Data Guard console showing the cascading configuration)

Switchover

Now, the goal of defining several routes is to have all log destinations automatically changed when the database roles change.
I'm doing a switchover:


Connected to "orclb"
Connected as SYSDG.
DGMGRL> switchover to orclb;
Performing switchover NOW, please wait...
New primary database "orclb" is opening...
Operation requires start up of instance "ORCLA" on database "orcla"
Starting instance "ORCLA"...
ORACLE instance started.
Database mounted.
Database opened.
Connected to "orcla"
Switchover succeeded, new primary is "orclb"

Now it is orcla which cascades the orclb redo to orclc:

DGMGRL> show configuration;
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orclb - Primary database
orcla - Physical standby database
orclc - Physical standby database (receiving current redo)
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 74 seconds ago)

Here is how it is displayed from SQL Developer:

(Screenshot: SQL Developer Data Guard console after the switchover)

We have seen how the configuration is displayed from DGMGRL and graphically from SQL Developer. Of course, you can also query the Data Guard configuration:

SQL> select * from V$DATAGUARD_CONFIG;
 
DB_UNIQUE_NAME PARENT_DBUN DEST_ROLE CURRENT_SCN CON_ID
-------------- ----------- --------- ----------- ------
orcla orclb PHYSICAL STANDBY 3407900 0
orclc orcla PHYSICAL STANDBY 3408303 0
orclb NONE PRIMARY DATABASE 0 0

and the broker configuration:

SQL> select * from V$DG_BROKER_CONFIG;
 
DATABASE CONNECT_IDENTIFIER DATAGUARD_ROLE REDO_SOURCE ENABLED STATUS VERSION CON_ID
-------- ------------------ -------------- ----------- ------- ------ ------- ------
orcla ORCLA PHYSICAL STANDBY -UNKNOWN- TRUE 0 11.0 0
orclb ORCLB PRIMARY -N/A- TRUE 0 11.0 0
orclc ORCLC PHYSICAL STANDBY orcla TRUE 0 11.0 0

This is another reason to use the broker. Once the configuration is set up and tested, you have nothing else to think about when you do a switchover. The log archive destinations are automatically updated depending on the database roles.

 

The article Display Data Guard configuration in SQL Developer appeared first on Blog dbi services.

Oracle Hospitality Introduces New Data Science Cloud Services to Help Food & Beverage Operators Optimize Every Sale

Oracle Press Releases - Tue, 2017-08-01 11:01
Press Release
Oracle Hospitality Introduces New Data Science Cloud Services to Help Food & Beverage Operators Optimize Every Sale Expert Analysis and Machine Learning Enable Improved Menu Optimization and Greater Cost and Inventory Control

Redwood Shores, Calif.—Aug 1, 2017

Empowering food and beverage operators to convert data into profit, Oracle Hospitality today announced Data Science Cloud Services. With the new services, food and beverage operators gain the ability to analyze key information such as sales, guest, marketing and staff performance data at unprecedented speed – generating insights that lead directly to actionable measures that improve their top and bottom lines.

The suite includes two cloud-driven offerings – Oracle Hospitality Menu Recommendations Cloud Service and Oracle Hospitality Adaptive Forecasts Cloud Service – currently available to operators worldwide, enabling them to improve up-sell and cross-sell opportunities, and optimize operations, respectively.

The new Data Science Cloud Services bring Oracle’s renowned machine learning and data-analytics expertise specifically to the food and beverage industry. This, combined with years of hospitality industry knowledge, delivers quick wins for operators, while saving them the significant expense of having to hire their own analysts and invest in a data processing infrastructure. In addition to Oracle technology, Data Science delivers the support of a team of leading data scientists, database engineers and experienced hospitality consultants. 

“Margins are being squeezed in hospitality like never before,” said Mike Webster, Senior Vice President and General Manager, Oracle Hospitality. “Labor and food costs are increasing, and competition for the dining dollar is high. With our Data Science Cloud Services, we are giving our customers the ability to be as profitable as possible, by helping them pinpoint cost-savings in each location while optimizing every single sales opportunity to deliver revenue growth.”

Making Every Sale Count with Oracle Hospitality Menu Recommendations Cloud Service

Oracle Hospitality Menu Recommendations Cloud Service allows food and beverage operators with multiple locations to evaluate their menus and identify enhancements to maximize every sales opportunity. The Data Science service can seek the best possible up-sell or cross-sell options by location or time of day, with recommendations dynamically updating based on customer behavior. Assumptions around cross-sells and up-sells can be analyzed, leading to better understanding of guest behavior and preferences.  

Speed to value is accelerated, thanks to integration between the Data Science service and the Oracle Hospitality technology platform. Recommendations are available at point-of-service terminals and displayed as localized cross-sells or timed up-sells. Such simplicity enables staff to optimize sales and serve guests without delay or confusion.

Predicting Stock and Labor Needs with Oracle Hospitality Adaptive Forecasts Cloud Service

Oracle Hospitality Adaptive Forecasts lets operators better predict stock and labor needs at every location. The service creates a single forecast by item, location and day part, and factors in weather, events, time of day, day of the week and Net Promoter scores. Such forecasting maintains appropriate levels of inventory and staffing in all business scenarios, helping store managers minimize wasted inventory, lower labor costs and, most importantly, ensure an exceptional guest experience.

For Oracle Hospitality customers, these Advanced Science Cloud Services complement the self-service data access and reporting solutions that are already available, including the InMotion mobile app that provides real-time access to restaurant KPIs and the Reporting and Analytics 9.0 service that was launched in April 2017.

About Oracle Hospitality

Oracle Hospitality brings 35 years of experience in providing technology solutions to food and beverage operators. We provide hardware, software, and services that allow our customers to deliver exceptional guest experiences while maximizing profitability. Our solutions include integrated point-of-sale, loyalty, reporting and analytics, inventory and labor management, all delivered from the cloud to lower IT cost and maximize business agility.

For more information about Oracle Hospitality, please visit www.Oracle.com/Hospitality

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Postgres vs. Oracle access paths I – Seq Scan

Yann Neuhaus - Tue, 2017-08-01 10:58

Here is the first test I've done for my Postgres vs. Oracle access paths series, and the first query does a sequential scan. It illustrates the first constant you find in the documentation for the query planner:
seq_page_cost (floating point)
Sets the planner’s estimate of the cost of a disk page fetch that is part of a series of sequential fetches. The default is 1.0.

Table creation

I start by creating a very simple table with 10000 rows and 3 columns. The first column (n) is indexed:

create table demo1 as select generate_series n , 1 a , lpad('x',1000,'x') x from generate_series(1,10000);
SELECT 10000
create unique index demo1_n on demo1(n);
CREATE INDEX
 
analyze verbose demo1;
INFO: analyzing "public.demo1"
INFO: "demo1": scanned 1429 of 1429 pages, containing 10000 live rows and 0 dead rows; 10000 rows in sample, 10000 estimated total rows
ANALYZE
select relkind,relname,reltuples,relpages from pg_class where relname='demo1';
relkind | relname | reltuples | relpages
---------+---------+-----------+----------
r | demo1 | 10000 | 1429
 
select relkind,relname,reltuples,relpages from pg_class where relname='demo1_n';
relkind | relname | reltuples | relpages
---------+---------+-----------+----------
i | demo1_n | 10000 | 30

I checked the table and index statistics that will be used by the optimizer: 10000 rows, all indexed, 1429 table blocks and 30 index blocks. Note that blocks are called pages, but that’s the same idea: the minimal size read and written to disk. They are also called buffers as they are read into a buffer and cached in the buffer cache.

Here is how I create a similar table in Oracle:

create table demo1 as select rownum n , 1 a , lpad('x',1000,'x') x from xmltable('1 to 10000');
Table created.
create unique index demo1_n on demo1(n);
Index created.
exec dbms_stats.gather_table_stats(user,'demo1');
PL/SQL procedure successfully completed.
 
select table_name,num_rows,blocks from user_tables where table_name='DEMO1';
 
TABLE_NAME NUM_ROWS BLOCKS
---------- ---------- ----------
DEMO1 10000 1461
 
select index_name,num_rows,leaf_blocks,blevel from user_indexes where table_name='DEMO1';
 
INDEX_NAME NUM_ROWS LEAF_BLOCKS BLEVEL
---------- ---------- ----------- ----------
DEMO1_N 10000 20 1

The same rows are stored in 1461 table blocks and the index entries in 20 blocks. Both use 8k blocks, but with different storage layouts and different defaults. This is about 7 rows per table block, for rows that are a bit larger than 1k, and about 500 index entries per index block to store the number for column N plus the pointer to the table row (a few bytes called the TID in Postgres or ROWID in Oracle). I'll not get into the details of those numbers here.

My goal is to detail the execution plans and the execution statistics.

Postgres Seq Scan

I start with a very simple query on my table: SELECT SUM(N) from DEMO1;


explain (analyze,verbose,costs,buffers) select sum(n) from demo1 ;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1554.00..1554.01 rows=1 width=8) (actual time=4.616..4.616 rows=1 loops=1)
Output: sum(n)
Buffers: shared hit=1429
-> Seq Scan on public.demo1 (cost=0.00..1529.00 rows=10000 width=4) (actual time=0.011..3.614 rows=10000 loops=1)
Output: n, a, x
Buffers: shared hit=1429
Planning time: 0.468 ms
Execution time: 4.661 ms

This query does a sequential scan (Seq Scan), which is the equivalent of Oracle Full Table Scan: read all rows from the table. You might tell me that it would be cheaper to scan the index because the index I’ve created holds all required columns. We will see that in the next post. Here, after having created the table as I did above, the query planner prefers to scan the table.

Here are the maths: my table has 1429 pages and each page access during a sequential scan has cost=1 as defined by:

show seq_page_cost;
seq_page_cost
---------------
1

Here, I see a cost estimated from 0 to 1529 for the Seq Scan operation.
The first number, 0.00, is the startup cost, estimating the work done before returning any rows. A Seq Scan has nothing to do beforehand, and reading the first block can already return rows.
The second number is the cost to return all rows. We have seen that the scan itself costs 1429, but the rows (tuples) must also be read and processed. This is evaluated using the following constant:

show cpu_tuple_cost;
cpu_tuple_cost
----------------
0.01

For 10000 rows, the cost to process them is 0.01*10000=100, which is an additional cost over the Seq Scan's 1429, bringing it to 1529. This explains cost=0.00..1529.00.

Then there is a SUM operation applied to 10000 rows and there is a single parameter for the CPU cost of operators and functions:

show cpu_operator_cost;
cpu_operator_cost
-------------------
0.0025

The sum (Aggregate) operation adds 0.0025*10000=25 to the cost, which brings it to 1554. You can see this cost as the minimal cost for the query, the first number in cost=1554.00..1554.01, which is the cost before retrieving any rows. This makes sense because before retrieving the first row we need to read (Seq Scan) and process (Aggregate) all rows, which is exactly what the cost of 1554 represents.

Then there is an additional cost when we retrieve all rows. It is only one row here because it is a sum without group by, and this adds the default cpu_tuple_cost=0.01 to the initial cost: 1554.01.

In summary, the total cost of the query is cost=1554.00..1554.01 and we have seen that it depends on:
– number of pages in the table
– number of rows from the result of the scan (we have no where clause here)
– number of rows summed and retrieved
– the planner parameters seq_page_cost, cpu_tuple_cost, and cpu_operator_cost
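As a quick sanity check (my own sketch, not part of the original explain output), the startup cost of the Aggregate can be recomputed directly from the catalog statistics and the planner parameters:

-- recompute 1429 + 100 + 25 = 1554 from pg_class and the planner settings
select relpages::numeric  * current_setting('seq_page_cost')::numeric      -- 1429 pages
     + reltuples::numeric * current_setting('cpu_tuple_cost')::numeric     -- + 100 for 10000 tuples
     + reltuples::numeric * current_setting('cpu_operator_cost')::numeric  -- +  25 for the sum()
       as aggregate_startup_cost
from pg_class where relname='demo1';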

Oracle Full Table Scan

When I run the same query on Oracle, the optimizer chooses an index fast full scan rather than a table full scan because all rows and columns are in the index that I’ve created:

  • all rows, because SUM(N) does not need the rows where N is null (and those are not stored in the index anyway)
  • all columns, because we need nothing other than the values of N

We will see that in the next post. For the moment, in order to compare with Postgres, I forced a full table scan with the FULL() hint.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID bhsjquhh6y08q, child number 0
-------------------------------------
select /*+ full(demo1) */ sum(n) from demo1
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 397 (100)| 1 |00:00:00.01 | 1449 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 1449 |
| 2 | TABLE ACCESS FULL| DEMO1 | 1 | 10000 | 397 (0)| 10000 |00:00:00.01 | 1449 |
---------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) SUM("N")[22]
   2 - (rowset=256) "N"[NUMBER,22]

We have seen that Postgres cost=1 is for sequential scans (similar to what we call multiblock reads in Oracle) and random reads (single block reads) have by default cost=4 according to random_page_cost.

The Oracle cost unit is based on single block reads, and this is why the cost here (397) is lower than the number of blocks (1461). Different units: Postgres counts cost=1 for sequential reads and a higher cost when a seek is involved; Oracle counts cost=1 for single block reads (including the seek) and a lower per-block cost for larger I/O sizes.
With the default system statistics, latency is estimated at 10 milliseconds and the transfer rate at 4KB/ms. The single block read time is then estimated at 12 milliseconds (10 + 8192/4096).
Again with the default system statistics, where the optimizer estimates 8 blocks per multiblock read, the multiblock read time is estimated at 26 milliseconds (10 + 8*8192/4096), which is on average 26/8=3.25 milliseconds per block. This means that the ratio of single vs. multiblock read cost is very similar for Oracle (3.25/12=0.2708) and Postgres (seq_page_cost/random_page_cost=1/4=0.25) with default parameters.

Our table is stored in 1461 blocks and the full table scan involves reading all of them plus some segment header blocks: 1461*0.2708≈396.
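If you want to see the inputs of this formula on your own system, here is a small sketch (it requires access to the SYS-owned statistics table): with default noworkload system statistics, IOSEEKTIM is 10 ms and IOTFRSPEED is 4096 bytes/ms, the values used above.

-- system statistics used by the optimizer's I/O cost model
select pname, pval1 from sys.aux_stats$ where sname='SYSSTATS_MAIN';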

There is also the costing of CPU (the equivalent of cpu_tuple_cost), which is included here, but I'll not go into the details, which are more complex than in Postgres and depend on your processor frequency. The goal of these posts is Postgres. For Oracle, all this is explained in Jonathan Lewis's and Chris Antognini's books.

But basically, the idea is the same: Postgres Seq Scan and Oracle Full table Scan read the contiguous table blocks sequentially and the cost mainly depends on the size of the table (number of blocks) and the estimated time for sequential I/O (where bandwidth counts more than latency).

Buffers

In my tests, I’ve not only explained the query, but I executed it to get execution statistics. This is done with EXPLAIN ANALYZE in Postgres and DBMS_XPLAN.DISPLAY_CURSOR in Oracle. The statistics include the number of blocks read at each plan operation, with the BUFFERS option in Postgres and with STATISTICS_LEVEL=ALL in Oracle.


explain (analyze,buffers) select sum(n) from demo1 ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------
Aggregate (cost=1554.00..1554.01 rows=1 width=8) (actual time=3.622..3.622 rows=1 loops=1)
Buffers: shared hit=1429
-> Seq Scan on demo1 (cost=0.00..1529.00 rows=10000 width=4) (actual time=0.008..1.724 rows=10000 loops=1)
Buffers: shared hit=1429
Planning time: 0.468 ms
Execution time: 4.661 ms

‘Buffers’ displays the number of blocks that have been read by the Seq Scan and is exactly the number of pages in my table. ‘shared hit’ means that they come from the buffer cache.

Let’s run the same when the cache is empty:

explain (analyze,verbose,costs,buffers) select sum(n) from demo1 ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1554.00..1554.01 rows=1 width=8) (actual time=13.837..13.837 rows=1 loops=1)
Output: sum(n)
Buffers: shared read=1429
-> Seq Scan on public.demo1 (cost=0.00..1529.00 rows=10000 width=4) (actual time=0.042..12.506 rows=10000 loops=1)
Output: n, a, x
Buffers: shared read=1429
Planning time: 3.754 ms
Execution time: 13.906 ms

The buffers are now ‘shared read’ instead of ‘shared hit’. In Postgres, the number of logical reads, as we know them in Oracle, is the sum of hits and reads. In Oracle, all blocks are counted as logical reads, which includes the smaller set of physical reads.
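As a side note (my own sketch, not from the original post), the same hit/read distinction is visible as cumulative counters in the statistics views, which can be handy when you don't want to re-run the query with EXPLAIN:

-- cumulative buffer cache hits and reads for the table
select heap_blks_read, heap_blks_hit
from pg_statio_user_tables
where relname='demo1';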

IO calls

Here is more about the reads when the block is not in the buffer cache. On Linux, we can trace the system calls to see how those sequential I/Os are implemented.

I get the 'relfilenode':

postgres=# select relname,relnamespace,reltype,relowner,relfilenode,relpages,reltuples from pg_class where relname='demo1';
relname | relnamespace | reltype | relowner | relfilenode | relpages | reltuples
---------+--------------+---------+----------+-------------+----------+-----------
demo1 | 2200 | 42429 | 10 | 42427 | 1429 | 10000

I get the pid of my session process:

select pg_backend_pid();
-[ RECORD 1 ]--+------
pg_backend_pid | 30732

I can trace system calls:

strace -p 30732

And look at the trace concerning my file (identified with its ‘relfilenode’):

30732 open("base/12924/42427", O_RDWR) = 33
30732 lseek(33, 0, SEEK_END) = 11706368
30732 open("base/12924/42427_vm", O_RDWR) = 43
30732 lseek(33, 0, SEEK_END) = 11706368
30732 lseek(33, 0, SEEK_END) = 11706368
30732 lseek(33, 0, SEEK_SET) = 0
30732 read(33, "\4004\220\3 \4 \360\233\30\10\340\227\30\10"..., 8192) = 8192
30732 read(33, "\4004\220\3 \4 \360\233\30\10\340\227\30\10"..., 8192) = 8192
30732 read(33, "\4004\220\3 \4 \360\233\30\10\340\227\30\10"..., 8192) = 8192
... 1429 read(33) in total

We see two open() calls with the relfilenode of my table in the file name: one for the table and one for the visibility map.
The file descriptor for the table file is 33 and I've grepped only the related calls.
The lseek(33,0,SEEK_END) goes to the end of the file (11706368 bytes, which is 11706368/8192=1429 pages).
The lseek(33,0,SEEK_SET) goes to the beginning of the file.
Subsequent read() calls read the whole file, page by page (8192 bytes), in sequential order.

This is how sequential reads are implemented in Postgres: one lseek() and sequential read() calls. The I/O size is always the same (8k here). The benefit of a sequential scan is not larger I/O calls but simply the absence of seek() in between. The optimization is left to the underlying layers: filesystem and read-ahead.

This is very different from Oracle. Not going into the details, here are the kind of system calls you see during the full table scan:

open("/u01/oradata/CDB1A/PDB/users01.dbf", O_RDWR|O_DSYNC) = 9
fcntl(9, F_SETFD, FD_CLOEXEC) = 0
fcntl(9, F_DUPFD, 256) = 258
...
pread(258, "\6\242\2\5\3\276\25%\2\4\24\270\1\313!\1x\25%"..., 1032192, 10502144) = 1032192
pread(258, "\6\242\202\5\3\300\25%\2\4\16\247\1\313!\1x\25%"..., 1032192, 11550720) = 1032192
pread(258, "\6\242\2\6\3\302\25%\2\4x\226\1\313!\1x\25%"..., 417792, 12599296) = 417792

Those are also sequential reads of contiguous blocks, but done with a larger I/O size (126 blocks here). So in addition to the absence of seek() calls, it is optimized to do fewer I/O calls, not relying on the underlying optimization at the OS level.

Oracle can also trace the system calls with wait events, which gives more information about the database calls:

WAIT #140315986764280: nam='db file scattered read' ela= 584 file#=12 block#=1282 blocks=126 obj#=74187 tim=91786554974
WAIT #140315986764280: nam='db file scattered read' ela= 485 file#=12 block#=1410 blocks=126 obj#=74187 tim=91786555877
WAIT #140315986764280: nam='db file scattered read' ela= 181 file#=12 block#=1538 blocks=51 obj#=74187 tim=91786556380

The name 'scattered' is misleading: 'db file scattered read' waits are actually multiblock reads, reading more than one block in one I/O call. Oracle does not rely on the operating system read-ahead, and this is why we can (and should) use direct I/O and asynchronous I/O if the database buffer cache is correctly sized.
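As a hedged side note, the instance parameter that controls this on filesystem datafiles is filesystemio_options (SETALL enables both direct and asynchronous I/O); you can check its current value with:

SQL> show parameter filesystemio_options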

Output and Projection

I’ve run the EXPLAIN with the VERBOSE option which shows the ‘Output’ for each operation, and I’ve done the equivalent in Oracle by adding the ‘+projection’ format in DBMS_XPLAN.

In the Oracle execution plan, we see the columns remaining in the result of each operation, after the projection:

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) SUM("N")[22]
   2 - (rowset=256) "N"[NUMBER,22]

Operation 2, the Full Table Scan, reads all rows with all columns, but selects only the one we need: N.

In the Postgres equivalent, it seems that the Output mentions the columns available before the projection because we see all table columns here:

explain verbose select sum(n) from demo1 ;
QUERY PLAN
-------------------------------------------------------------------------
Aggregate (cost=1554.00..1554.01 rows=1 width=8)
Output: sum(n)
-> Seq Scan on public.demo1 (cost=0.00..1529.00 rows=10000 width=4)
Output: n, a, x

I prefer to see the columns after the projection and I use it a lot in Oracle to know which columns are needed from the table. A great optimization can be done when we have a covering index where all selected columns are present so that we don’t have to go to the table. But we will see that in the next post about Index Only Scan.

 

The article Postgres vs. Oracle access paths I – Seq Scan appeared first on Blog dbi services.

processing associative arrays in loops

Tom Kyte - Tue, 2017-08-01 10:06
Hello Tom, how can I process an associative array in a loop? Because the index is not numeric, a "FOR i in array.First .. array.LAST" raises an exception: DECLARE TYPE string_assarrtype IS TABLE OF VARCHAR2 ( 25 ) INDEX BY VARCHAR2 ( 20...
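For context, here is a minimal sketch (my own, not the original poster's code or Tom's answer) of the usual pattern for a string-indexed associative array: walk it with FIRST/NEXT instead of a numeric FOR loop.

declare
  type string_aat is table of varchar2(25) index by varchar2(20);
  arr string_aat;
  k   varchar2(20);
begin
  arr('alpha') := 'first value';
  arr('beta')  := 'second value';
  k := arr.first;                       -- lowest key, or null if the array is empty
  while k is not null loop
    dbms_output.put_line(k || ' => ' || arr(k));
    k := arr.next(k);                   -- next key in order, or null at the end
  end loop;
end;
/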
Categories: DBA Blogs

What happens when a user cancels a query

Tom Kyte - Tue, 2017-08-01 10:06
Hi Team, I have some doubts can you pls help me clear it out - Backend db is oracle 11g Frontend is Java Q.1 When a user triggers a download report query, and he closes the window immediately does the query get aborted in the backend or i...
Categories: DBA Blogs

Partition Table : unable to create INITIAL extent for segment (ORA-01658)

Tom Kyte - Tue, 2017-08-01 10:06
Hi Team, When i create a Range Partition table and try to insert row into that, I am getting below error: SQL Error: ORA-01658: unable to create INITIAL extent for segment in tablespace 01658. 00000 - "unable to create INITIAL extent for segme...
Categories: DBA Blogs

cpu count question

Tom Kyte - Tue, 2017-08-01 10:06
I was reading http://www.dba-oracle.com/oracle_tips_cpu_count_wrong.htm and was wondering if this is really a bug? Is it really dangerous?
Categories: DBA Blogs

Checkpoint&logswitch -- what is their relationship.

Tom Kyte - Tue, 2017-08-01 10:06
Please Tom, can you help me to understand what happens at log switch time and what happens at checkpoint time, and how one event can determine the other one. Also I am very confused about where the SCN is written: in the datafile header, in the rollback segment ...
Categories: DBA Blogs

What Employers Want : Qualifications

Tim Hall - Tue, 2017-08-01 02:47

The companies and individuals I spoke to tended to split into 3 main groups regarding their attitudes to qualifications. Let’s call them yes, no, maybe. Remember, this question was mostly aimed at their attitudes towards graduate employees, so it doesn’t necessarily relate to their views on certifications. I’ve discussed that subject here.

Yes

Some companies care a lot about qualifications. This can be for several reasons.

  • They consider themselves the best, so they only employ the best.
  • If you don’t have the best grades it “proves” you don’t care about the subject.
  • They get so many applicants they throw away every application that doesn’t meet certain criteria, just to make their lives easier. It’s part of their HR process.

Don’t bother applying to companies like this unless your grades are top notch. You are wasting your time and theirs. No matter how outstanding you think you are, your application is going straight for the bin.

No

Some companies don’t worry so much about qualifications, but maybe for different reasons than you might expect.

  • Grades are irrelevant, it’s the person we care about.
  • Everyone has great grades, so we look for other things.
  • Grades mean nothing. We want experience.
  • What have you done? What apps you’ve built? What open source contributions have you made? Community participation?

This might sound like the easy option at first, but actually most of these companies that “don’t care about qualifications” are harder to get a job with than those that care about grades. They are looking for standout individuals. Chances are the people they are looking for happen to have amazing grades anyway, as well as being well rounded and self-motivated types. When several thousand people apply for that job, what makes you stand out?

Maybe

Most companies consider qualifications as part of the package, but it is not the deciding factor.

What Should You Do?

You should take the time to investigate the companies you are interested in. Contact their HR departments and find out their attitudes. It is incredibly lazy of you to just send an application without doing the research and expect to get an interview. Do some research to check they are the right company for you and you are the right person for them!

Check out the rest of this series here.

Cheers

Tim…

What Employers Want : Qualifications was first posted on August 1, 2017 at 8:47 am.

Postgres unique constraint

Yann Neuhaus - Tue, 2017-08-01 02:11

I'll start a series on Postgres vs. Oracle access paths because I know Oracle and I'm learning Postgres. While preparing it, I came across some surprises, because I'm so used to Oracle that I take some behavior for granted in any SQL database. I recently posted a tweet about one of them, comparing the latest Postgres version to the earliest Oracle version I have on my laptop.
The goal of the tweet was exactly what I said above: show my surprise, using Oracle 7 as a reference because this is the version where I started to learn SQL. And there's no judgment behind this surprise: I can't compare software I've used for more than 20 years with software I'm just learning. I have great admiration for the Oracle design and architecture choices, but I also have great admiration for what the Postgres community is doing.

In my tweet I updated a primary key. I don't think I've ever designed, in real life, a primary key that has to be updated later. For each table we need an immutable key to identify rows for referential integrity constraints, or for replication. The value must be known from the first insert (which means the columns are declared not null) and the value is never updated. It makes sense to use a primary key for that as it is unique and not null.

Actually, a better case would be a simple unique constraint where we just exchange two rows. A real-life example is a list of items, probably having a surrogate key as the primary key, and a unique key including an item number. When the user wants to move an item up, we just run an update on two rows, exchanging their numbers. The unique constraint just ensures that we have only distinct values, so that a select … order by will always return the values in the same order.
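To make that concrete, here is a minimal sketch with a hypothetical items table where (list_id, position) is unique: moving an item up is a single update that swaps two position values.

-- hypothetical schema: unique (list_id, position)
update items
   set position = case position when 2 then 3 else 2 end
 where list_id = 1
   and position in (2, 3);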

All similar cases have the same consequence: when you process row by row the update, the uniqueness may be violated. But at the end of the statement, the constraint is still valid.

Here is the initial example with updating all rows:


create table demo as select generate_series n from generate_series(1,2);
SELECT 2
alter table demo add constraint demo_pk primary key(n);
ALTER TABLE
select * from demo;
n
---
1
2
(2 rows)
 
begin transaction;
BEGIN
update demo set n=n-1;
UPDATE 2
select * from demo;
n
---
0
1
(2 rows)

This works. I’ve inserted the rows in ascending order of n. Decreasing the value doesn’t violate the uniqueness at any time because it reads rows from the beginning to the end.

However, when we increase the value, we have a duplicate value until we process the next row. And by default, Postgres fails:

update demo set n=n+1;
ERROR: duplicate key value violates unique constraint "demo_pk"
DETAIL: Key (n)=(1) already exists.

The bad thing is that the behavior of the application depends on the physical order of the rows and the order in which they are processed. This violates Codd's rule about physical independence. In addition to that, SQL statements are supposed to process a set of rows rather than do low-level row-by-row processing.

But there is also a very good thing: because the constraint is validated row by row, you know which value violates the constraint (here: “DETAIL: Key (n)=(1) already exists” ).

So my statement failed, and in Postgres this seems to fail the whole transaction:

commit;
ROLLBACK

My second surprise is that the failure of one statement cancels the whole transaction. I see no error at commit, but it simply tells me that it has done a rollback instead of the commit.

deferrable

So, I compared with Oracle, where this statement is always successful, because temporary violations that are resolved later, within the same statement, do not violate the constraint. I compared with the oldest version I have on my laptop (Oracle 7.3) to show that this is nothing new: I started with Oracle 7 and it has always behaved this way. And this kind of thing is the reason why I like SQL. Doing the same with a procedural language requires an intermediate update to be sure that there is no duplicate at any time.
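For reference, here is a minimal sketch of that Oracle-side behavior (any recent version behaves the same way as Oracle 7 did): the non-deferrable primary key is only checked at the end of the statement, so shifting all the values up succeeds.

create table demo (n number primary key);
insert into demo select rownum from dual connect by level <= 2;
update demo set n = n + 1;   -- no ORA-00001: uniqueness is checked at end of statement
commit;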

The Postgres community is very responsive, especially when we may think that something works better in Oracle than Postgres (which was not the case here and which was not the goal of my tweet anyway – but tweets are short and may not express the tone properly).

A solution was quickly proposed: a deferred constraint (example in this blog post).

I know deferred constraints in Oracle. They are similar in Postgres and here is the solution proposed:


alter table demo drop constraint demo_pk;
ALTER TABLE
alter table demo add constraint demo_pk primary key(n) deferrable initially deferred;
ALTER TABLE
begin transaction;
BEGIN
update demo set n=n-1;
UPDATE 2
select * from demo;
n
---
0
1
(2 rows)
 
update demo set n=n+1;
UPDATE 2

That seems good. Because the constraint validation is deferred, the update is successful.

However, this is not what I want. I want the previous statement to succeed, but I want the following statement to fail:

insert into demo values(1);
INSERT 0 1

Because the constraint is deferred, this statement is successful, and it is only at commit that it fails:

commit;
ERROR: duplicate key value violates unique constraint "demo_pk"
DETAIL: Key (n)=(1) already exists.

Why do I think this is not the right solution? First, because I want the statement to fail as soon as possible. In addition, I want the commit to be fast. Doing expensive things at commit should be avoided if possible: it is the point where all the work is supposed to be done already and you just want to save it (make it durable and visible to others).
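If you really want to keep the initially deferred constraint, you can at least ask for the check before commit with SET CONSTRAINTS. This is only a sketch on my side, not part of the original test:

begin transaction;
update demo set n=n+1; -- temporary duplicates accepted, the check is deferred
insert into demo values(1); -- accepted for now, but it is a real duplicate
set constraints demo_pk immediate; -- forces the check here: expected to fail with the duplicate key error
commit; -- after the error, this transaction can only roll back

But this relies on the application remembering to run SET CONSTRAINTS, so it is still not what I want.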

deferrable initially immediate

Actually, the solution is to declare the constraint as deferrable, but not deferred.

alter table demo drop constraint demo_pk;
ALTER TABLE
alter table demo add constraint demo_pk primary key(n) deferrable initially immediate;
ALTER TABLE

This says that it is deferrable, but not deferred (unless you decide to set the constraint deferred for your transaction). That way it accepts temporary constraint violations as long as they are resolved by the end of the statement.
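You can verify the constraint attributes in the catalog, and still defer the check explicitly for one transaction when needed. This is a small sketch of mine, not part of the original test:

select conname, condeferrable, condeferred from pg_constraint where conname = 'demo_pk';
-- expected: condeferrable = t (deferrable), condeferred = f (initially immediate)
begin transaction;
set constraints demo_pk deferred; -- deferred only for this transaction
update demo set n=n+1; -- now checked only at commit time for this transaction
rollback; -- rollback here just to leave the demo data unchanged

With the default (immediate) mode kept, let's re-run the test.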

Now, my update statement is successful:

begin transaction;
BEGIN
update demo set n=n-1;
UPDATE 2
select * from demo;
n
---
0
1
(2 rows)
update demo set n=n+1;
UPDATE 2

Any other statement that violates the constraint fails immediately:

insert into demo values(1);
ERROR: duplicate key value violates unique constraint "demo_pk"
DETAIL: Key (n)=(1) already exists.
commit;
ROLLBACK

Documentation

The nice thing is that this is documented! I didn't find it immediately because it is in the 'Compatibility' part of the 'create table' documentation. I'm not yet used to the Postgres documentation. I stopped at the 'DEFERRED' definition, which mentions: 'A constraint that is not deferrable will be checked immediately after every command.'

But later, the Compatibility section adds something more specific about unique constraints:

Non-deferred Uniqueness Constraints
When a UNIQUE or PRIMARY KEY constraint is not deferrable, PostgreSQL checks for uniqueness immediately whenever a row is inserted or modified. The SQL standard says that uniqueness should be enforced only at the end of the statement; this makes a difference when, for example, a single command updates multiple key values. To obtain standard-compliant behavior, declare the constraint as DEFERRABLE but not deferred (i.e., INITIALLY IMMEDIATE). Be aware that this can be significantly slower than immediate uniqueness checking.

That’s another good point. Postgres documentation is clear and gives the right solution. We just have to read it to the end.

A side note for my French-speaking readers: the Postgres documentation has been translated into French by Guillaume Lelarge, who also translated Markus Winand's book and website. In both cases the translation is as good as the original.

Performance

The documentation mentions 'significantly slower'. Here is a test on 100000 rows with a non-deferrable constraint:

create table demo as select generate_series n from generate_series(1,100000);
SELECT 100000
alter table demo add constraint demo_pk primary key(n);
ALTER TABLE
vacuum demo;
VACUUM
select * from pgstatindex('demo_pk');
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
2 | 1 | 2260992 | 3 | 1 | 274 | 0 | 0 | 89.83 | 0
(1 row)

Here is the update n=n-1 where all rows are updated but none violates the constraint at any time:

explain (analyze,verbose,costs,buffers)update demo set n=n-1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
Update on public.demo (cost=0.00..1693.00 rows=100000 width=10) (actual time=425.699..425.699 rows=0 loops=1)
Buffers: shared hit=578646 read=1202 dirtied=1267
-> Seq Scan on public.demo (cost=0.00..1693.00 rows=100000 width=10) (actual time=0.013..16.186 rows=100000 loops=1)
Output: (n - 1), ctid
Buffers: shared hit=443

This update has read 578646+1202=579848 buffers.

Now creating the deferrable constraint:

alter table demo drop constraint demo_pk;
ALTER TABLE
alter table demo add constraint demo_pk primary key(n) deferrable initially immediate;
ALTER TABLE
vacuum demo;
VACUUM
select * from pgstatindex('demo_pk');
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
2 | 1 | 2260992 | 3 | 1 | 274 | 0 | 0 | 89.83 | 0

And do the n=n+1 update:

explain (analyze,verbose,costs,buffers)update demo set n=n+1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
Update on public.demo (cost=0.00..2135.00 rows=100000 width=10) (actual time=481.868..481.868 rows=0 loops=1)
Buffers: shared hit=679405 read=760 dirtied=825
-> Seq Scan on public.demo (cost=0.00..2135.00 rows=100000 width=10) (actual time=0.268..16.329 rows=100000 loops=1)
Output: (n + 1), ctid
Buffers: shared hit=885
Planning time: 0.237 ms
Trigger PK_ConstraintTrigger_75314 for constraint demo_pk: time=174.976 calls=99999
Execution time: 663.799 ms

This reads more buffers, and we can see that an internal trigger (PK_ConstraintTrigger_75314) has been run to re-check the unique constraint at the end of the statement. But it is only about 17% more here, for this special case where all rows are updated.

However, a more realistic test case exchanging only two values is much cheaper:


explain (analyze,verbose,costs,buffers) update demo set n=case when n=2 then 2000 when n=2000 then 2 end where n in (2,2000);
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Update on public.demo (cost=8.85..16.60 rows=2 width=10) (actual time=0.079..0.079 rows=0 loops=1)
Buffers: shared hit=23
-> Bitmap Heap Scan on public.demo (cost=8.85..16.60 rows=2 width=10) (actual time=0.016..0.055 rows=2 loops=1)
Output: CASE WHEN (n = 2) THEN 2000 WHEN (n = 2000) THEN 2 ELSE NULL::integer END, ctid
Recheck Cond: (demo.n = ANY ('{2,2000}'::integer[]))
Heap Blocks: exact=3
Buffers: shared hit=9
-> Bitmap Index Scan on demo_pk (cost=0.00..8.85 rows=2 width=0) (actual time=0.009..0.009 rows=4 loops=1)
Index Cond: (demo.n = ANY ('{2,2000}'::integer[]))
Buffers: shared hit=6
Planning time: 0.137 ms
Trigger PK_ConstraintTrigger_75322 for constraint demo_pk: time=0.005 calls=1
Execution time: 0.120 ms

In my opinion, the overhead here is totally acceptable, especially given the fact that this re-check displays exactly which value violates the constraint in case there is a duplicate.

But I'm going too fast here. I haven't even started my blog series about access paths, where I'll explain the cost of the execution plans, starting from the simplest: Seq Scan. Follow my blog or Twitter to stay informed. There will be nothing about 'which is better, Oracle or Postgres?'. But I'm convinced that knowing the differences helps to understand how each works, and to design an application that behaves correctly if ported from one to the other.

 

The article Postgres unique constraint appeared first on Blog dbi services.

Postgres vs. Oracle access paths – intro

Yann Neuhaus - Tue, 2017-08-01 00:00

This is the start of a series on PostgreSQL execution plans, access paths, join methods, hints and execution statistics. The approach will compare Postgres and Oracle. It is not a comparison to see which one is better, but rather to see what is similar and where the approaches diverge. I have long experience of reading Oracle execution plans and no experience at all with Postgres. This is my way to learn and share what I learn. You will probably be interested if you are in the same situation: an Oracle DBA wanting to learn about Postgres. But you may also be an experienced Postgres DBA who wants to see a different point of view from a different 'culture'.

I’ll probably use the Oracle terms more often as I’m more familiar with them: blocks for pages, optimizer for query planner, rows for tuples, tables for relations…

Please don't hesitate to comment on the blog posts or through Twitter (@FranckPachot) if you find mistakes in my Postgres interpretation. I tend to verify any assumption in the same way I do with Oracle: the documented behavior and the test result should match. My tests should be fully reproducible (using Postgres 9.6.2 here with all defaults). But, as I said above, I don't have the same experience with Postgres as I have with Oracle when interpreting execution statistics.

Postgres

I'm using the latest versions here: Postgres 9.6.2 (the one I installed here).
I’ve installed pg_hint_plan to be able to control the execution plan with hints. This is mandatory when doing some research. In order to understand an optimizer (query planner) choice, we need to see the estimated cost for different possibilities. Most of my tests will be done with: EXPLAIN (ANALYZE,VERBOSE,COSTS,BUFFERS)

fpa=# explain (analyze,verbose,costs,buffers) select 1;
 
QUERY PLAN
------------------------------------------------------------------------------------
Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)
Output: 1
Planning time: 0.060 ms
Execution time: 0.036 ms
(4 rows)
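As for the hints mentioned above, here is a minimal sketch of how a pg_hint_plan hint looks (the demo table is hypothetical here; the hint is read from the comment at the head of the query text):

load 'pg_hint_plan';
/*+ SeqScan(demo) */ explain (analyze,verbose,costs,buffers) select * from demo where n=1;

Real examples, with the resulting plans, will come in the posts of the series.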

I may go further with Unix tools (like strace to see the system calls).

Oracle

I’m using Oracle 12.2 here and the tests are done by running the statement after setting ALTER SESSION SET STATISTICS_LEVEL=ALL and displaying the execution plan with DBMS_XPLAN:
select * from dbms_xplan.display_cursor(format=>'+cost allstats last -plan_hash +projection');
Note that in older Oracle versions you need to call dbms_xplan through the table() function:
select * from table(dbms_xplan.display_cursor(format=>'+cost allstats last -plan_hash +projection'));
Example:

SQL> set arraysize 5000 linesize 150 trimspool on pagesize 1000 feedback off termout off
SQL> alter session set statistics_level=all;
SQL> select 1 from dual;
SQL> set termout on
SQL> select * from dbms_xplan.display_cursor(format=>'+cost allstats last -plan_hash +projection');
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 520mkxqpf15q8, child number 0
-------------------------------------
select 1 from dual
--------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 (100)| 1 |00:00:00.01 |
| 1 | FAST DUAL | | 1 | 1 | 2 (0)| 1 |00:00:00.01 |
--------------------------------------------------------------------------------------

I’ll probably never compare the execution time, as this depends on the system and makes no sense on artificial small examples. But I’ll try to compare all other statistics: estimated cost, the actual number of pages/blocks read, etc.

Table of contents

I’ll update (or rather insert /*+ append */) the links to the series posts as soon as they are published.

  1. Postgres vs. Oracle access paths I – Seq Scan
  2. Postgres vs. Oracle access paths II – Index Only Scan
 

The article Postgres vs. Oracle access paths – intro appeared first on Blog dbi services.

PostgreSQL on Cygwin

Yann Neuhaus - Mon, 2017-07-31 23:00

I run my laptop with Windows 10 for office programs, and VirtualBox machines with Linux for the big stuff (Oracle databases). I also have Cygwin installed on Windows for GNU programs. I wanted to quickly install PostgreSQL and, rather than installing it in a Linux VM or as a Windows program, I installed the Cygwin version of it. Here is how.

Cygwin

Cygwin is easy to install: just run the setup-x86_64.exe from https://www.cygwin.com/ and choose the packages you want to install. Here is what is related to PostgreSQL:
[Screenshot: Cygwin setup with the PostgreSQL-related packages selected]

Note that if you want to install Postgres extensions you may need pg_config, and then you need to install libpq-devel in addition to postgresql-devel, as well as gcc and make. Those are not displayed in the screenshot above, but if you don't have them you may get something like the following when installing an extension:
pg_config: Command not found

Of course, PostgreSQL is Open Source and you can also compile it yourself.

Cygserver

Cygwin can run daemons through a Windows service (Cygserver) and you need to set it up if not already done. For this step, you will need to run the Cygwin Terminal as Administrator.
fpa@dell-fpa ~
$ /usr/bin/cygserver-config
Overwrite existing /etc/cygserver.conf file? (yes/no) yes
Generating /etc/cygserver.conf file
 
Warning: The following function requires administrator privileges!
 
Do you want to install cygserver as service?
(Say "no" if it's already installed as service) (yes/no) yes
 
The service has been installed under LocalSystem account.
To start it, call `net start cygserver' or `cygrunsrv -S cygserver'.
 
Further configuration options are available by editing the configuration
file /etc/cygserver.conf. Please read the inline information in that
file carefully. The best option for the start is to just leave it alone.
 
Basic Cygserver configuration finished. Have fun!

You start this service like any Windows service:

fpa@dell-fpa ~
$ net start cygserver
The CYGWIN cygserver service is starting.
The CYGWIN cygserver service was started successfully.

You can check that the service is running:

fpa@dell-fpa ~
$ cygstart services.msc

[Screenshot: Windows Services console showing the CYGWIN cygserver service running]

PostgreSQL database cluster

Here is the creation of the PostgreSQL database cluster.
fpa@dell-fpa ~
$ /usr/sbin/initdb -D /usr/share/postgresql/data
The files belonging to this database system will be owned by user "fpa".
This user must also own the server process.
 
The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".
 
Data page checksums are disabled.
 
creating directory /usr/share/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 30
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
 
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
 
Success. You can now start the database server using:
 
/usr/sbin/pg_ctl -D /usr/share/postgresql/data -l log.txt start

Start PostgreSQL database server

I add my network to /usr/share/postgresql/data/pg_hba.conf:

# IPv4 local connections:
host all all 127.0.0.1/32 trust
host all all 192.168.78.0/24 trust

I define the interface and port on which the server listens in /usr/share/postgresql/data/postgresql.conf:

listen_addresses = 'localhost,192.168.78.1' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 30 # (change requires restart)

Now ready to start the PostgreSQL server:
fpa@dell-fpa ~
$ /usr/sbin/pg_ctl -D /usr/share/postgresql/data -l log.txt start
server starting

Username

My Windows username is 'FPA', and so is the Cygwin user that started the database server. I check that I can connect to the maintenance database with this user:

fpa@dell-fpa ~
$ psql -U fpa postgres
psql (9.6.2)
Type "help" for help.
 
postgres=# \du
 
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
fpa | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 
postgres=# quit

PgAdmin

As I am on Windows, I install the graphical console PgAdmin and set up the connection to this database:
[Screenshot: PgAdmin connection setup]

SQL Developer

As an Oracle fan, I prefer to connect with SQL Developer. Just download the JDBC driver for PostgreSQL: https://jdbc.postgresql.org/download.html

In SQL Developer you can declare this .jar from Tools -> Preferences -> Third Party JDBC Drivers

[Screenshot: SQL Developer Third Party JDBC Drivers preferences with the PostgreSQL .jar added]

And create the connection with the new ‘PostgreSQL’ tab:

[Screenshot: SQL Developer new connection using the PostgreSQL tab]
Then, with 'Choose Database', you can fill the drop-down list and choose the database you want to connect to.

As I have no database with the same name as the username, I have to append the database name to the hostname, suffixed with '?', to get the proper JDBC URL. What you put in the drop-down list is then ignored. I don't really know the reason, but this is how I got the correct URL.

[Screenshot: SQL Developer connection with the database name appended to the hostname]
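An alternative that would probably avoid this trick (an assumption on my side, I did not test it) is simply to create a database with the same name as the user, so that the driver's default database exists:

-- untested assumption: with a database named like the connecting user,
-- the connection should work without appending the database name to the hostname
create database fpa;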

Extensions

You can install extensions. For example, I’ve installed pg_hint_plan to be able to hint the access path and join methods.

wget https://osdn.net/dl/pghintplan/pg_hint_plan96-1.2.1.tar.gz
tar -zxvf pg_hint_plan96-1.2.1.tar.gz
cd pg_hint_plan96-1.2.1
make
make install

And I’m now able to load it:

$ psql
psql (9.6.2)
Type "help" for help.
 
fpa=# load 'pg_hint_plan';
LOAD

But Why?

You may wonder why I don't install it directly on Linux. My laptop is on Windows and, of course, I have a lot of VirtualBox VMs. But this way I don't need to start a VM.
You may also wonder why I don't install the Windows version. I want to investigate the Linux-like behaviour, and I may want to trace the postgres processes. For example, Cygwin has a strace.exe which shows output similar to strace on Linux. Here are the I/O calls from a full table scan (Seq Scan):
[Screenshot: strace.exe output of the postgres backend during a Seq Scan]
I can see that postgres sequential reads are done with one lseek() followed by sequential 8k read() calls.

This was simple. Just get the pid of the session process:

fpa=# select pg_backend_pid();
pg_backend_pid
----------------
11960

and strace it:

$ strace -p 11960

I did all that in about one hour: download, install, set up, and write this blog post. Without any virtual machine, you can have a Unix-like Postgres database server running on Windows.

 

The article PostgreSQL on Cygwin appeared first on Blog dbi services.

Error ora-01640 making tablespace read only without active transaction

Tom Kyte - Mon, 2017-07-31 15:46
Hi Tom, We are trying to put a tablespace into read-only mode and getting error ORA-01640. There is no transaction, as the database was stopped and started in normal mode. Only one session was established, from which the tablespace read-only statement was executed....
Categories: DBA Blogs

How to create FIFO report based on evaluation system

Tom Kyte - Mon, 2017-07-31 15:46
I want to implement a FIFO stock model in my inventory. I face some problems, but before starting the query I will describe the physical structure of the table. <code>create table purchase_mas ( stock_id number pur_date date, product_code number...
Categories: DBA Blogs

PLS-00302 using XMLTABLE

Tom Kyte - Mon, 2017-07-31 15:46
I'm getting PLS-00302 when trying to select from a chunk of XML using XMLTYPE and XMLTABLE. This works, and I get the desired output... <code> SELECT xt.* FROM (SELECT xmltype ( '<TABLE> <TR><TD>Organization</TD><TD>Organization Title</TD>...
Categories: DBA Blogs

How to do a EXP full excluding some schemas?

Tom Kyte - Mon, 2017-07-31 15:46
Hello Oracle Masters, Thanks a lot for taking time to answer questions. I know that it is possible to do an export full datapump and use the exclude clause. Since I have space limitations, I'm using original export with exp_pipe. It's possi...
Categories: DBA Blogs

loading jar files into Oracle

Tom Kyte - Mon, 2017-07-31 15:46
Could you please provide an example that 1) Loads a jar file into oracle [any permissions and path variables that needs to be set-up using DBAs help] 2) Load a class that uses the above jar file 3) Create a PL/SQL procedure or function that uses ...
Categories: DBA Blogs

How to Compile Code on all Connections

Tom Kyte - Mon, 2017-07-31 15:46
Hi, Scenario: We are using 6 different schemas and we are compiling codes on all of them. So first, I connect to those schemas, then when I need to compile my codes, what i do is: Select Schema, Compile, Repeat. I was wondering if there's a tec...
Categories: DBA Blogs
