Feed aggregator

Oracle VBCS - Pay As You Go Cloud Model Experience Explained

Andrejus Baranovski - Thu, 2018-07-19 14:03
If you are considering starting to use the VBCS cloud service from Oracle, maybe this post will be useful. I will share my experience with the Pay As You Go model.

Two payment models are available:

1. Pay As You Go - good when accessing VBCS from time to time. Can be terminated at any time
2. Monthly Flex - good when you need to run VBCS 24/7. Requires a commitment and can't be terminated at any time

When you create an Oracle Cloud account, you initially get a 30-day free trial period. At the end of that period (or earlier), you can upgrade to a billable plan. To upgrade, go to account management and choose to upgrade the promotional offer - you will be given the choice of Pay As You Go or Monthly Flex:


As soon as you upgrade to Pay As You Go, you will start seeing the monthly usage amount in the dashboard. It also shows the hourly usage of the VBCS instance, for which you will be billed:


Click on the monthly usage amount to see a detailed view of each service's billing. When the VBCS instance is stopped (in the case of Pay As You Go), you are billed only for hardware storage (Compute Classic), which is a relatively small amount:


There are two options for creating a VBCS instance - either autonomous VBCS or customer-managed VBCS. To be able to stop/start the VBCS instance and avoid billing when the instance is not used (in the case of Pay As You Go), make sure to go with customer-managed VBCS. In this example, the VBCS instance was used for only 1 hour and then stopped; it can be started again at any time:


To manage the VBCS instance, navigate to the Oracle Cloud Stack UI. From here you can start/stop both the DB and VBCS in a single action. It is not enough to stop VBCS - make sure to stop the DB too, if you are not using it:

Playing With Service Relocation 12c

Michael Dinh - Thu, 2018-07-19 09:14
With 12c, use the verbose option (-v) to display the services running.
[oracle@racnode-dc1-1 rac_relocate]$ srvctl -V
srvctl version: 12.1.0.2.0

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status instance -d hawk -i hawk1 -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status instance -d hawk -i hawk2 -v
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

There is an option to provide a comma-delimited list of services to check their status.
Unfortunately, that option is not available for relocation, which I fail to understand.
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status service -d hawk -s "p11,p12,p13,p14"
Service p11 is running on instance(s) hawk1
Service p12 is running on instance(s) hawk1
Service p13 is running on instance(s) hawk1
Service p14 is running on instance(s) hawk1

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status service -d hawk -s "p21,p22,p23,p24,p25"
Service p21 is running on instance(s) hawk2
Service p22 is running on instance(s) hawk2
Service p23 is running on instance(s) hawk2
Service p24 is running on instance(s) hawk2
Service p25 is running on instance(s) hawk2

Puzzled that status for services is able to use a delimited list whereas relocation is not.
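
Until then, a minimal workaround (a sketch using the service names and the exact relocate syntax shown later in this post) is to loop over the list yourself, since srvctl relocate service accepts only one service at a time:

for s in p11 p12 p13 p14
do
srvctl relocate service -d hawk -service $s -oldinst hawk1 -newinst hawk2
done

The scripts below automate exactly this, extracting the service list from the srvctl status output.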

I have blogged about new features for service failover: 12.1 Improved Service Failover

Another test shows that it’s working as it should be.

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 ~]$ srvctl stop instance -d hawk -instance hawk1 -failover

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$


[root@racnode-dc1-1 ~]# crsctl stop crs
[root@racnode-dc1-1 ~]# crsctl start crs


[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$

[oracle@racnode-dc1-1 ~]$ srvctl start database -d hawk

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$

However, the requirement here is to relocate services rather than fail them over.

Here are the scripts and a demo for that.

Note that the scripts only work for a two-node RAC where each service runs on one instance only.

[oracle@racnode-dc1-1 ~]$ srvctl config service -d hawk |egrep 'Service name|instances'
Service name: p11
Preferred instances: hawk1
Available instances: hawk2
Service name: p12
Preferred instances: hawk1
Available instances: hawk2
Service name: p13
Preferred instances: hawk1
Available instances: hawk2
Service name: p14
Preferred instances: hawk1
Available instances: hawk2
Service name: p21
Preferred instances: hawk2
Available instances: hawk1
Service name: p22
Preferred instances: hawk2
Available instances: hawk1
Service name: p23
Preferred instances: hawk2
Available instances: hawk1
Service name: p24
Preferred instances: hawk2
Available instances: hawk1
Service name: p25
Preferred instances: hawk2
Available instances: hawk1
[oracle@racnode-dc1-1 ~]$
DEMO:
[oracle@racnode-dc1-1 rac_relocate]$ ls *relocate*.sh
relocate_service.sh  validate_relocate_service.sh

[oracle@racnode-dc1-1 rac_relocate]$ ls *restore*.sh
restore_service_instance1.sh  restore_service_instance2.sh
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ SAVE SERVICES LOCATION AND PREVENT ACCIDENTAL OVERWRITE
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v > /tmp/service.org

[oracle@racnode-dc1-1 rac_relocate]$ chmod 400 /tmp/service.org; ll /tmp/service.org; cat /tmp/service.org
-r-------- 1 oracle oinstall 222 Jul 18 14:54 /tmp/service.org
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v > /tmp/service.org
-bash: /tmp/service.org: Permission denied
[oracle@racnode-dc1-1 rac_relocate]$

	
========================================================================
+++++++ RELOCATE SERVICES FROM INSTANCE 1 TO 2

Validate is similar to RMAN validate:
no relocation is performed; the relocate commands are only displayed for verification.
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ ./validate_relocate_service.sh
./validate_relocate_service.sh: line 4: 1: ---> USAGE: ./validate_relocate_service.sh -db_unique_name -oldinst# -newinst#

[oracle@racnode-dc1-1 rac_relocate]$ ./validate_relocate_service.sh hawk 1 2
+ OUTF=/tmp/service_1.conf
+ srvctl status instance -d hawk -instance hawk1 -v
+ ls -l /tmp/service_1.conf
-rw-r--r-- 1 oracle oinstall 109 Jul 18 14:59 /tmp/service_1.conf
+ cat /tmp/service_1.conf
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
+ set +x

**************************************
***** SERVICES THAT WILL BE RELOCATED:
**************************************
srvctl relocate service -d hawk -service p11 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p12 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p13 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p14 -oldinst hawk1 -newinst hawk2


[oracle@racnode-dc1-1 rac_relocate]$ ./relocate_service.sh hawk 1 2
-rw-r--r-- 1 oracle oinstall 109 Jul 18 15:00 /tmp/service_1.conf
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
+ srvctl relocate service -d hawk -service p11 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p12 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p13 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p14 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl status instance -d hawk -instance hawk1 -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
+ srvctl status instance -d hawk -instance hawk2 -v
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ set +x
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ RELOCATE SERVICES FROM INSTANCE 2 TO 1
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ ./relocate_service.sh hawk 2 1
-rw-r--r-- 1 oracle oinstall 129 Jul 18 15:02 /tmp/service_2.conf
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ srvctl relocate service -d hawk -service p11 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p12 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p13 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p14 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p21 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p22 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p23 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p24 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p25 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl status instance -d hawk -instance hawk2 -v
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.
+ srvctl status instance -d hawk -instance hawk1 -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ set +x
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ RESTORE SERVICES FOR INSTANCE
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ ./restore_service_instance2.sh
./restore_service_instance2.sh: line 4: 1: ---> USAGE: ./restore_service_instance2.sh -db_unique_name

[oracle@racnode-dc1-1 rac_relocate]$ ./restore_service_instance2.sh hawk
+ srvctl relocate service -d hawk -service p21 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p22 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p23 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p24 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p25 -oldinst hawk1 -newinst hawk2
+ set +x

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 rac_relocate]$
CODE:


========================================================================
+++++++ validate_relocate_service.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OLD=${2:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
NEW=${3:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
set -x
OUTF=/tmp/service_${OLD}.conf
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v > $OUTF
ls -l $OUTF;cat $OUTF
set +x
# Field 11 of the srvctl status output is the comma-delimited service list;
# the second awk strips its trailing period.
export svc=`tail -1 $OUTF | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
echo
echo "**************************************"
echo "***** SERVICES THAT WILL BE RELOCATED:"
echo "**************************************"
# Loop over the services and just echo the relocate commands (no relocation here).
for s in ${svc}
do
echo "srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}${OLD} -newinst ${DB}${NEW}"
done
exit

========================================================================
+++++++ relocate_service.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OLD=${2:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
NEW=${3:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OUTF=/tmp/service_${OLD}.conf
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v > $OUTF
ls -l $OUTF;cat $OUTF
# Extract the comma-delimited service list (field 11) and strip its trailing period.
export svc=`tail -1 $OUTF | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
# Relocate each service one at a time - srvctl does not accept a list here.
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}${OLD} -newinst ${DB}${NEW}
set +x
done
set -x
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v
srvctl status instance -d ${DB} -instance ${DB}${NEW} -v
set +x
exit

========================================================================
+++++++ restore_service_instance1.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name"}
# Instance 1's original services are on the first line of the saved /tmp/service.org.
export svc=`head -1 /tmp/service.org | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}2 -newinst ${DB}1
set +x
done
exit

========================================================================
+++++++ restore_service_instance2.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name"}
# Instance 2's original services are on the last line of the saved /tmp/service.org.
export svc=`tail -1 /tmp/service.org | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}1 -newinst ${DB}2
set +x
done
exit

How to install Docker Enterprise Edition on CentOS 7?

Yann Neuhaus - Thu, 2018-07-19 07:54

In this blog we are going to see how to install the Docker EE trial edition on CentOS 7 hosts. As you may or may not know, Docker has two editions: Docker Community Edition (CE) and Docker Enterprise Edition (EE). To make it simple, let's say that Docker EE is designed for production environments. More info here.

 

This will be our architecture:

  • 1 manager node
    • hostname: docker-ee-manager1
  • 1 worker node + Docker Trust Registry (DTR) node
    • hostname: docker-ee-worker1

Both nodes should be in the same network range.

We will assume that CentOS 7 is already installed on all hosts:

[root@docker-ee-manager1 ~] cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

 

[root@docker-ee-worker1 ~]$ cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

 

 Create docker user and group
[root@docker-ee-manager1 ~]$ groupadd docker
[root@docker-ee-manager1 ~]$ useradd -g docker docker
[root@docker-ee-manager1 ~]$ echo "docker ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@docker-ee-manager1 ~]$ su - docker

Do the same on worker

[root@docker-ee-worker1 ~]$ groupadd docker
[root@docker-ee-worker1 ~]$ useradd -g docker docker
[root@docker-ee-worker1 ~]$ echo "docker ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@docker-ee-worker1 ~]$ su - docker

 

 

Get the Docker URL for installing Docker EE

Then you need to go to this link; make sure that you already have a Docker account - it's free and you can create one very quickly.

 

 

Fill in the form and you will have access to this:

Copy the URL and save the license key in a safe location; you will need them later.

 

1. Configure Docker URL
[docker@docker-ee-manager1 ~]$ export DOCKERURL="<YOUR_LINK>"
[docker@docker-ee-manager1 ~]$ sudo -E sh -c 'echo "$DOCKERURL/centos" > /etc/yum/vars/dockerurl'

 

2. Install the required packages
[docker@docker-ee-manager1 ~]$ sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2

 

3. Add the Docker-EE repository
[docker@docker-ee-manager1 ~]$ sudo -E yum-config-manager \
--add-repo \
"$DOCKERURL/centos/docker-ee.repo"

 

4. Install docker-ee package
[docker@docker-ee-manager1 ~]$ sudo yum -y install docker-ee
[docker@docker-ee-manager1 ~]$ sudo systemctl enable docker.service
[docker@docker-ee-manager1 ~]$ sudo systemctl start docker.service
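
Before moving on, a quick sanity check (not part of the original steps) confirms the engine is up:

[docker@docker-ee-manager1 ~]$ sudo docker version
[docker@docker-ee-manager1 ~]$ sudo docker run hello-world

hello-world pulls a tiny test image from Docker Hub and prints a confirmation message if everything works.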

 

Repeat steps 1 to 4 on the worker1 node

 

Install UCP on manager

It's a simple command - just run this on your manager:

 

[docker@docker-ee-manager1 ~]$ docker container run --rm -it --name ucp   -v /var/run/docker.sock:/var/run/docker.sock   docker/ucp:3.0.2 install   --host-address <YOUR_IP>   --interactive
INFO[0000] Your engine version 17.06.2-ee-15, build 64ddfa6 (3.10.0-514.el7.x86_64) is compatible with UCP 3.0.2 (736cf3c)
Admin Username: admin
Admin Password:
Confirm Admin Password:
WARN[0014] None of the hostnames we'll be using in the UCP certificates [docker-ee-manager1 127.0.0.1 172.17.0.1 <YOUR_IP>] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
INFO[0000] Found existing UCP config com.docker.ucp.config-2
Do you want to proceed with the install with config com.docker.ucp.config-2? (y/n): y
y
INFO[0032] Installing UCP with host address 10.29.14.101 - If this is incorrect, please specify an alternative address with the '--host-address' flag
INFO[0032] Deploying UCP Service... (waiting for all 2 nodes to complete)
INFO[0083] Installation completed on docker-ee-manager1 (node uvzvuefehznf22k4wa5zg9cy1)
INFO[0083] Installation completed on docker-ee-worker1 (node z7gq7z3336jnwcyojyqq1h3wa)
INFO[0083] UCP Instance ID: x0fg0phnkgzm5730thoncucn2
INFO[0083] UCP Server SSL: SHA-256 Fingerprint=E6:2F:38:69:5D:26:A8:06:D3:8B:11:69:D9:DC:3A:77:CE:16:EA:23:9C:D0:D8:8F:34:D6:97:9D:4B:D2:E2:D2
INFO[0083] Login to UCP at https://<YOUR_IP>:443
INFO[0083] Username: admin
INFO[0083] Password: (your admin password)

If there is an insecure-connection message, ignore it and accept the exception. We can see the UCP admin interface. Enter your credentials and upload your license key.

Adding a worker node

In the UCP interface, go to Nodes and click on Add Node. Then tell UCP that you want to deploy a new worker node and copy the command displayed. Connect to the worker and run this command to join it to the cluster:

[docker@docker-ee-worker1 ~]$ docker swarm join --token SWMTKN-1-4kt4gyk00n69tiywlzhst8dwsgo4oblylnsl1aww2048isi44u-7j9hmcrsn3lr048yu30xlnsv7 <IP_OF_MANAGER>:2377
This node joined a swarm as a worker.

 

Now, we have two nodes: one manager and one worker.
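
You can also verify it from the manager (a quick check, not shown in the original post):

[docker@docker-ee-manager1 ~]$ docker node ls

Both docker-ee-manager1 and docker-ee-worker1 should be listed, with the manager flagged as Leader.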

 


 

Install Docker Trusted Registry

 

Docker EE includes a DTR (Docker Trusted Registry), which is a secure registry where you can store your Docker images. The DTR will be installed on the worker node; it's not recommended to install it on a manager node.

To install it, you just need to run this command:

[docker@docker-ee-worker1 ~]$ docker run -it --rm docker/dtr install --ucp-node docker-ee-worker1 --ucp-url https://<IP_OF_MANAGER> --ucp-username admin --ucp-password <YOUR_PASSWORD> --ucp-ca "-----BEGIN CERTIFICATE-----
MIIBggIUJ+Y+MFXH1XcyJnCU4ACq26v5ZJswCgYIKoZIzj0EAwIw
HTEbMBkGA1UEAxMSVUNQIENsaWVudCBSb290IENBMB4XDTE4MDcxOTA4MjEwMFoX
DTIzMDcxODA4MjEwMFowHTEbMBkGA1UEAxMSVUNQIENsaWVudCBSb290IENBMFkw
EwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEDJxHOIhHoV4NBZGnEQClFShjQfpoL5mQ
LH7E6x6GL4AexYtdWgGIcOlV2NXQpdadBK9cZG2z6r7+zwCj7EP/iqNFMEMwDgYD
VR0P7ojp1CIMAoGCCqGSM49BAMCA0gAMEUCIQDqbBiCqXgFdtIb6uP9
EdDTI1YGWn97AFPU+YJ9s1/CSAIgBsqIn1v7BVNjJ3AeUQfo1d8Kfc//ZwHYr4XW
uWIHmkM=
-----END CERTIFICATE-----"

You can find the certificate here: https://<IP_OF_MANAGER>/manage/settings/certs


Then go to the DTR URL, which is https://<IP_OF_WORKER>, and enter your credentials.

 


 

 

 

Here we are!

 


Congratulations, you have just installed Docker EE. Hope this helps :-)

 

 

The article How to install Docker Enterprise Edition on CentOS 7? appeared first on Blog dbi services.

Google Cloud Spanner – inserting data

Yann Neuhaus - Thu, 2018-07-19 04:17

In a previous post I created a Google Cloud Spanner database and inserted a few rows from the GUI. This is definitely not a solution for many rows, so here is a post about using the command line.

If I start the Google Cloud Shell from the icon on the Spanner page for my project, everything is set. But if I run it from elsewhere, using https://console.cloud.google.com/cloudshell as I did in A free persistent Google Cloud service with Oracle XE, I have to set the project:

franck_pachot@cloudshell:~$ gcloud config set project superb-avatar-210409
Updated property [core/project].
franck_pachot@superb-avatar-210409:~$

Instance

I create my Spanner instance with 3 nodes across the world:
franck_pachot@superb-avatar-210409:~$ time gcloud spanner instances create franck --config nam-eur-asia1 --nodes=3 --description Franck
Creating instance...done.
 
real 0m3.940s
user 0m0.344s
sys 0m0.092s

Database

and Spanner database – created in 6 seconds:

franck_pachot@superb-avatar-210409:~$ time gcloud spanner databases create test --instance=franck
Creating database...done.
real 0m6.832s
user 0m0.320s
sys 0m0.128s

Table

The DDL for table creation can also be run from there:

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases ddl update test --instance=franck --ddl='create table DEMO1 ( ID1 int64, TEXT string(max) ) primary key (ID1)'
DDL updating...done.
'@type': type.googleapis.com/google.protobuf.Empty

I’m now ready to insert one million rows. Here is my table:

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases ddl describe test --instance=franck
--- |-
CREATE TABLE DEMO1 (
  ID1 INT64,
  TEXT STRING(MAX),
) PRIMARY KEY(ID1)

Insert

The gcloud command line has a limited insert possibility:

franck_pachot@superb-avatar-210409:~$ time for i in $(seq 1 1000000) ; do gcloud beta spanner rows insert --table=DEMO1 --database=test --instance=franck --data=ID1=${i},TEXT=XXX${i} ; done
commitTimestamp: '2018-07-18T11:09:45.065684Z'
commitTimestamp: '2018-07-18T11:09:50.433133Z'
commitTimestamp: '2018-07-18T11:09:55.752857Z'
commitTimestamp: '2018-07-18T11:10:01.044531Z'
commitTimestamp: '2018-07-18T11:10:06.285764Z'
commitTimestamp: '2018-07-18T11:10:11.106936Z'
^C

Ok, let's stop there. Calling the service for each row is not efficient, with a latency of about 5 seconds per insert.

API

I’ll use the API from Python. Basically, a connection is a Spanner Client:

franck_pachot@superb-avatar-210409:~$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.cloud import spanner
>>> spanner_client = spanner.Client()
>>> instance = spanner_client.instance('franck')
>>> database = instance.database('test')
>>>

Batch Insert

With this I can send a batch of rows to insert. Here is the full Python script I used to insert one million rows, in batches of 1000 rows:

from google.cloud import spanner

spanner_client = spanner.Client()
instance = spanner_client.instance('franck')
database = instance.database('test')
# 1000 batches of 1000 rows = 1 million rows
for j in range(1000):
    records = []
    for i in range(1000):
        records.append((1 + j * 1000 + i, u'XXX' + str(i)))
    with database.batch() as batch:
        batch.insert(table='DEMO1', columns=('ID1', 'TEXT',), values=records)

This takes under 3 minutes:

franck_pachot@superb-avatar-210409:~$ time python3 test.py
 
real 2m52.707s
user 0m21.776s
sys 0m0.668s
franck_pachot@superb-avatar-210409:~$

If you remember my list of blogs on Variations on 1M rows insert, that's not so fast. But remember that the rows are distributed across 3 nodes in 3 continents, and here, inserting constantly increasing values sends all batched rows to the same node. The PRIMARY KEY in Google Spanner is not only there to declare a constraint; it also determines the physical organization of the data.
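
If insert throughput matters, a common way to avoid this hotspot (a sketch I did not test here; the DEMO1S table and the shard count of 16 are hypothetical) is to add a non-monotonic leading key component, computed client-side, e.g. as MOD(ID1, 16):

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases ddl update test --instance=franck --ddl='create table DEMO1S ( SHARD int64, ID1 int64, TEXT string(max) ) primary key (SHARD, ID1)'

Consecutive inserts are then spread over up to 16 key ranges instead of all going to the split holding the highest keys.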

Query

The select can also be run from there, from a read-only transaction called a 'Snapshot' because it does MVCC consistent reads:

franck_pachot@superb-avatar-210409:~$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.cloud import spanner
>>> with spanner.Client().instance('franck').database('test').snapshot() as snapshot:
...     results = snapshot.execute_sql('SELECT COUNT(*) FROM DEMO1')
...     for row in results:
...         print(row)
...
[1000000]

The advantage of the read-only transaction is that it can do consistent reads without locking. The queries executed in a read-write transaction have to acquire some locks in order to guarantee consistency when reading across multiple nodes.
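
By the way, the same consistent read can be done as a one-liner from the shell (an equivalent I did not use above; gcloud executes the statement in a read-only context):

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases execute-sql test --instance=franck --sql='SELECT COUNT(*) FROM DEMO1'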

Interleave

So, you can look at the PRIMARY KEY as a partition by range, and we also have reference partitioning with INTERLEAVE IN PARENT. This reminds me of the Oracle CLUSTER segment, which is so rarely used because storing the tables separately is usually the better compromise between performance and flexibility for a multi-purpose database.

Here is my creation of DEMO2, where ID1 is a foreign key referencing DEMO1:

franck_pachot@superb-avatar-210409:~$ time gcloud spanner databases ddl update test --instance=franck --ddl='create table DEMO2 ( ID1 int64, ID2 int64, TEXT string(max) ) primary key (ID1,ID2), interleave in parent DEMO1 on delete cascade'
DDL updating...done.
'@type': type.googleapis.com/google.protobuf.Empty
 
real 0m24.418s
user 0m0.356s
sys 0m0.088s

I'm now inserting 5 detail rows for each parent row:

from google.cloud import spanner

database = spanner.Client().instance('franck').database('test')
# 5 child rows in DEMO2 for each parent key inserted into DEMO1
for j in range(1000):
    records = []
    for i in range(1000):
        for k in range(5):
            records.append((1 + j * 1000 + i, k, u'XXX' + str(i) + ' ' + str(k)))
    with database.batch() as batch:
        batch.insert(table='DEMO2', columns=('ID1', 'ID2', 'TEXT'), values=records)

This ran in 6 minutes.

Join (Cross Apply)

Here is the execution plan for

SELECT * FROM DEMO1 join DEMO2 using(ID1) where DEMO2.TEXT=DEMO1.TEXT

where I join the two tables and apply a filter on the join:
[Screenshot: Spanner Cross Apply execution plan]

Thanks to the INTERLEAVE, the join runs locally. Each row from DEMO1 (the Input of the Cross Apply) is joined with DEMO2 (the Map of the Cross Apply) locally, and only the result is serialized. On this small number of rows we do not see the benefit of having the rows on multiple nodes. There are only 2 nodes with rows here (2 local executions) and probably one node contains most of the rows. The average time per node is 10.72 seconds and the elapsed time is 20.9 seconds, so I guess that one node ran in 20.9 seconds and the other in 1.35 only.

The same without the tables interleaved (here as DEMO3) is faster to insert, but the join is more complex because DEMO1 must be distributed to all nodes.
[Screenshot: Spanner Distributed Cross Apply execution plan]
Without interleave, the input table of the local Cross Apply is a Batch Scan, which is actually like a temporary table distributed to all nodes (it seems to have 51 chunks here), created by the 'Create Batch'. This is called a Distributed Cross Apply.

So what?

Google Spanner has only some aspects of SQL and relational databases. But it is still, like the NoSQL databases, a database where the data model is focused on one use case only, because the data model and the data organization have to be designed for specific data access patterns.

 

The article Google Cloud Spanner – inserting data appeared first on Blog dbi services.

SYS_CONTEXT('userenv','module') behaviour in Database Vault

Tom Kyte - Thu, 2018-07-19 01:46
Hello Tom, I have implemented DB Vault on a 12.2.0.1.0 Oracle database. I created a Vault policy to block adhoc access to application schema using DB tools like Toad etc. The policy should allow only application connection to DB from application s...
Categories: DBA Blogs

Speaking At DOAG 2018 Conference And IT Tage 2018

Randolf Geist - Wed, 2018-07-18 15:29
I will be speaking at the yearly DOAG conference in December as well as at the IT Tage in November. My talk will be "Oracle Optimizer System Statistics Update 2018" where I summarize the history and current state of affairs regarding System Statistics and I/O calibration in recent Oracle versions like 12c and 18c.

A Quick Look At What's New In Oracle JET v5.1.0

OTN TechBlog - Wed, 2018-07-18 12:11

On June 18th, the v5.1.0 release of Oracle JET was made available. It was the 25th consecutive on-schedule release for Oracle JET. Details on the release schedule are provided here in the FAQ.

As indicated by the release number, v5.1.0 is a minor release, aimed at tweaking and consolidating features throughout the toolkit. As in other recent releases, new features have been added to support development of composite components, following the Composite Component Architecture (CCA). For details, see the entry on the new Template Slots in Duncan Mills's blog. Also, take note of the new design time metadata, as described in the release notes.

Aside from the work done in the CCA area, the key new features and enhancements to be aware of in the release are listed below, sorted alphabetically:

  • oj-chart - New "data" attribute: introduces new attributes, slots, and custom elements.
  • oj-film-strip - New "looping" attribute: specifies filmstrip navigation behavior, bounded ("off") or looping ("page").
  • oj-form-layout - Enhanced content flexibility: removes restrictions on the types of children allowed in the "oj-form-layout" component.
  • oj-gantt - New "dnd" attribute and "ojMove" event: provides new support for moving tasks via drag and drop.
  • oj-label-value - New component: provides enhanced layout flexibility for the "oj-form-layout" component.
  • oj-list-view - Enhanced "itemTemplate" slot: supports including the <LI> element in the template.
  • oj-swipe-actions - New component: provides a declarative way to add swipe-to-reveal functionality to items in the "oj-list-view" component.

For all the details on the items above, see the release notes.

Note: Be aware that in Oracle JET 7.0.0, support for Yeoman and Grunt will be removed from generator-oraclejet and ojet-cli. As a consequence, the ojet-cli will be the only way to use the Oracle JET tooling, e.g., to create new Oracle JET projects from that point on. Therefore, if you haven't transferred from using Yeoman and Grunt to ojet-cli yet, e.g., to command line calls such as "ojet create", take some time to move in that direction before the 7.0.0 release.
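
For example, where you previously scaffolded a project with Yeoman, the CLI equivalent looks like this (a sketch; "myapp" is just a placeholder and the npm package name is the one published at the time of writing):

npm install -g @oracle/ojet-cli
ojet create myapp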

As always, your comments and constructive feedback are welcome. If you have questions, or comments, please engage with the Oracle JET Community in the Discussion Forums and also follow @OracleJET on Twitter.

For organizations using Oracle JET in production, you're invited to be highlighted on the Oracle JET site, with the latest addition being a brand new Customer Success Story by Capgemini.

On behalf of the entire Oracle JET development team: "Happy coding!"

Critical Patch Update for July 2018 Now Available

Steven Chan - Wed, 2018-07-18 10:09

The Critical Patch Update (CPU) for July 2018 was released on July 17, 2018. Oracle strongly recommends applying the patches as soon as possible.

The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. 

Supported products that are not listed in the "Supported Products and Components Affected" Section of the advisory do not require new patches to be applied.

The Critical Patch Update Advisory is available at the following location:

It is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches.

The next four Critical Patch Update release dates are:

  • October 16, 2018
  • January 15, 2019
  • April 16, 2019
  • July 16, 2019
Categories: APPS Blogs

Oracle Expands Challenger Series with Chicago Event at XS Tennis Village

Oracle Press Releases - Wed, 2018-07-18 10:00
Press Release
Oracle Expands Challenger Series with Chicago Event at XS Tennis Village Free event reaffirms Oracle’s commitment to providing unparalleled opportunities for American players

Redwood Shores, Calif.—Jul 18, 2018

Continuing its strong support for American tennis, Oracle announced today it is adding a Chicago event to the Oracle Challenger Series, to be held at XS Tennis Village September 2-9, 2018 in conjunction with the Association of Tennis Professionals (ATP) and the Women’s Tennis Association (WTA).

The Oracle Challenger Series launched earlier this year with events in Newport Beach and Indian Wells, California, with the mission of providing unparalleled opportunities for up-and-coming American tennis players to secure both ranking points and prize money.

As part of Oracle’s commitment to growing the game of tennis nationally, the Oracle Challenger Series will look to make a positive impact on the communities where its events are held by donating $5,000 to the local Chicago chapter of the National Junior Tennis and Learning (NJTL) network. The NJTL provides free or low-cost tennis and education programming to more than 225,000 under-resourced youth in the United States.

“We’re adding Challenger tournaments because American tennis players need more chances to compete at home and make a career out of the sport,’’ said Oracle CEO Mark Hurd. “Oracle also wants to improve the quality of tennis. We’re deeply committed to the sport and as part of the new event in Chicago, we’re providing assistance to create better access for Americans to play tennis.’’ 

“We are thrilled to welcome the Oracle Challenger Series to Chicago this September,” said Kamau Murray, President and CEO of XS Tennis Village and Executive Director of XS Tennis and Education Foundation. “We’re proud to work with Oracle on this great event and support the incredible work that they do to promote American tennis at all levels of the game. Their commitment to the sport goes hand-in-hand with our mission at XS to provide a positive pathway to future success through tennis.”

The Chicago tournament will be a joint ATP Challenger Tour/WTA 125K Series event and pay equal prize money ($150,000 per Tour) for a total of $300,000. Both the women’s and men’s draws will consist of 32 singles players, 16 qualifying players and 16 doubles teams. The event will be free and open to the public.

The 2018-2019 Oracle Challenger Series will begin in Chicago, with additional events to be added at a later date. The Series will culminate at the 2019 BNP Paribas Open, the largest ATP World Tour and WTA combined two-week event in the world, held annually at the Indian Wells Tennis Garden, where the two American women and men who accumulate the most points over the course of the Series will receive wildcards into their respective singles main draws.

The Oracle Challenger Series builds on Oracle’s commitment to help support U.S. tennis at both the professional and collegiate level. Oracle sponsors the Oracle US Tennis Awards, two $100,000 grants awarded annually to assist young players as they transition from college into the professional ranks. In addition to sponsoring the Intercollegiate Tennis Association rankings, Oracle also hosts the Oracle ITA Masters tournament in Malibu, California and the Oracle ITA National Fall Championships which will be held at the Surprise Tennis Center in Surprise, Arizona in 2018.

For more information about the Oracle Challenger Series, visit www.oraclechallengerseries.com.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Triggers on materialized views

Tom Kyte - Wed, 2018-07-18 07:26
Are triggers on materialized views supported by Oracle? If so, is it good practice to use them?
Categories: DBA Blogs

IN & EXISTS

Tom Kyte - Wed, 2018-07-18 07:26
Tom: can you give me some examples of situations in which IN is better than EXISTS, and vice versa?
Categories: DBA Blogs

Initializing Contexts after getting a connection from a ConnectionPool

Tom Kyte - Wed, 2018-07-18 07:26
I understand best practice is to initialize a connection from a connection pool by clearing the Application Context. However, there are multiple namespaces. Should every namespace be cleared? How does one find the name of every namespace?
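
A sketch of one possible approach (not part of the excerpt; note that SESSION_CONTEXT only lists namespaces with attributes set in the current session, and MY_NAMESPACE is just an example):

SQL> select distinct namespace from session_context;
SQL> exec dbms_session.clear_all_context('MY_NAMESPACE');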
Categories: DBA Blogs

DDL for objects

Tom Kyte - Wed, 2018-07-18 07:26
Hi, I wanted to get the DDL for all objects in a database schema. I'm aware of DBMS_METADATA.GET_DDL to get the DDL from a PL/SQL block but was facing the below issue: the return type is HUGECLOB and I need it in a directly viewable form simil...
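
A common workaround to make the returned CLOB directly viewable in SQL*Plus (a sketch, not part of the excerpt; SCOTT.EMP is just an example object):

SQL> set long 1000000 pagesize 0 linesize 200
SQL> select dbms_metadata.get_ddl('TABLE','EMP','SCOTT') from dual;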
Categories: DBA Blogs

Oracle Database Performance tuning using Application Developer ( Application user)

Tom Kyte - Wed, 2018-07-18 07:26
This question is in the context where the app server and Oracle Database are hosted on a vendor cloud. They are in separate containers and the application will be in a separate container. Example: app server and DB on Oracle Cloud and GUI on customer's ...
Categories: DBA Blogs

V5 Systems Makes Cities Safer with Oracle Cloud

Oracle Press Releases - Wed, 2018-07-18 07:00
Press Release
V5 Systems Makes Cities Safer with Oracle Cloud Outdoor Industrial IoT platform company uses Oracle Cloud Infrastructure to quickly scale up security while dramatically reducing costs

Redwood Shores, Calif.—Jul 18, 2018

To help make cities safer quickly, V5 Systems has implemented Oracle Cloud Infrastructure as part of its security solution. As the pioneer of the world’s first self-powered outdoor security and computing platform, V5 Systems helps cities around the world address critical security issues.

Nestled in Silicon Valley, the City of Hayward was experiencing theft and drug crime around City Hall due to open areas and its close proximity to the main rail transportation for the Bay Area. They wanted to add video surveillance as an extra security layer, but to do so they needed access to power and communications. There was no fixed power or communications infrastructure where the crime was happening, and City Hall had just been renovated, so trenching was not an option. The City of Hayward was able to implement V5 Systems' portable video surveillance in less than 30 minutes per unit. Hayward avoided nearly $1 million in trenching fees and 911 calls dropped 60 percent within the first three months of deployment.

V5 Systems needed a cloud provider that met its needs of delivering real-time security to its customers. After reviewing a number of major cloud providers, V5 Systems discovered that although costs of most providers initially appeared low, the data retrieval and transmission costs critical to a video monitoring solution were high. With the enterprise-grade performance of Oracle Cloud Infrastructure, V5 Systems can scale-up any individual deployment if the processing and storage requirements of a security unit or customer increase, as well as scale-out capacity to serve additional customers as their needs grow. The company can better control its costs, and in turn, offer more affordable solutions to its end-customers.

“Our customers need consistent access to our service, and regularly monitor video, so outbound data performance and cost is important,” said Steve Yung, CEO, V5 Systems. “Traditionally video and sensor information has to run through multiple channels before first responders are notified. At a critical time, this delay could make a huge impact on the outcome of the security situation. The performance Oracle delivers has a significant impact on the outbound data so response time for our customers can be faster.”

V5 Systems rapidly enables and supports outdoor Industrial IoT applications. Its customers rely on V5 Systems’ mobile alerts to warn of threats and potential issues in real-time through 24/7 video analytics, AI-driven acoustic gunshot sensors and chemical detection. Several of its customers are using its application built on Oracle and V5 is in the process of transitioning other customers. Leveraging the flexibility and agility of the cloud, V5 Systems is also actively building tools that will spin up customized portals for new customers in minutes, significantly cutting down the traditionally lengthy process of configuring security.

“You can’t put a price on safety. V5 Systems has engineered a revolutionary power system that allows the ability to deploy sophisticated systems and computing systems wirelessly, in any outdoor environment,” said Kash Iftikhar, vice president of product and strategy, Oracle Cloud Infrastructure. “By leveraging Oracle Cloud Infrastructure, V5 Systems is able to deliver the reliability its customers need by conducting analytics and monitoring at the edge in record time so its customers can feel safer; all while achieving significant cost savings for its business.”

Contact Info
Danielle Tarp
Oracle
+1.650.506.2904
danielle.tarp@oracle.com
Quentin Nolibois
Burson-Marsteller PR for Oracle
+1.415.591.4097
quentin.nolibois@bm.com
About V5 Systems

V5 Systems is a ​California-based ​technology company that provides ​leading-edge ​portable, wireless, self-powered outdoor computing and security solutions for Industrial IoT applications. They deliver turnkey video surveillance and gunshot detection solutions that can be deployed in under 30 minutes per unit, while the computing platform itself can act as a host for 3rd party hardware and software integration. These solutions utilize a proprietary power management system which eliminates the need for fixed power and hard-wired communications. V5 Systems develops and optimizes all software and AI analytics to run at the edge, which is instrumental to delivering real-time information to its users. Working with state, local government, education and private enterprises V5 Systems delivers the next generation of Industrial IoT security and computing solutions to the outdoors.

About Oracle Cloud Infrastructure

Oracle Cloud Infrastructure combines the benefits of public cloud (on-demand, self-service, scalability, pay-for-use) with those benefits associated with on-premises environments (governance, predictability, control) into a single offering. Oracle Cloud Infrastructure takes advantage of a high-scale, high-bandwidth network that connects cloud servers to high-performance local, block, and object storage to deliver a cloud platform that yields the highest performance for traditional and distributed applications, as well as highly available databases.

With the acquisitions of Dyn and Zenedge, Oracle Cloud Infrastructure extended its offering to include Dyn’s best-in-class DNS and email delivery solutions and Zenedge’s next-generation Web Application Firewall (WAF) and Distributed Denial of Service (DDoS) capabilities.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Danielle Tarp

  • +1.650.506.2904

Quentin Nolibois

  • +1.415.591.4097

Announcement: Venue Confirmed For Upcoming Brussels “Oracle Indexing Internals and Best Practices” Seminar

Richard Foote - Wed, 2018-07-18 02:55
I can finally confirm the venue for my upcoming “Oracle Indexing Internals and Best Practices” seminar in beautiful Brussels, Belgium running on 27-28 September 2018. The venue will be the Regus Brussels City Centre Training Rooms Facility, Avenue Louise / Louizalaan 65, Stephanie Square, 1050, Brussels. Note: This will be the last public seminar I’ll run […]
Categories: DBA Blogs

Control File issues on duplicating with non patched Oracle version.

Yann Neuhaus - Wed, 2018-07-18 02:34

Introduction :

RMAN has the ability to duplicate, or clone, a database from a backup or from an active database.
It is possible to create a duplicate database on a remote server with the same file structure,
or on a remote server with a different file structure or on the local server with a different file structure.
For some old and non-patched Oracle versions, such as those earlier than 11.2.0.4, a duplicate (from active or backup) can be a real
challenge even for DBAs with years of experience, due to the different bugs encountered.

The scenario below focuses on control file issues revealed by duplicating an Oracle 11.2.0.2 EE database from active database.

<INFO> Make sure to use the nohup command-line utility, which allows a command, process, or shell script to keep running after you disconnect.

Demonstration :

Step1: Prepare your script:

vi script_duplicate.ksh

#!/bin/ksh
# Set your environment (adjust ORACLE_HOME to your installation)
export ORACLE_HOME=<your_oracle_home>
export PATH=$PATH:$ORACLE_HOME/bin
rman target sys/pwd@TNS_NAME_TARGET auxiliary sys/pwd@TNS_NAME_AUXILIARY log=duplicate.log cmdfile=/home/oracle/rman_bkup.cmd

vi rman_bkup.cmd
run
{
allocate channel ch1 device type disk;
allocate channel ch2 device type disk;
allocate channel ch3 device type disk;
allocate auxiliary channel dh1 device type disk;
allocate auxiliary channel dh2 device type disk;
allocate auxiliary channel dh3 device type disk;
duplicate target database to <AUXILIARY_NAME> from active database nofilenamecheck;
release channel ch3;
release channel ch2;
release channel ch1;
}

and launch it like that: nohup ./script_duplicate.ksh &
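
Since the duplicate then runs in the background, you can follow its progress in the log file declared in the script (log=duplicate.log):

tail -f duplicate.log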

Step2: Check instance parameters.
Depending on the PSU level of your instance, the duplicate can fail with this error even before it really starts.

RMAN-00571: ===================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS
RMAN-00571: ===================================================
RMAN-03002: failure of Duplicate Db command at 11/02/2011 06:05:48
RMAN-04014: startup failed: ORA-00600: internal error code, arguments: [kck_rls_check must use (11,0,0,0,0) or lower], [kdt.c], [9576], [11.2.0.2.0], [], [], [], [], [], [], [], []
RMAN-04017: startup error description: ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-00600: internal error code, arguments: [kck_rls_check must use (11,0,0,0,0) or lower], [kdt.c], [9576], [11.2.0.2.0], [], [], [], [], [], [], [], []
RMAN-04017: startup error description: ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance

According to Oracle Support note 1064264.1:

1. Edit the pfile, add parameter:
_compression_compatibility= "11.2.0"

2. Restart the instance using the pfile
SQL> startup pfile='<fullpath name of pfile>'

3. Create the SPFILE again
SQL> create spfile from pfile;

4. Restart the instance with the SPFILE
SQL> shutdown immediate;
SQL> startup

and relaunch the previous command (Step 1).

Step3: Control file issue when trying to open the database.
After transferring the datafiles, the duplicate will crash with these errors while trying to open the database.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 15/07/2018 17:39:30
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script



SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-19838: Cannot use this control file to open database

Basically, this is because of a known bug (Bug 11063122 in 11gR2):
the control file created during the duplicate stores the redo log file locations of the primary.
We need to recreate the control file, changing the locations of the redo log files and datafiles, and open the database with RESETLOGS.
In the control file recreation script, the database name is the source <db_name> and the directory names for the redo logs still point to the source database.

The workaround is :

1. Back up your control file to trace (on the cloned DB):

SQL> alter database backup controlfile to trace;
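
A handy variant of the same statement writes the script straight to a named file instead of the default trace directory (the file name here is just an example):

SQL> alter database backup controlfile to trace as '/tmp/control_backup.sql';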

2. Open the file and extract the RESETLOGS section, modifying it like this:

CREATE CONTROLFILE REUSE SET DATABASE "<new_db_name>" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 11680
LOGFILE
  GROUP 9  '<path_of_the_cloned_DB>/redo09.log'  SIZE 150M BLOCKSIZE 512,
  GROUP 10 '<path_of_the_cloned_DB>/redo10.log'  SIZE 150M BLOCKSIZE 512,
  GROUP 11 '<path_of_the_cloned_DB>/redo11.log'  SIZE 150M BLOCKSIZE 512,

DATAFILE
  '<path_of_the_cloned_DB>/system01.dbf',
  '<path_of_the_cloned_DB>/undotbs01.dbf',
  '<path_of_the_cloned_DB>/sysaux01.dbf',
  '<path_of_the_cloned_DB>/users01.dbf',
-------------more datafiles
CHARACTER SET EE8ISO8859P2;

Save as trace_control.ctl

3. SQL> alter system set db_name=<new db_name> scope=spfile;
4. SQL> startup nomount
5. SQL>@trace_control.ctl
      -- control file created and multiplexed to all the destinations mentioned in your spfile
6. SQL> alter database open resetlogs

<INFO> If your source DB had activity during the duplicate process, you will have to manually apply the required archivelogs.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 15/07/2018 19:21:30
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/u01/oradata/DBName/system01.dbf'

Search the source database for the archivelogs with sequence# greater than or equal to 399747 and apply them manually on the target DB.

If those are no longer available, you need to take an incremental backup to roll the cloned database forward.
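
A quick way to locate them on the source (a sketch; it assumes the logs are still registered in the control file):

SQL> select sequence#, name from v$archived_log where sequence# >= 399747 order by sequence#;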

7. SQL> recover database using backup controlfile;

ORA-00279: change 47260162325 generated at  15/07/2018 19:27:40 needed for thread 1
ORA-00289: suggestion : <path>o1_mf_1_399747_%u_.arc
ORA-00280: change 47260162325 for thread 1 is in sequence #399747

Once the required archivelog files have been applied, try again to open your database:

RMAN> alter database open resetlogs;

database opened

RMAN> exit

Conclusion:
If you're the kind of Oracle administrator who has the power to approve or deny, you must know how dangerous it is to run your applications on non-patched Oracle databases.
Your organization's data is better protected if you take advantage of the patches issued by Oracle and run your production data against supported Oracle versions only.

 

The article Control File issues on duplicating with non patched Oracle version appeared first on Blog dbi services.

Vibrant and Growing: The Current State of API Management

OTN TechBlog - Tue, 2018-07-17 23:00

"Vibrant and growing all the time!" That's how Andrew Bell, Oracle PaaS API Management Architect at Capgemini, describes the current state of API management. "APIs are the doors to organizations, the means by which organizations connect to one another, connect their processes to one another, and streamline those processes to meet customer needs. The API environment is growing rapidly as we speak," Bell says.

"API management today is quite crucial," says Bell's Capgemini colleague Sander Rensen, an Oracle PaaS lead and architect, "especially for clients who want to go on a journey of a digital transformation. For our clients, the ability to quickly find APIs and subscribe to them is a very crucial part of digital transformation.

"It's not just the public-facing view of APIs," observes Oracle ACE Phil Wilkins, a senior Capgemini consultant specializing in iPaaS. "People are realizing that APIs are an easier, simpler way to do internal decoupling. If I expose my back-end system in a particular way to another part of the organization — the same organization — I can then mask from you how I'm doing transformation or innovation or just trying to keep alive a legacy system while we try and improve our situation," Wilkins explains. "I think that was one of the original aspirations of WSDL and technologies like that, but we ended up getting too fine-grained and tying WSDLs to end products. Then the moment the product changed that WSDL changed and you broke the downstream connections."

Luis Weir, CTO of Capgemini's Oracle delivery unit and an Oracle Developer Champion and ACE Director, is just as enthusiastic about the state of API management, but sees a somewhat rocky road ahead for some organizations. "APIs are one thing, but the management of those APIs is something entirely different," Weir explains.

"API management is something that we're doing quite heavily, but I don't think all organizations have actually realized the importance of the full lifecycle management of the APIs. Sometimes people think of API management as just an API gateway. That’s an important capability, but there is far more to it,"

Weir wonders if organizations understand what it means to manage an API throughout its entire lifecycle.

Bell, Rensen, Wilkins, and Weir are the authors of Implementing Oracle API Platform Cloud Service, now available from Packt Publishing, and as you'll hear in this podcast, they bring considerable insight and expertise to this discussion of what's happening in API management. The conversation goes beyond the current state of API management to delve into architectural implications, API design, and how working in SOA may have left you with some bad habits. Listen!

This program was recorded on June 27, 2018.

The Panelists

Andrew Bell
Oracle PaaS API Management Architect, Capgemini

Sander Rensen
Oracle PaaS Lead and Architect, Capgemini

Luis Weir
CTO, Oracle DU, Capgemini
Oracle Developer Champion
Oracle ACE Director

Phil Wilkins
Senior Consultant specializing in iPaaS
Oracle ACE

Coming Soon

How has your role as a developer, DBA, or Sysadmin changed? Our next program will focus on the evolution of IT roles and the trends and technologies that are driving the changes.

Oracle Critical Patch Update July 2018 Oracle PeopleSoft Analysis and Impact

As with almost all previous Oracle E-Business Suite Critical Patch Updates (CPU), the July 2018 quarterly patch is significant and high-risk for PeopleSoft applications.  Despite the publicity, marketing, or naming of specific vulnerabilities, this quarter is no different than previous quarters in terms of risk and prioritization within your organization.

For this quarter, there are patches for 15 security vulnerabilities in PeopleSoft applications and PeopleTools:

10 - PeopleTools

2 - PeopleSoft Financials

2 - PeopleSoft HCM

1 - PeopleSoft Campus Solutions

11 of the 15 security vulnerabilities are remotely exploitable without authentication; therefore, an attacker can exploit PeopleSoft without any credentials. For this quarter, there are 7 cross-site scripting (XSS) vulnerabilities, 3 vulnerabilities in third-party libraries used in PeopleSoft, and 5 other types of vulnerabilities.

For PeopleTools, only 8.55 and 8.56 are supported.  Previous versions of PeopleTools must be upgraded in order to apply the security patches.
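
To confirm the PeopleTools release before planning the patch, the release can be read from the PSSTATUS table. A minimal sketch, assuming the default SYSADM schema owner and a hypothetical TNS alias of PSDB (adjust both for your installation):

$ sqlplus sysadm@PSDB
SQL> -- PSSTATUS stores the installed PeopleTools release; anything below 8.55 must be upgraded first
SQL> SELECT TOOLSREL FROM PSSTATUS;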

Tuxedo

Another vulnerability for Tuxedo JOLT (CVE-2018-3007) is fixed in this CPU; therefore, Tuxedo must also be patched. Configuration changes should also be made to the Tuxedo server to limit connections to both JSH and WSH, reducing the exposure of these services.
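
As an illustration of the kind of change involved, the Jolt (JSL) and workstation (WSL) listeners are defined in the Tuxedo UBBCONFIG and can be bound to an internal-only address so that JSH and WSH connections are unreachable from outside the network. This is a hedged sketch only; the group names, address, and ports are hypothetical, and the actual CLOPT values should be carried over from your existing configuration:

*SERVERS
# Hypothetical values: bind the Jolt listener (JSL) to an internal-only interface
JSL SRVGRP="JOLTGRP" SRVID=200 CLOPT="-A -- -n //10.0.5.21:9000 -m 3 -M 5 -I 5"
# Bind the workstation listener (WSL) the same way
WSL SRVGRP="WSLGRP" SRVID=300 CLOPT="-A -- -n //10.0.5.21:7000 -m 3 -M 5"

After editing, reload the configuration with tmloadcf and restart the domain for the change to take effect.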

WebLogic

A number of vulnerabilities in WebLogic are fixed in this CPU, including a vulnerability accessible via the T3 protocol. In addition to applying the appropriate WebLogic security patch, WebLogic should be configured to allow access only via the HTTPS protocol.
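
One way to enforce this is a WebLogic connection filter, which can deny the T3/T3S protocols from external addresses while still allowing HTTPS. Below is a sketch of rules for the built-in weblogic.security.net.ConnectionFilterImpl (configured under Domain > Security > Filter in the console); the subnet is a placeholder and the # annotations are explanatory only:

# rule format: target localAddress localPort action protocols
10.0.0.0/8 * * allow t3 t3s    # hypothetical internal subnet
0.0.0.0/0  * * deny  t3 t3s    # block external T3/T3S
0.0.0.0/0  * * allow https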

Oracle Database

For the July 2018 CPU, only database versions 11.2.0.4 and 12.1.0.2 are supported for security patches. For the database, there is an OJVM security patch, so either the combo patch or a separate OJVM patch must be applied to correct the vulnerability in the database's Java Virtual Machine (JVM), which is used by PeopleSoft.
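
As a sketch of the typical flow for the standalone OJVM patch on 12.1.0.2 (the patch directory below is a placeholder, and the 11.2.0.4 post-install steps differ, so always follow the patch README):

$ cd <OJVM_patch_directory>    # placeholder, not an actual path
$ opatch apply
$ sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup upgrade
SQL> exit
$ $ORACLE_HOME/OPatch/datapatch -verbose    # loads the SQL portion of the OJVM patch
$ sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup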

July 2018 Recommendations

As with almost all Critical Patch Updates, the security vulnerability fixes are significant and high-risk. Corrective action should be taken immediately for all PeopleSoft environments. The most at-risk implementations are Internet-facing environments, and Integrigy rates this CPU as high risk due to the large number of cross-site scripting (XSS) vulnerabilities that can be remotely exploited without authentication. These implementations should apply the CPU as soon as possible or use a virtual patching solution such as AppDefend.

Most PeopleSoft environments do not apply the CPU security patch in a timely manner and are vulnerable to full compromise of the application through exploitation of multiple vulnerabilities. If the CPU cannot be applied quickly, the only effective alternative is the use of Integrigy's AppDefend, an application firewall for Oracle PeopleSoft. AppDefend provides virtual patching and can effectively replace patching of PeopleSoft web security vulnerabilities.

CVEs referenced: CVE-2017-5645, CVE-2018-1275, CVE-2018-2990, CVE-2018-2977, CVE-2018-0739, CVE-2018-2951, CVE-2018-3068, CVE-2018-2929, CVE-2018-2919, CVE-2018-2985, CVE-2018-2986, CVE-2018-3016, CVE-2018-3072, CVE-2018-2970, CVE-2018-3076

Oracle Critical Patch Update July 2018 Oracle E-Business Suite Analysis and Impact

As with almost all previous Oracle E-Business Suite Critical Patch Updates (CPU), the July 2018 quarterly patch is significant and high-risk. 51 of the past 55 quarterly patches are significant and high-risk as they fix one or more SQL injection vulnerabilities or other damaging security vulnerabilities in the web application of Oracle E-Business Suite. Despite the publicity, marketing, or naming of specific vulnerabilities, this quarter is no different than previous quarters in terms of risk and prioritization within your organization.

For this quarter, 10 cross-site scripting (XSS) vulnerabilities and 4 other types of vulnerabilities are fixed. Most important, 13 of the 14 vulnerabilities are remotely exploitable without authentication.

Externally facing Oracle E-Business Suite environments (DMZ) running iStore should take immediate action to mitigate the three vulnerabilities impacting iStore. These web pages are allowed by the URL Firewall if the iStore module is enabled. Two of the three are cross-site scripting (XSS) vulnerabilities, which require interaction with the end-user, such as clicking a link, but allow an attacker to hijack the end-user's session.
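
Until the CPU is applied, one mitigating step is to stop exposing the affected iStore pages through the URL Firewall by commenting out their entries in url_fw.conf on the DMZ node. A hedged illustration only; the page name below is hypothetical, and the real entries are the ones shipped in your url_fw.conf template:

# url_fw.conf excerpt -- comment out iStore pages that are not required
# RewriteRule ^/OA_HTML/ibeExamplePage\.jsp$ - [L]    (hypothetical entry, now disabled)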

July 2018 Recommendations

As with almost all Critical Patch Updates, the security vulnerability fixes are significant and high-risk. Corrective action should be taken immediately for all Oracle E-Business Suite environments. The most at-risk implementations are those running Internet-facing self-service modules (iStore for this CPU), and Integrigy rates this CPU as high risk due to the large number of cross-site scripting (XSS) vulnerabilities that can be remotely exploited without authentication. These implementations should (1) apply the CPU as soon as possible or use a virtual patching solution such as AppDefend and (2) ensure the DMZ is properly configured according to the EBS-specific instructions and that the EBS URL Firewall is enabled and optimized.

Most Oracle E-Business Suite environments do not apply the CPU security patch in a timely manner and are vulnerable to full compromise of the application through exploitation of multiple vulnerabilities. If the CPU cannot be applied quickly, the only effective alternative is the use of Integrigy's AppDefend, an application firewall for Oracle E-Business Suite. AppDefend provides virtual patching and can effectively replace patching of EBS web security vulnerabilities.

Oracle E-Business Suite 12.1 and 12.2 Patching

For 12.2, there are no significant changes from previous CPUs; the minimum baseline remains 12.2.3 plus the R12.AD.C.DELTA.10 and R12.TXK.C.DELTA.10 roll-up patches. In addition to the cumulative EBS security patch, the July 2018 WebLogic 10.3.6 PSU must be applied (PSU 10.3.6.0.180717, Patch 27919965).
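
Note that WebLogic 10.3.6 patches are applied with the Smart Update tool (bsu) rather than OPatch. A minimal sketch, assuming the PSU zip has been extracted into the bsu cache directory; the patch identifier is a placeholder for the one listed in the Patch 27919965 README:

$ cd $MW_HOME/utils/bsu
$ ./bsu.sh -install -patch_download_dir=$MW_HOME/utils/bsu/cache_dir \
    -patchlist=<PATCH_ID> -prod_dir=$MW_HOME/wlserver_10.3 -verbose
$ ./bsu.sh -view -status=applied -prod_dir=$MW_HOME/wlserver_10.3    # verify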

For 12.1, there are no significant changes from previous CPUs, and the major requirement is that the Oracle Application Server must be upgraded to 10.1.3.5. No security patches are required for the Oracle Application Server.

Only the 12.1.0.2 and 11.2.0.4 versions of the Oracle Database are supported; databases on earlier versions must be upgraded before this quarter's database security patch can be applied. As with PeopleSoft, there is an OJVM security patch, so either the combo patch or a separate OJVM patch must be applied to correct the vulnerability in the database's Java Virtual Machine (JVM), which is used by Oracle E-Business Suite.

Oracle E-Business Suite 12.0

CPU support for Oracle E-Business Suite 12.0 ended January 2015 and there are no security fixes for this release.  Integrigy’s initial analysis of the CPU shows all 14 vulnerabilities are exploitable in 12.0. In order to protect your application environment, the Integrigy AppDefend application firewall for Oracle E-Business Suite provides virtual patching for all these exploitable web security vulnerabilities.

Oracle E-Business Suite 11i

As of April 2016, the 11i CPU patches are only available for Oracle customers with Tier 1 Support. Integrigy’s analysis of the July 2018 CPU shows at least 6 of the 14 vulnerabilities are also exploitable in 11i.  11i environments without Tier 1 Support should implement a web application firewall and virtual patching for Oracle E-Business Suite in order to remediate the large number of unpatched security vulnerabilities.  As of July 2018, an unsupported Oracle E-Business Suite 11i environment will have approximately 200 unpatched vulnerabilities – a number of which are high-risk SQL injection security bugs.

11i Tier 1 Support has been extended through December 2018; thus, October 2018 will be the final CPU for Oracle E-Business Suite 11i. At this time it is unclear whether Oracle will again extend support for another year; organizations should therefore plan that support will not be extended and begin to take corrective action to ensure their environments are properly secured.

CVEs Referenced: CVE-2018-2993, CVE-2018-3017, CVE-2018-2995, CVE-2018-3018, CVE-2018-3008, CVE-2018-2953, CVE-2018-2997, CVE-2018-2991, CVE-2018-3012, CVE-2018-2996, CVE-2018-2954, CVE-2018-2988, CVE-2018-2934, CVE-2018-2994
