Feed aggregator

hierarchical query with date consideration

Tom Kyte - Thu, 2018-07-19 20:06
How to include an effectivity date range check in a hierarchical query, such that a loop may exist when the date range is ignored, but not when the range is considered? <code>with output_tab as (select 'P1' as output_id, 100 as output_key, to_...
Categories: DBA Blogs

Oracle Security Training by Pete Finnigan in 2018

Pete Finnigan - Thu, 2018-07-19 19:46
Are you worried about the data in your databases being stolen? GDPR has just become law across the EU and the UK and affects businesses in other countries that process EU citizens' data. Maybe you store and process credit card....[Read More]

Posted by Pete On 19/07/18 At 02:04 PM

Categories: Security Blogs

Slides from SV JUG Jul-18th Meetup

Kuassi Mensah - Thu, 2018-07-19 14:28
SV JUGers, it was a great Meetup.
The slides are here.

Enjoy!

Oracle VBCS - Pay As You Go Cloud Model Experience Explained

Andrejus Baranovski - Thu, 2018-07-19 14:03
If you are considering the VBCS cloud service from Oracle, this post may be useful. I will share my experience with the Pay As You Go model.

Two payment models are available:

1. Pay As You Go - good when using VBCS from time to time. Can be terminated at any time
2. Monthly Flex - good when you need to run VBCS 24/7. Requires a commitment and can't be terminated at any time

When you create an Oracle Cloud account, you initially get a 30-day free trial period. At the end of that period (or earlier), you can upgrade to a billable plan. To upgrade, go to account management and choose to upgrade the promotional offer - you will be given the choice to go with Pay As You Go or Monthly Flex:


As soon as you upgrade to Pay As You Go, you will start seeing the monthly usage amount in the dashboard. It also shows the hourly usage of the VBCS instance, for which you will be billed:


Click on the monthly usage amount to see a detailed view of billing per service. When the VBCS instance is stopped (in the case of Pay As You Go), you are billed only for hardware storage (Compute Classic) - a relatively small amount:


There are two options for creating a VBCS instance - autonomous VBCS or customer-managed VBCS. To be able to stop/start the VBCS instance and avoid billing when it is not in use (in the case of Pay As You Go), make sure to go with customer-managed VBCS. In this example, the VBCS instance was used for only 1 hour and then stopped; it can be started again at any time:


To manage the VBCS instance, navigate to the Oracle Cloud Stack UI. From here you can start/stop both the DB and VBCS in a single action. It is not enough to stop VBCS - make sure to stop the DB too, if you are not using it:

Playing With Service Relocation 12c

Michael Dinh - Thu, 2018-07-19 09:14
With 12c, use the verbose option (-v) to display running services.
[oracle@racnode-dc1-1 rac_relocate]$ srvctl -V
srvctl version: 12.1.0.2.0

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status instance -d hawk -i hawk1 -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status instance -d hawk -i hawk2 -v
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

There is an option to provide a comma-delimited list of services to check their status.
Unfortunately, the same option is not available for relocation, which I fail to understand.
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status service -d hawk -s "p11,p12,p13,p14"
Service p11 is running on instance(s) hawk1
Service p12 is running on instance(s) hawk1
Service p13 is running on instance(s) hawk1
Service p14 is running on instance(s) hawk1

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status service -d hawk -s "p21,p22,p23,p24,p25"
Service p21 is running on instance(s) hawk2
Service p22 is running on instance(s) hawk2
Service p23 is running on instance(s) hawk2
Service p24 is running on instance(s) hawk2
Service p25 is running on instance(s) hawk2

I am puzzled that checking the status of services accepts a delimited list whereas relocation does not.
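Until that gap is closed, a comma-delimited list can be expanded in the shell. This is a hedged sketch (the function name and the dry-run echo are mine, not from the original scripts): it only prints the srvctl commands it would run, so remove the echo to execute them.

```shell
# Expand a comma-delimited service list into one
# "srvctl relocate service" command per service (dry run: prints only).
relocate_list() {
  db=$1; old=$2; new=$3; svclist=$4
  oldifs=$IFS
  IFS=","
  for s in $svclist; do
    # Drop the leading echo to actually relocate
    echo "srvctl relocate service -d $db -service $s -oldinst ${db}${old} -newinst ${db}${new}"
  done
  IFS=$oldifs
}

relocate_list hawk 1 2 "p11,p12,p13,p14"
```

This mirrors what the relocate_service.sh script below does, minus the status capture.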

I have blogged about new features for service failover: 12.1 Improved Service Failover

Another test shows that it works as it should.

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 ~]$ srvctl stop instance -d hawk -instance hawk1 -failover

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$


[root@racnode-dc1-1 ~]# crsctl stop crs
[root@racnode-dc1-1 ~]# crsctl start crs


[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$

[oracle@racnode-dc1-1 ~]$ srvctl start database -d hawk

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$

However, the requirement is to relocate services rather than fail them over.

Here are the scripts and a demo for that.

Note: the scripts only work for a two-node RAC where each service runs on a single instance.

[oracle@racnode-dc1-1 ~]$ srvctl config service -d hawk |egrep 'Service name|instances'
Service name: p11
Preferred instances: hawk1
Available instances: hawk2
Service name: p12
Preferred instances: hawk1
Available instances: hawk2
Service name: p13
Preferred instances: hawk1
Available instances: hawk2
Service name: p14
Preferred instances: hawk1
Available instances: hawk2
Service name: p21
Preferred instances: hawk2
Available instances: hawk1
Service name: p22
Preferred instances: hawk2
Available instances: hawk1
Service name: p23
Preferred instances: hawk2
Available instances: hawk1
Service name: p24
Preferred instances: hawk2
Available instances: hawk1
Service name: p25
Preferred instances: hawk2
Available instances: hawk1
[oracle@racnode-dc1-1 ~]$
DEMO:
[oracle@racnode-dc1-1 rac_relocate]$ ls *relocate*.sh
relocate_service.sh  validate_relocate_service.sh

[oracle@racnode-dc1-1 rac_relocate]$ ls *restore*.sh
restore_service_instance1.sh  restore_service_instance2.sh
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ SAVE SERVICES LOCATION AND PREVENT ACCIDENTAL OVERWRITE
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v > /tmp/service.org

[oracle@racnode-dc1-1 rac_relocate]$ chmod 400 /tmp/service.org; ll /tmp/service.org; cat /tmp/service.org
-r-------- 1 oracle oinstall 222 Jul 18 14:54 /tmp/service.org
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v > /tmp/service.org
-bash: /tmp/service.org: Permission denied
[oracle@racnode-dc1-1 rac_relocate]$

	
========================================================================
+++++++ RELOCATE SERVICES FROM INSTANCE 1 TO 2

Validate is similar to RMAN validate:
no relocation is performed; the commands are only displayed for verification.
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ ./validate_relocate_service.sh
./validate_relocate_service.sh: line 4: 1: ---> USAGE: ./validate_relocate_service.sh -db_unique_name -oldinst# -newinst#

[oracle@racnode-dc1-1 rac_relocate]$ ./validate_relocate_service.sh hawk 1 2
+ OUTF=/tmp/service_1.conf
+ srvctl status instance -d hawk -instance hawk1 -v
+ ls -l /tmp/service_1.conf
-rw-r--r-- 1 oracle oinstall 109 Jul 18 14:59 /tmp/service_1.conf
+ cat /tmp/service_1.conf
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
+ set +x

**************************************
***** SERVICES THAT WILL BE RELOCATED:
**************************************
srvctl relocate service -d hawk -service p11 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p12 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p13 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p14 -oldinst hawk1 -newinst hawk2


[oracle@racnode-dc1-1 rac_relocate]$ ./relocate_service.sh hawk 1 2
-rw-r--r-- 1 oracle oinstall 109 Jul 18 15:00 /tmp/service_1.conf
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
+ srvctl relocate service -d hawk -service p11 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p12 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p13 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p14 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl status instance -d hawk -instance hawk1 -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
+ srvctl status instance -d hawk -instance hawk2 -v
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ set +x
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ RELOCATE SERVICES FROM INSTANCE 2 TO 1
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ ./relocate_service.sh hawk 2 1
-rw-r--r-- 1 oracle oinstall 129 Jul 18 15:02 /tmp/service_2.conf
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ srvctl relocate service -d hawk -service p11 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p12 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p13 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p14 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p21 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p22 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p23 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p24 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p25 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl status instance -d hawk -instance hawk2 -v
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.
+ srvctl status instance -d hawk -instance hawk1 -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ set +x
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ RESTORE SERVICES FOR INSTANCE
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ ./restore_service_instance2.sh
./restore_service_instance2.sh: line 4: 1: ---> USAGE: ./restore_service_instance2.sh -db_unique_name

[oracle@racnode-dc1-1 rac_relocate]$ ./restore_service_instance2.sh hawk
+ srvctl relocate service -d hawk -service p21 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p22 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p23 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p24 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p25 -oldinst hawk1 -newinst hawk2
+ set +x

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 rac_relocate]$
CODE:


========================================================================
+++++++ validate_relocate_service.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OLD=${2:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
NEW=${3:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
set -x
# Capture the current service list for the source instance
OUTF=/tmp/service_${OLD}.conf
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v > $OUTF
ls -l $OUTF;cat $OUTF
set +x
# Field 11 of the status line is the comma-delimited service list; strip the trailing period
export svc=`tail -1 $OUTF | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
echo
echo "**************************************"
echo "***** SERVICES THAT WILL BE RELOCATED:"
echo "**************************************"
# Display (but do not execute) the relocate command for each service
for s in ${svc}
do
echo "srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}${OLD} -newinst ${DB}${NEW}"
done
exit
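The awk pipeline that extracts the service list can be exercised standalone; the sample line below is copied from the srvctl output shown earlier:

```shell
# Parse a "srvctl status instance ... -v" line: field 11 holds the
# comma-delimited service list, with a trailing period to strip.
LINE='Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.'
svc=$(echo "$LINE" | awk -F" " '{print $11}' | awk '{$0=substr($0,1,length($0)-1); print $0}')
echo "$svc"
# → p11,p12,p13,p14
```

Note that this parsing is positional: it assumes the exact wording of the 12c status message, so a message format change would break it.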

========================================================================
+++++++ relocate_service.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OLD=${2:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
NEW=${3:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OUTF=/tmp/service_${OLD}.conf
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v > $OUTF
ls -l $OUTF;cat $OUTF
export svc=`tail -1 $OUTF | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}${OLD} -newinst ${DB}${NEW}
set +x
done
set -x
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v
srvctl status instance -d ${DB} -instance ${DB}${NEW} -v
set +x
exit

========================================================================
+++++++ restore_service_instance1.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name"}
export svc=`head -1 /tmp/service.org | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}2 -newinst ${DB}1
set +x
done
exit

========================================================================
+++++++ restore_service_instance2.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name"}
export svc=`tail -1 /tmp/service.org | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}1 -newinst ${DB}2
set +x
done
exit

How to install Docker Enterprise Edition on CentOS 7?

Yann Neuhaus - Thu, 2018-07-19 07:54

In this blog post we will see how to install the Docker EE trial edition on CentOS 7 hosts. As you may or may not know, Docker has two editions: Docker Community Edition (CE) and Docker Enterprise Edition (EE). To keep it simple, let's say that Docker EE is designed for production environments. More info here.

 

This will be our architecture:

  • 1 manager node
    • hostname: docker-ee-manager1
  • 1 worker node + Docker Trust Registry (DTR) node
    • hostname: docker-ee-worker1

Both nodes should be in the same network range.

We will assume that CentOS 7 is already installed on all hosts:

[root@docker-ee-manager1 ~] cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

 

[root@docker-ee-worker1 ~]$ cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

 

 Create docker user and group
[root@docker-ee-manager1 ~]$ groupadd docker
[root@docker-ee-manager1 ~]$ useradd -g docker docker
[root@docker-ee-manager1 ~]$ echo "docker ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@docker-ee-manager1 ~]$ su - docker

Do the same on worker

[root@docker-ee-worker1 ~]$ groupadd docker
[root@docker-ee-worker1 ~]$ useradd -g docker docker
[root@docker-ee-worker1 ~]$ echo "docker ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@docker-ee-worker1 ~]$ su - docker

 

 

Get the Docker URL for installing Docker EE

Then go to this link and make sure that you already have a Docker account - it's free and you can create one very quickly.

 

 

Fill in the form and you will get access to this:

[screenshots: Docker Store subscription page with the repository URL and license key]

Copy the URL and save the license key in a safe location; you will need them later.

 

1. Configure Docker URL
[docker@docker-ee-manager1 ~]$ export DOCKERURL="<YOUR_LINK>"
[docker@docker-ee-manager1 ~]$ sudo -E sh -c 'echo "$DOCKERURL/centos" > /etc/yum/vars/dockerurl'

 

2. Install required packages
[docker@docker-ee-manager1 ~]$ sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2

 

3. Add the Docker-EE repository
[docker@docker-ee-manager1 ~]$ sudo -E yum-config-manager \
--add-repo \
"$DOCKERURL/centos/docker-ee.repo"

 

4. Install docker-ee package
[docker@docker-ee-manager1 ~]$ sudo yum -y install docker-ee
[docker@docker-ee-manager1 ~]$ sudo systemctl enable docker.service
[docker@docker-ee-manager1 ~]$ sudo systemctl start docker.service

 

Repeat steps 1 to 4 on the worker1 node

 

Install UCP on manager

A simple command - just run this on your manager:

 

[docker@docker-ee-manager1 ~]$ docker container run --rm -it --name ucp   -v /var/run/docker.sock:/var/run/docker.sock   docker/ucp:3.0.2 install   --host-address <YOUR_IP>   --interactive
INFO[0000] Your engine version 17.06.2-ee-15, build 64ddfa6 (3.10.0-514.el7.x86_64) is compatible with UCP 3.0.2 (736cf3c)
Admin Username: admin
Admin Password:
Confirm Admin Password:
WARN[0014] None of the hostnames we'll be using in the UCP certificates [docker-ee-manager1 127.0.0.1 172.17.0.1 <YOUR_IP>] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
INFO[0000] Found existing UCP config com.docker.ucp.config-2
Do you want to proceed with the install with config com.docker.ucp.config-2? (y/n): y
y
INFO[0032] Installing UCP with host address 10.29.14.101 - If this is incorrect, please specify an alternative address with the '--host-address' flag
INFO[0032] Deploying UCP Service... (waiting for all 2 nodes to complete)
INFO[0083] Installation completed on docker-ee-manager1 (node uvzvuefehznf22k4wa5zg9cy1)
INFO[0083] Installation completed on docker-ee-worker1 (node z7gq7z3336jnwcyojyqq1h3wa)
INFO[0083] UCP Instance ID: x0fg0phnkgzm5730thoncucn2
INFO[0083] UCP Server SSL: SHA-256 Fingerprint=E6:2F:38:69:5D:26:A8:06:D3:8B:11:69:D9:DC:3A:77:CE:16:EA:23:9C:D0:D8:8F:34:D6:97:9D:4B:D2:E2:D2
INFO[0083] Login to UCP at https://<YOUR_IP>1:443
INFO[0083] Username: admin
INFO[0083] Password: (your admin password)

If there is an insecure certificate warning, ignore it and accept the exception. We can now see the UCP admin interface: enter your credentials and upload your license key.

[screenshots: UCP login page and UCP dashboard]

Adding a worker node

Click on Nodes, then click Add Node. Tell UCP that you want to deploy a new worker node and copy the command displayed. Then connect to the worker and run this command to join it to the cluster:

[screenshots: Nodes page, Add Node dialog, worker role selection]

[docker@docker-ee-worker1 ~]$ docker swarm join --token SWMTKN-1-4kt4gyk00n69tiywlzhst8dwsgo4oblylnsl1aww2048isi44u-7j9hmcrsn3lr048yu30xlnsv7 <IP_OF_MANAGER>:2377
This node joined a swarm as a worker.

 

Now, we have two nodes: one manager and one worker

 

[screenshot: UCP node list showing both nodes]

 

Install Docker Trusted Registry

 

Docker EE includes DTR, a secure registry where you can store your Docker images. We will install DTR on the worker node; it is not recommended to install it on a manager node.

To install it, you just need to run this command:

[docker@docker-ee-worker1 ~]$ docker run -it --rm docker/dtr install --ucp-node docker-ee-worker1 --ucp-url https://<IP_OF_MANAGER> --ucp-username admin --ucp-password <YOUR_PASSWORD> --ucp-ca "-----BEGIN CERTIFICATE-----
MIIBggIUJ+Y+MFXH1XcyJnCU4ACq26v5ZJswCgYIKoZIzj0EAwIw
HTEbMBkGA1UEAxMSVUNQIENsaWVudCBSb290IENBMB4XDTE4MDcxOTA4MjEwMFoX
DTIzMDcxODA4MjEwMFowHTEbMBkGA1UEAxMSVUNQIENsaWVudCBSb290IENBMFkw
EwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEDJxHOIhHoV4NBZGnEQClFShjQfpoL5mQ
LH7E6x6GL4AexYtdWgGIcOlV2NXQpdadBK9cZG2z6r7+zwCj7EP/iqNFMEMwDgYD
VR0P7ojp1CIMAoGCCqGSM49BAMCA0gAMEUCIQDqbBiCqXgFdtIb6uP9
EdDTI1YGWn97AFPU+YJ9s1/CSAIgBsqIn1v7BVNjJ3AeUQfo1d8Kfc//ZwHYr4XW
uWIHmkM=
-----END CERTIFICATE-----"

You can find the certificate here: https://<IP_OF_MANAGER>/manage/settings/certs

[screenshot: certificate location in the UCP settings]

Then go to the DTR URL, which is https://<IP_OF_WORKER>, and enter your credentials

 

[screenshot: DTR login page]

 

 

 

Here we are:

 

[screenshot: the DTR web UI]

Congratulations, you have just installed Docker EE. Hope this helps :-)

 

 

The article How to install Docker Enterprise Edition on CentOS 7? first appeared on the dbi services blog.

Google Cloud Spanner – inserting data

Yann Neuhaus - Thu, 2018-07-19 04:17

In a previous post I created a Google Cloud Spanner database and inserted a few rows from the GUI. This is definitely not a solution for many rows, so here is a post about using the command line.

If I start the Google Shell from the icon on the Spanner page for my project, everything is already set. But if I run it from elsewhere, using https://console.cloud.google.com/cloudshell as I did in A free persistent Google Cloud service with Oracle XE, I have to set the project:

franck_pachot@cloudshell:~$ gcloud config set project superb-avatar-210409
Updated property [core/project].
franck_pachot@superb-avatar-210409:~$

Instance

I create my Spanner instance with 3 nodes across the world:
franck_pachot@superb-avatar-210409:~$ time gcloud spanner instances create franck --config nam-eur-asia1 --nodes=3 --description Franck
Creating instance...done.
 
real 0m3.940s
user 0m0.344s
sys 0m0.092s

Database

and Spanner database – created in 6 seconds:

franck_pachot@superb-avatar-210409:~$ time gcloud spanner databases create test --instance=franck
Creating database...done.
real 0m6.832s
user 0m0.320s
sys 0m0.128s

Table

The DDL for table creation can also be run from there:

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases ddl update test --instance=franck --ddl='create table DEMO1 ( ID1 int64, TEXT string(max) ) primary key (ID1)'
DDL updating...done.
'@type': type.googleapis.com/google.protobuf.Empty

I’m now ready to insert one million rows. Here is my table:

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases ddl describe test --instance=franck
--- |-
CREATE TABLE DEMO1 (
ID1 INT64,
TEXT STRING(MAX),
) PRIMARY KEY(ID1)

Insert

The gcloud command line offers only a limited insert capability:

franck_pachot@superb-avatar-210409:~$ time for i in $(seq 1 1000000) ; do gcloud beta spanner rows insert --table=DEMO1 --database=test --instance=franck --data=ID1=${i},TEXT=XXX${i} ; done
commitTimestamp: '2018-07-18T11:09:45.065684Z'
commitTimestamp: '2018-07-18T11:09:50.433133Z'
commitTimestamp: '2018-07-18T11:09:55.752857Z'
commitTimestamp: '2018-07-18T11:10:01.044531Z'
commitTimestamp: '2018-07-18T11:10:06.285764Z'
commitTimestamp: '2018-07-18T11:10:11.106936Z'
^C

Ok, let’s stop there. Calling a service for each row is not efficient with a latency of 5 seconds.

API

I’ll use the API from Python. Basically, a connection is a Spanner Client:

franck_pachot@superb-avatar-210409:~$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.cloud import spanner
>>> spanner_client = spanner.Client()
>>> instance = spanner_client.instance('franck')
>>> database = instance.database('test')
>>>

Batch Insert

With this I can send batches of rows to insert. Here is the full Python script I used to insert one million rows, in batches of 1000:

from google.cloud import spanner
spanner_client = spanner.Client()
instance = spanner_client.instance('franck')
database = instance.database('test')
for j in range(1000):
    records = []
    for i in range(1000):
        records.append((1 + j * 1000 + i, u'XXX' + str(i)))
    with database.batch() as batch:
        batch.insert(table='DEMO1', columns=('ID1', 'TEXT',), values=records)

This takes 2 minutes:

franck_pachot@superb-avatar-210409:~$ time python3 test.py
 
real 2m52.707s
user 0m21.776s
sys 0m0.668s
franck_pachot@superb-avatar-210409:~$
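As an aside, the hard-coded inner loop in the script above can be factored into a small, reusable batching helper. This is only a sketch of the slicing logic (the names are mine) and needs no Spanner connection to verify:

```python
def batches(rows, size):
    """Yield successive chunks of at most `size` rows,
    each suitable for one database.batch() call."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

# 10,000 sample rows in the same (ID1, TEXT) shape as the script above
rows = [(i, u'XXX' + str(i)) for i in range(1, 10001)]
chunks = list(batches(rows, 1000))
# 10 chunks of 1000 rows each
```

Keeping the batch size configurable makes it easy to experiment with the trade-off between round trips and mutation-size limits.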

If you remember my list of blog posts on Variations on 1M rows insert, that's not so fast. But remember that the rows are distributed across 3 nodes on 3 continents, and here, inserting constantly increasing values sends all batched rows to the same node. The PRIMARY KEY in Google Spanner is not only there to declare a constraint; it also determines the physical organization of the data.
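A common workaround for this hotspot pattern (my sketch, not from the original post) is to scatter a monotonically increasing key, for example by bit-reversing the ID, so consecutive inserts land on different key ranges; the application must then look rows up by the scattered value:

```python
def scatter_id(n: int, bits: int = 64) -> int:
    """Bit-reverse an integer ID so consecutive values spread
    across the keyspace instead of all hitting one split."""
    result = 0
    for _ in range(bits):
        result = (result << 1) | (n & 1)
        n >>= 1
    return result

# Consecutive IDs become widely separated keys
keys = [scatter_id(i) for i in (1, 2, 3)]
```

Because bit-reversal is its own inverse, applying scatter_id twice returns the original ID, so the mapping is reversible.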

Query

The select can also be run from there, from a read-only transaction called a 'Snapshot' because it does MVCC consistent reads:

frank_pachot@superb-avatar-210409:~$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.cloud import spanner
>>> with spanner.Client().instance('franck').database('test').snapshot() as snapshot:
...     results = snapshot.execute_sql('SELECT COUNT(*) FROM DEMO1')
...     for row in results:
...         print(row)
...
[1000000]

The advantage of a read-only transaction is that it can do consistent reads without locking. Queries executed in a read-write transaction have to acquire locks in order to guarantee consistency when reading across multiple nodes.

Interleave

So you can look at the PRIMARY KEY as a partition by range, and we also have reference partitioning with INTERLEAVE IN PARENT. This reminds me of the Oracle CLUSTER segment, which is so rarely used because storing the tables separately is usually the better compromise between performance and flexibility for a multi-purpose database.

Here is my creation of DEMO2 where ID1 is a foreign key referencing DEMO1

franck_pachot@superb-avatar-210409:~$ time gcloud spanner databases ddl update test --instance=franck --ddl='create table DEMO2 ( ID1 int64, ID2 int64, TEXT string(max) ) primary key (ID1,ID2), interleave in parent DEMO1 on delete cascade'
DDL updating...done.
'@type': type.googleapis.com/google.protobuf.Empty
 
real 0m24.418s
user 0m0.356s
sys 0m0.088s

I’m now inserting 5 detail rows per each parent row:

from google.cloud import spanner
database = spanner.Client().instance('franck').database('test')
for j in range(1000):
    records = []
    for i in range(1000):
        for k in range(5):
            records.append((1 + j * 1000 + i, k, u'XXX' + str(i) + ' ' + str(k)))
    with database.batch() as batch:
        batch.insert(table='DEMO2', columns=('ID1', 'ID2', 'TEXT'), values=records)

This ran in 6 minutes.

Join (Cross Apply)

Here is the execution plan for

SELECT * FROM DEMO1 join DEMO2 using(ID1) where DEMO2.TEXT=DEMO1.TEXT

where I join the two tables and apply a filter on the join:
[screenshot: execution plan showing a local Cross Apply]

Thanks to the INTERLEAVE, the join runs locally. Each row from DEMO1 (the Input of the Cross Apply) is joined with DEMO2 (the Map of the Cross Apply) locally, and only the result is serialized. With this small number of rows we do not see the benefit of having the rows on multiple nodes. There are only 2 nodes with rows here (2 local executions), and probably one node contains most of the rows. The average time per node is 10.72 seconds and the elapsed time is 20.9 seconds, so I guess that one node ran in 20.9 seconds and the other in 1.35 only.

The same setup without the tables interleaved (here as DEMO3) is faster to insert, but the join is more complex because DEMO1 must be distributed to all nodes.
[screenshot: execution plan showing a Distributed Cross Apply]
Without interleave, the input table of the local Cross Apply is a Batch Scan, which acts like a temporary table distributed to all nodes (it seems to have 51 chunks here), created by the 'Create Batch' step. This is called a Distributed Cross Apply.

So what?

Google Spanner has only some aspects of SQL and relational databases. But like the NoSQL databases, it remains a database where the data model is focused on one use case, because the data model and the data organization have to be designed for the specific data access patterns.

 

The article Google Cloud Spanner – inserting data first appeared on the dbi services blog.

VirtualBox 5.2.16

Tim Hall - Thu, 2018-07-19 02:57

Hot on the heels of 5.2.14 two weeks ago, we now have VirtualBox 5.2.16.

The downloads and changelog are in the usual places.

I’ve done the install on my Windows 10 PC at work and my Windows 10 laptop at home, and in both cases it worked fine. I can’t see any problems using it with Vagrant 2.1.2 either.

I would have a go at installing it on my MacBook Pro, only the latest macOS updates have turned it into a brick again. Nothing changes…

Cheers

Tim…

VirtualBox 5.2.16 was first posted on July 19, 2018 at 8:57 am.

SYS_CONTEXT('userenv','module') behaviour in Database Vault

Tom Kyte - Thu, 2018-07-19 01:46
Hello Tom, I have implemented DB Vault on a 12.2.0.1.0 Oracle database. I created a Vault policy to block ad hoc access to the application schema using DB tools like Toad etc. The policy should allow only application connections to the DB from application s...
Categories: DBA Blogs

Oracle Load Balancer Classic configuration with Terraform

OTN TechBlog - Thu, 2018-07-19 01:30

(Originally published on Medium)

This article provides an introduction to using the Load Balancer resources to provision and configure an Oracle Cloud Infrastructure Load Balancer Classic instance using Terraform.

When using the Load Balancer Classic resources with the opc Terraform provider, the lbaas_endpoint attribute must be set in the provider configuration.

provider "opc" {
  version         = "~> 1.2"
  user            = "${var.user}"
  password        = "${var.password}"
  identity_domain = "${var.compute_service_id}"
  endpoint        = "${var.compute_endpoint}"
  lbaas_endpoint  = "https://lbaas-1111111.balancer.oraclecloud.com"
}

First we create the main Load Balancer instance resource. The Server Pool, Listener and Policy resources will be created as child resources associated with this instance.

resource "opc_lbaas_load_balancer" "lb1" {
  name              = "examplelb1"
  region            = "uscom-central-1"
  description       = "My Example Load Balancer"
  scheme            = "INTERNET_FACING"
  permitted_methods = ["GET", "HEAD", "POST"]
  ip_network        = "/Compute-${var.domain}/${var.user}/ipnet1"
}

To define the set of servers the load balancer will be directing traffic to, we create a Server Pool, sometimes referred to as an origin server pool. Each server is defined by the combination of the target IP address, or hostname, and port. For brevity, this example assumes we already have a couple of instances on an existing IP Network with a web service running on port 8080.

resource "opc_lbaas_server_pool" "serverpool1" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "serverpool1" servers = ["192.168.1.2:8080", "192.168.1.3:8080"] vnic_set = "/Compute-${var.domain}/${var.user}/vnicset1" }

The Listener resource defines what incoming traffic the Load Balancer will direct to a specific server pool. Multiple Server Pools and Listeners can be defined for a single Load Balancer instance. For now we’ll assume all the traffic is HTTP, both to the load balancer and between the load balancer and the server pool. We’ll look at securing traffic with HTTPS later. In this example the load balancer is managing inbound requests for a site  http://mywebapp.example.com  and directing them to the server pool we defined above.

resource "opc_lbaas_listener" "listener1" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "http-listener" balancer_protocol = "HTTP" port = 80 virtual_hosts = ["mywebapp.example.com"] server_protocol = "HTTP" server_pool = "${opc_lbaas_server_pool.serverpool1.uri}" policies = [ "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}", ] }

Policies are used to define how the Listener processes the incoming traffic. In the Listener definition we are referencing a Load Balancing Mechanism Policy to set how the load balancer allocates the traffic across the available servers in the server pool. Additional policy types could also be defined, for example to control session affinity.

resource "opc_lbaas_policy" "load_balancing_mechanism_policy" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "roundrobin" load_balancing_mechanism_policy { load_balancing_mechanism = "round_robin" } }

With that, our first basic Load Balancer configuration is complete. Well, almost. The last step is to configure a DNS CNAME record to point the source domain name (e.g. mywebapp.example.com) to the canonical host name of the load balancer instance. The exact steps to do this will depend on your DNS provider. To get the canonical_host_name, add the following output:

output "canonical_host_name" {
  value = "${opc_lbaas_load_balancer.lb1.canonical_host_name}"
}
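Purely as an illustration: if the zone happens to be managed by a DNS provider with Terraform support, the CNAME record can be created in the same configuration. Route 53 is used here only as an example of such a provider, and the zone_id variable is an assumption:

```hcl
# Illustrative only: assumes the domain's zone is hosted in AWS Route 53
# and that var.zone_id identifies that hosted zone.
resource "aws_route53_record" "lb_cname" {
  zone_id = "${var.zone_id}"
  name    = "mywebapp.example.com"
  type    = "CNAME"
  ttl     = 300
  records = ["${opc_lbaas_load_balancer.lb1.canonical_host_name}"]
}
```

For any other DNS provider the idea is the same: point a CNAME at the canonical_host_name output.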

Helpful Hint: if you are just creating the load balancer for testing and you don’t have access to a DNS name you can redirect, a workaround is to set the virtual host in the listener configuration to the load balancer’s canonical host name; you can then use the canonical host name directly for the inbound service URL, e.g.

resource "opc_lbaas_listener" "listener1" { ... virtual_hosts = [ "${opc_lbaas_load_balancer.lb1.canonical_host_name}" ] ... } Configuring the Load Balancer for HTTPS

There are two separate aspects to configuring the Load Balancer for HTTPS traffic: the first is enabling inbound HTTPS requests to the Load Balancer, often referred to as SSL/TLS termination or offloading; the second is the use of HTTPS for traffic between the Load Balancer and the servers in the origin server pool.

HTTPS SSL/TLS Termination

To configure the Load Balancer listener to accept inbound HTTPS requests for encrypted traffic between the client and the Load Balancer, create a Server Certificate providing the PEM encoded certificate and private key, and the concatenated set of PEM encoded certificates for the CA certification chain.

resource "opc_lbaas_certificate" "cert1" { name = "server-cert" type = "SERVER" private_key = "${var.private_key_pem}" certificate_body = "${var.cert_pem}" certificate_chain = "${var.ca_cert_pem}" }

Now update the existing listener, or create a new one, for HTTPS:

resource "opc_lbaas_listener" "listener2" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "https-listener" balancer_protocol = "HTTPS" port = 443 certificates = ["${opc_lbaas_certificate.cert1.uri}"] virtual_hosts = ["mywebapp.example.com"] server_protocol = "HTTP" server_pool = "${opc_lbaas_server_pool.serverpool1.uri}" policies = [ "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}", ] }

Note that the server pool protocol is still HTTP; in this configuration, traffic is only encrypted between the client and the load balancer.

HTTP to HTTPS redirect

A common pattern required for many web applications is to ensure that any initial incoming requests over HTTP are redirected to HTTPS for secure site communication. To do this we can update the original HTTP listener we created above with a new redirect policy:

resource "opc_lbaas_policy" "redirect_policy" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "example_redirect_policy" redirect_policy { redirect_uri = "https://${var.dns_name}" response_code = 301 } } resource "opc_lbaas_listener" "listener1" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "http-listener" balancer_protocol = "HTTP" port = 80 virtual_hosts = ["mywebapp.example.com"] server_protocol = "HTTP" server_pool = "${opc_lbaas_server_pool.serverpool1.uri}" policies = [ "${opc_lbaas_policy.redirect_policy.uri}", ] } HTTPS between Load Balancer and Server Pool

HTTPS between the Load Balancer and Server Pool should be used if the server pool is accessed over the Public Internet, and can also be used for extra security when accessing servers within the Oracle Cloud Infrastructure over the private IP Network.

This configuration assumes the backend servers are already configured to serve their content over HTTPS.

To configure the Load Balancer to communicate securely with the backend servers, create a Trusted Certificate, providing the PEM encoded certificate and the CA certificate chain for the backend servers.

resource "opc_lbaas_certificate" "cert2" { name = "trusted-cert" type = "TRUSTED" certificate_body = "${var.cert_pem}" certificate_chain = "${var.ca_cert_pem}" }

Next, create a Trusted Certificate Policy referencing the Trusted Certificate:

resource "opc_lbaas_policy" "trusted_certificate_policy" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "example_trusted_certificate_policy" trusted_certificate_policy { trusted_certificate = "${opc_lbaas_certificate.cert2.uri}" } }

And finally, update the listener's server pool configuration to HTTPS, adding the trusted certificate policy:

resource "opc_lbaas_listener" "listener2" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "https-listener" balancer_protocol = "HTTPS" port = 443 certificates = ["${opc_lbaas_certificate.cert1.uri}"] virtual_hosts = ["mywebapp.example.com"] server_protocol = "HTTPS" server_pool = "${opc_lbaas_server_pool.serverpool1.uri}" policies = [ "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}", "${opc_lbaas_policy.trusted_certificate_policy.uri} ] } More Information

Speaking At DOAG 2018 Conference And IT Tage 2018

Randolf Geist - Wed, 2018-07-18 15:29
I will be speaking at the yearly DOAG conference in December as well as at the IT Tage in November. My talk will be "Oracle Optimizer System Statistics Update 2018" where I summarize the history and current state of affairs regarding System Statistics and I/O calibration in recent Oracle versions like 12c and 18c.

A Quick Look At What's New In Oracle JET v5.1.0

OTN TechBlog - Wed, 2018-07-18 12:11

On June 18th, the v5.1.0 release of Oracle JET was made available. It was the 25th consecutive on-schedule release for Oracle JET. Details on the release schedule are provided here in the FAQ.

As indicated by the release number, v5.1.0 is a minor release, aimed at tweaking and consolidating features throughout the toolkit. As in other recent releases, new features have been added to support development of composite components, following the Composite Component Architecture (CCA). For details, see the entry on the new Template Slots in Duncan Mills's blog. Also, take note of the new design time metadata, as described in the release notes.

Aside from the work done in the CCA area, the key new features and enhancements to be aware of in the release are listed below, sorted alphabetically:

  • oj-chart: New "data" attribute. Introduces new attributes, slots, and custom elements.
  • oj-film-strip: New "looping" attribute. Specifies filmstrip navigation behavior, bounded ("off") or looping ("page").
  • oj-form-layout: Enhanced content flexibility. Removes restrictions on the types of children allowed in the "oj-form-layout" component.
  • oj-gantt: New "dnd" attribute and "ojMove" event. Provides new support for moving tasks via drag and drop.
  • oj-label-value: New component. Provides enhanced layout flexibility for the "oj-form-layout" component.
  • oj-list-view: Enhanced "itemTemplate" slot. Supports including the <LI> element in the template.
  • oj-swipe-actions: New component. Provides a declarative way to add swipe-to-reveal functionality to items in the "oj-list-view" component.

For all the details on the items above, see the release notes.

Note: Be aware that in Oracle JET 7.0.0, support for Yeoman and Grunt will be removed from generator-oraclejet and ojet-cli. As a consequence, the ojet-cli will be the only way to use the Oracle JET tooling, e.g., to create new Oracle JET projects from that point on. Therefore, if you haven't transferred from using Yeoman and Grunt to ojet-cli yet, e.g., to command line calls such as "ojet create", take some time to move in that direction before the 7.0.0 release.

As always, your comments and constructive feedback are welcome. If you have questions, or comments, please engage with the Oracle JET Community in the Discussion Forums and also follow @OracleJET on Twitter.

For organizations using Oracle JET in production, you're invited to be highlighted on the Oracle JET site, with the latest addition being a brand new Customer Success Story by Capgemini.

On behalf of the entire Oracle JET development team: "Happy coding!"

Critical Patch Update for July 2018 Now Available

Steven Chan - Wed, 2018-07-18 10:09

The Critical Patch Update (CPU) for July 2018 was released on July 17, 2018. Oracle strongly recommends applying the patches as soon as possible.

The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. 

Supported products that are not listed in the "Supported Products and Components Affected" Section of the advisory do not require new patches to be applied.

The Critical Patch Update Advisory is available at the following location:

It is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches.

The next four Critical Patch Update release dates are:

  • October 16, 2018
  • January 15, 2019
  • April 16, 2019
  • July 16, 2019
References Related Articles
Categories: APPS Blogs

Oracle Expands Challenger Series with Chicago Event at XS Tennis Village

Oracle Press Releases - Wed, 2018-07-18 10:00
Press Release
Oracle Expands Challenger Series with Chicago Event at XS Tennis Village Free event reaffirms Oracle’s commitment to providing unparalleled opportunities for American players

Redwood Shores, Calif.—Jul 18, 2018

Continuing its strong support for American tennis, Oracle announced today it is adding a Chicago event to the Oracle Challenger Series, to be held at XS Tennis Village September 2-9, 2018 in conjunction with the Association of Tennis Professionals (ATP) and the Women’s Tennis Association (WTA).

The Oracle Challenger Series launched earlier this year with events in Newport Beach and Indian Wells, California, with the mission of providing unparalleled opportunities for up-and-coming American tennis players to secure both ranking points and prize money.

As part of Oracle’s commitment to growing the game of tennis nationally, the Oracle Challenger Series will look to make a positive impact on the communities where its events are held by donating $5,000 to the local Chicago chapter of the National Junior Tennis and Learning (NJTL) network. The NJTL provides free or low-cost tennis and education programming to more than 225,000 under-resourced youth in the United States.

“We’re adding Challenger tournaments because American tennis players need more chances to compete at home and make a career out of the sport,’’ said Oracle CEO Mark Hurd. “Oracle also wants to improve the quality of tennis. We’re deeply committed to the sport and as part of the new event in Chicago, we’re providing assistance to create better access for Americans to play tennis.’’ 

“We are thrilled to welcome the Oracle Challenger Series to Chicago this September,” said Kamau Murray, President and CEO of XS Tennis Village and Executive Director of XS Tennis and Education Foundation. “We’re proud to work with Oracle on this great event and support the incredible work that they do to promote American tennis at all levels of the game. Their commitment to the sport goes hand-in-hand with our mission at XS to provide a positive pathway to future success through tennis.”

The Chicago tournament will be a joint ATP Challenger Tour/WTA 125K Series event and pay equal prize money ($150,000 per Tour) for a total of $300,000. Both the women’s and men’s draws will consist of 32 singles players, 16 qualifying players and 16 doubles teams. The event will be free and open to the public.

The 2018-2019 Oracle Challenger Series will begin in Chicago, with additional events to be added at a later date. The Series will culminate at the 2019 BNP Paribas Open, the largest ATP World Tour and WTA combined two-week event in the world, held annually at the Indian Wells Tennis Garden, where the two American women and men who accumulate the most points over the course of the Series will receive wildcards into their respective singles main draws.

The Oracle Challenger Series builds on Oracle’s commitment to help support U.S. tennis at both the professional and collegiate level. Oracle sponsors the Oracle US Tennis Awards, two $100,000 grants awarded annually to assist young players as they transition from college into the professional ranks. In addition to sponsoring the Intercollegiate Tennis Association rankings, Oracle also hosts the Oracle ITA Masters tournament in Malibu, California and the Oracle ITA National Fall Championships which will be held at the Surprise Tennis Center in Surprise, Arizona in 2018.

For more information about the Oracle Challenger Series, visit www.oraclechallengerseries.com.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Triggers on materialized views

Tom Kyte - Wed, 2018-07-18 07:26
Are triggers on materialized views supported by oracle? If so, is a good practice to use them?
Categories: DBA Blogs

IN & EXISTS

Tom Kyte - Wed, 2018-07-18 07:26
Tom: can you give me some examples of situations in which IN is better than EXISTS, and vice versa.
Categories: DBA Blogs

Initializing Contexts after getting a connection from a ConnectionPool

Tom Kyte - Wed, 2018-07-18 07:26
I understand best practice is to initialize a connection from a connection pool by clearing the Application Context. However,there are multiple namespaces. Should every namespace be cleared? How does one find the name of every namespace?
Categories: DBA Blogs

DDL for objects

Tom Kyte - Wed, 2018-07-18 07:26
Hi, I wanted to get the DDL for all objects in a database schema. I'm aware of DBMS_METADATA.GET_DDL to get the DDL from a PL/SQL block but was facing the below issue: The return type is HUGECLOB and I need it in a directly viewable form simil...
Categories: DBA Blogs

Oracle Database Performance tuning using Application Developer ( Application user)

Tom Kyte - Wed, 2018-07-18 07:26
This question is from the context when App-server and Oracle Database is hosted on a vendor cloud. They are in a separate container and application will be in a separate container. Example, App-server and DB on Oracle cloud and GUI on customer's ...
Categories: DBA Blogs

V5 Systems Makes Cities Safer with Oracle Cloud

Oracle Press Releases - Wed, 2018-07-18 07:00
Press Release
V5 Systems Makes Cities Safer with Oracle Cloud Outdoor Industrial IoT platform company uses Oracle Cloud Infrastructure to quickly scale up security while dramatically reducing costs

Redwood Shores, Calif.—Jul 18, 2018

To help make cities safer quickly, V5 Systems has implemented Oracle Cloud Infrastructure as part of its security solution. As the pioneer of the world’s first self-powered outdoor security and computing platform, V5 Systems helps cities around the world address critical security issues.

Nestled in Silicon Valley, the City of Hayward was experiencing theft and drug crime around City Hall due to open areas and its close proximity to the main rail transportation for the Bay Area. They wanted to add in video surveillance as an added security layer but to do so needed access to power and communications. There was no fixed power or communications infrastructure where crime was happening and City Hall had just been renovated so trenching was not an option. The City of Hayward was able to implement V5 Systems’ portable video surveillance in less than 30 minutes per unit. Hayward avoided nearly $1 million in trenching fees and 911 calls dropped 60 percent within the first three months of deployment.

V5 Systems needed a cloud provider that met its needs of delivering real-time security to its customers. After reviewing a number of major cloud providers, V5 Systems discovered that although costs of most providers initially appeared low, the data retrieval and transmission costs critical to a video monitoring solution were high. With the enterprise-grade performance of Oracle Cloud Infrastructure, V5 Systems can scale-up any individual deployment if the processing and storage requirements of a security unit or customer increase, as well as scale-out capacity to serve additional customers as their needs grow. The company can better control its costs, and in turn, offer more affordable solutions to its end-customers.

“Our customers need consistent access to our service, and regularly monitor video, so outbound data performance and cost is important,” said Steve Yung, CEO, V5 Systems. “Traditionally video and sensor information has to run through multiple channels before first responders are notified. At a critical time, this delay could make a huge impact on the outcome of the security situation. The performance Oracle delivers has a significant impact on the outbound data so response time for our customers can be faster.”

V5 Systems rapidly enables and supports outdoor Industrial IoT applications. Its customers rely on V5 Systems’ mobile alerts to warn of threats and potential issues in real-time through 24/7 video analytics, AI-driven acoustic gunshot sensors and chemical detection. Several of its customers are using its application built on Oracle and V5 is in the process of transitioning other customers. Leveraging the flexibility and agility of the cloud, V5 Systems is also actively building tools that will spin up customized portals for new customers in minutes, significantly cutting down the traditionally lengthy process of configuring security.

“You can’t put a price on safety. V5 Systems has engineered a revolutionary power system that allows the ability to deploy sophisticated systems and computing systems wirelessly, in any outdoor environment,” said Kash Iftikhar, vice president of product and strategy, Oracle Cloud Infrastructure. “By leveraging Oracle Cloud Infrastructure, V5 Systems is able to deliver the reliability its customers need by conducting analytics and monitoring at the edge in record time so its customers can feel safer; all while achieving significant cost savings for its business.”

Contact Info
Danielle Tarp
Oracle
+1.650.506.2904
danielle.tarp@oracle.com
Quentin Nolibois
Burson-Marsteller PR for Oracle
+1.415.591.4097
quentin.nolibois@bm.com
About V5 Systems

V5 Systems is a ​California-based ​technology company that provides ​leading-edge ​portable, wireless, self-powered outdoor computing and security solutions for Industrial IoT applications. They deliver turnkey video surveillance and gunshot detection solutions that can be deployed in under 30 minutes per unit, while the computing platform itself can act as a host for 3rd party hardware and software integration. These solutions utilize a proprietary power management system which eliminates the need for fixed power and hard-wired communications. V5 Systems develops and optimizes all software and AI analytics to run at the edge, which is instrumental to delivering real-time information to its users. Working with state, local government, education and private enterprises V5 Systems delivers the next generation of Industrial IoT security and computing solutions to the outdoors.

About Oracle Cloud Infrastructure

Oracle Cloud Infrastructure combines the benefits of public cloud (on-demand, self-service, scalability, pay-for-use) with those benefits associated with on-premises environments (governance, predictability, control) into a single offering. Oracle Cloud Infrastructure takes advantage of a high-scale, high-bandwidth network that connects cloud servers to high-performance local, block, and object storage to deliver a cloud platform that yields the highest performance for traditional and distributed applications, as well as highly available databases.

With the acquisitions of Dyn and Zenedge, Oracle Cloud Infrastructure extended its offering to include Dyn’s best-in-class DNS and email delivery solutions and Zenedge’s next-generation Web Application Firewall (WAF) and Distributed Denial of Service (DDoS) capabilities.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Danielle Tarp

  • +1.650.506.2904

Quentin Nolibois

  • +1.415.591.4097
