Feed aggregator

JET Line Chart - Step Handling

Tom Kyte - Fri, 2018-08-24 17:26
I have a problem generating the vertical lines in a line chart. E.g. take this query: with nums as ( select rownum as rnum from dual connect by rownum < 300) select rnum/9 as x, sin(2*rnum/30) as y from nums In the X-Axis, my tick mar...
Categories: DBA Blogs

Deploying SQL Server on MiniShift / RedHat OpenShift

Yann Neuhaus - Fri, 2018-08-24 06:19

We are starting to see customers adopting containerization for SQL Server databases (mainly driven by CI/CD and DevOps trends). A lot of them are using RedHat OpenShift as their container management platform. For my part, I didn't want to set up a complete OpenShift infrastructure on my lab just to test my SQL Server pod deployment on such an infrastructure. I installed MiniShift instead, which comes with a one-node OpenShift cluster and perfectly meets my requirements.

 


 

I’ll be running MiniShift on my Windows 10 laptop, using Hyper-V as the hypervisor. I used the following MiniShift configuration settings. Just remember that SQL Server’s memory requirement is 2GB, so I increased the default value to 6GB to be more comfortable running my SQL Server pod. I also moved the MiniShift default folder location to another disk.

[dab@DBI-LT-DAB:#]> minishift config set vm-driver hyperv
[dab@DBI-LT-DAB:#]> minishift config set hyperv-virtual-switch Internet
[dab@DBI-LT-DAB:#]> minishift config set memory 6GB
$env:MINISHIFT_HOME="T:\minishift\"

 

Let’s start MiniShift:

[dab@DBI-LT-DAB:#]> minishift start
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'hyperv' is supported on this platform ... OK
-- Checking if Hyper-V driver is installed ... OK
-- Checking if Hyper-V driver is configured to use a Virtual Switch ...
   'Internet' ... OK
-- Checking if user is a member of the Hyper-V Administrators group ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'hyperv' hypervisor ...
-- Starting Minishift VM ................................................ OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
   Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
   Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 10% used OK
-- OpenShift cluster will be configured with ...
   Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ... OK
-- Starting OpenShift cluster ...........
Deleted existing OpenShift container
Using nsenter mounter for OpenShift volumes
Using public hostname IP 192.168.0.17 as the host IP
Using 192.168.0.17 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.

The server is accessible via web console at:

https://192.168.0.17:8443

…

 

I also needed to configure the docker and oc environments to access them from my PowerShell console.

& minishift docker-env | Invoke-Expression
& minishift oc-env | Invoke-Expression

 

Configuration done. Now let’s create my first project:

[dab@DBI-LT-DAB:#]> oc new-project mssqlserver --description="mssqlserver deployment on Minishift" --display-name="mssqlserver project"
Now using project "mssqlserver" on server "https://192.168.0.17:8443".

 

Let’s get a list of existing projects:

[dab@DBI-LT-DAB:#]> oc projects
You have access to the following projects and can switch between them with 'oc project <projectname>':

  * mssqlserver - mssqlserver project
    myproject - My Project

Using project "mssqlserver" on server "https://192.168.0.17:8443".

 

I will need to use an OpenShift private registry for my tests:

[dab@DBI-LT-DAB:#]> minishift openshift registry
172.30.1.1:5000

 

My OpenShift registry contains the following images by default:

[dab@DBI-LT-DAB:#]> docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
openshift/origin-web-console       v3.9.0              aa12a2fc57f7        8 weeks ago         495MB
openshift/origin-docker-registry   v3.9.0              8e6f7a854d66        8 weeks ago         465MB
openshift/origin-haproxy-router    v3.9.0              448cc9658480        8 weeks ago         1.28GB
openshift/origin-deployer          v3.9.0              39ee47797d2e        8 weeks ago         1.26GB
openshift/origin                   v3.9.0              4ba9c8c8f42a        8 weeks ago         1.26GB
openshift/origin-pod               v3.9.0              6e08365fbba9        8 weeks ago         223MB

 

For my tests, I picked my custom dbi services image for SQL Server used for our DMK maintenance tool. The next steps consisted of building, tagging and uploading the corresponding image to my OpenShift integrated registry. Image tagging follows the [registry_ip]:[port]/[project]/[image]:[tag] pattern:

[dab@DBI-LT-DAB:#]> docker tag dbi/dbi_linux_sql2017:2017-CU4 172.30.1.1:5000/mssqlserver/dbi_linux_sql2017:2017-CU4
[dab@DBI-LT-DAB:#]> docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
172.30.1.1:5000/mssqlserver/dbi_linux_sql2017   2017-CU4            d37ecabe87e8        22 minutes ago      1.42GB
dbi/dbi_linux_sql2017                     2017-CU4            d37ecabe87e8        22 minutes ago      1.42GB

[dab@DBI-LT-DAB:#]> docker push 172.30.1.1:5000/mssqlserver/dbi_linux_sql2017:2017-CU4
The push refers to a repository [172.30.1.1:5000/mssqlserver/dbi_linux_sql2017]
2e3c7826613e: Pushed
66ccaff0cef8: Pushed
…

 

My custom image is now available as an image stream on OpenShift.

Let’s first try to deploy my SQL Server pod from the mssqlserver project through the web console. The task is easy: choose deployment from an image, then search for the corresponding image available as an image stream in your OpenShift integrated registry. In my case, the deployment went fine after configuring some environment variable values.

[Image: mssql pod]

[Image: mssql deployment variables]

From the web console you have access to the pod logs. In my case, they correspond to the SQL Server error log during the startup phase. My custom image includes the creation of a custom dbi_tools database as well as the installation of the tSQLt framework.
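The same logs are available from the command line with oc (a quick sketch, using the pod name shown further below):

[dab@DBI-LT-DAB:#]> oc logs dbi-linux-sql2017-1-9vvfw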

[Image: mssql logs]

The final step consists of exposing the SQL Server pod to the outside world (it is not exposed by default):

[dab@DBI-LT-DAB:#]> oc get pod
NAME                        READY     STATUS    RESTARTS   AGE
dbi-linux-sql2017-1-9vvfw   1/1       Running   0          1h

C:\Users\dab\Desktop
[dab@DBI-LT-DAB:#]> oc port-forward dbi-linux-sql2017-1-9vvfw 1433:1433

 

Let’s try a connection from mssql-cli tool:

[dab@DBI-LT-DAB:$]> mssql-cli -S 127.0.0.1 -U sa -P Password1
Version: 0.15.0
Mail: sqlcli@microsoft.com
Home: http://github.com/dbcli/mssql-cli
master> select *
....... from sys.dm_os_host_info;
+-----------------+---------------------+----------------+---------------------------+------------+-----------------------+
| host_platform   | host_distribution   | host_release   | host_service_pack_level   | host_sku   | os_language_version   |
|-----------------+---------------------+----------------+---------------------------+------------+-----------------------|
| Linux           | Ubuntu              | 16.04          |                           | NULL       | 0                     |
+-----------------+---------------------+----------------+---------------------------+------------+-----------------------+
(1 row affected)
Time: 0.405s
master>

 

Done!

This is my first deployment, but we can do better here. Indeed, in my previous scenario, I didn’t set up a persistent volume to host my database files, nor did I use OpenShift secrets to protect my credentials. Let’s do it!

Let’s first create a persistent volume. The developer user doesn’t have permission to manage volumes on the cluster, so let’s switch to the system user:

[dab@DBI-LT-DAB:#]> oc login -u system:admin
Logged into "https://192.168.0.17:8443" as "system:admin" using existing credentials.

 

OpenShift runs on top of K8s, which is object-oriented. Objects can be deployed from deployment files as well, and this is definitely my favorite path currently for many reasons. I configured both PersistentVolume and PersistentVolumeClaim objects in a deployment file as follows. Note the hostPath value corresponds to a local path in the MiniShift cluster I set up in a previous step.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-data-sql
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: slow
  hostPath:
    path: /mnt/sda1/var/lib/minishift/openshift.local.pv/pv0069
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-data-sql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: slow
  selector:
    name: pv-data-sql

 

Let’s deploy both my persistent volume and persistent volume claim …

[dab@DBI-LT-DAB:#]> oc create -f .\docker_openshift_storage.yaml
persistentvolume "pv-data-sql" created
persistentvolumeclaim "pv-claim-data-sql" created

 

… and get the status of my persistent volume deployment:

[dab@DBI-LT-DAB:#]> oc get pvc
NAME                STATUS    VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-claim-data-sql   Bound     pv-data-sql   5Gi        RWO            hostpath       1m
[dab@DBI-LT-DAB:#]> oc get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                           STORAGECLASS   REASON    AGE
pv-data-sql   5Gi        RWO            Retain           Bound       mssqlserver/pv-claim-data-sql   hostpath                 1m
…

 

The “Bound” status indicates that everything is ok.

Let’s continue by adding an OpenShift secret from a deployment file:

apiVersion: v1
kind: Secret
metadata:
  name: mssql-env
stringData:
  MSSQL_SA_PASSWORD: Password1

[dab@DBI-LT-DAB:#]> oc create -f .\docker_openshift_mssql_secret.yaml
secret "mssql-env" created
C:\Users\dab\Desktop
[dab@DBI-LT-DAB:#]> oc get secret
NAME                       TYPE                                  DATA      AGE
…
mssql-env                  Opaque                                1         1h

 

At this step, there are different ways to deploy a pod, so I finally used a deployment configuration file as follows:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: mssql
  name: dbi-linux-sql2017
  namespace: mssqlserver
spec:
  replicas: 1
  selector:
    app: mssql
    deploymentconfig: dbi-linux-sql2017
  strategy:
    type: Rolling
  template:
    metadata:
      labels:
        app: mssql
        deploymentconfig: dbi-linux-sql2017
    spec:
      containers:
        - env:
            - name: ACCEPT_EULA
              value: 'Y'
            - name: DMK
              value: 'Y'
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: MSSQL_SA_PASSWORD
                  name: mssql-env
          envFrom:
            - secretRef:
                name: mssql-env
          image: 
            172.30.1.1:5000/mssqlserver/dbi_linux_sql2017:2017-CU4
          imagePullPolicy: Always
          name: dbi-linux-sql2017
          ports:
            - containerPort: 1433
              protocol: TCP
          volumeMounts:
            - mountPath: /var/opt/mssql/
              name: volume-x1d5y
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      volumes:
        - name: volume-x1d5y
          persistentVolumeClaim:
            claimName: pv-claim-data-sql
  triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
          - dbi-linux-sql2017
        from:
          kind: ImageStreamTag
          name: 'dbi_linux_sql2017:2017-CU4'
          namespace: mssqlserver
      type: ImageChange

 

… to deploy my SQL Server pod:

[dab@DBI-LT-DAB:#]> oc create -f .\deployment-config-mssql.yml
deploymentconfig "dbi-linux-sql2017" created

 

Once again, I exposed the corresponding service port to connect from my laptop, and the connection to my SQL Server pod was successful again. Note that the pod is different from the first time: updating my configuration led K8s to spin up another container in this case.

[dab@DBI-LT-DAB:#]> oc get pod
NAME                        READY     STATUS    RESTARTS   AGE
dbi-linux-sql2017-1-9ddfbx   1/1       Running   0          1h

C:\Users\dab\Desktop
[dab@DBI-LT-DAB:#]> oc port-forward dbi-linux-sql2017-1-9ddfbx 1433:1433
Forwarding from 127.0.0.1:1433 -> 1433

 

Finally, let’s take a look at the MiniShift cluster storage layer to get a picture of my SQL Server database files, including data, log and secrets, under /var/opt/mssql:

[dab@DBI-LT-DAB:#]> minishift ssh

[root@minishift ~]# ll /mnt/sda1/var/lib/minishift/openshift.local.pv/pv0069/
total 0
drwxr-xr-x. 2 root root 202 Aug 23 15:07 data
drwxr-xr-x. 2 root root 232 Aug 23 15:18 log
drwxr-xr-x. 2 root root  25 Aug 23 15:06 secrets

 

I was quick on some topics in this write-up that probably deserve to be dug into further, and there are other ones to investigate. I will get other opportunities to share my thoughts on them in the context of SQL Server database scenarios. Stay tuned!

The article Deploying SQL Server on MiniShift / RedHat OpenShift appeared first on Blog dbi services.

Error Logging

Jonathan Lewis - Fri, 2018-08-24 05:19

Error logging is a topic that I’ve mentioned a couple of times in the past, most recently as a follow-up in a discussion of the choices for copying a large volume of data from one table to another, but originally in an addendum about a little surprise you may get when you use extended strings (max_string_size = EXTENDED).

If you use the default call to dbms_errlog.create_error_log() to create an error logging table then Oracle will create a table with a few columns of its own plus every column (name) that you have in your original table – but it will create your columns as varchar2(4000), or nvarchar2(2000), or raw(2000) – unless you’ve set max_string_size to extended. Here’s a simple demo script with results from two different systems, one with the default setting, the other with the extended setting (note there’s a little inconsistency in the handling of raw() columns).


rem
rem     Script:         log_errors_min.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jun 2018
rem     Purpose:
rem

create table t1 (
        v1      varchar2(10),
        n1      number(2,0),
        d1      date,
        nv1     nvarchar2(10),
        r1      raw(10)
);


execute dbms_errlog.create_error_log('t1')

desc err$_t1


max_string_size = STANDARD
--------------------------
 Name			       Null?	Type
 ----------------------------- -------- --------------------
 ORA_ERR_NUMBER$			NUMBER
 ORA_ERR_MESG$				VARCHAR2(2000)
 ORA_ERR_ROWID$ 			ROWID
 ORA_ERR_OPTYP$ 			VARCHAR2(2)
 ORA_ERR_TAG$				VARCHAR2(2000)
 V1					VARCHAR2(4000)
 N1					VARCHAR2(4000)
 D1					VARCHAR2(4000)
 NV1					NVARCHAR2(2000)
 R1					RAW(2000)


max_string_size = EXTENDED
--------------------------
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ORA_ERR_NUMBER$                        NUMBER
 ORA_ERR_MESG$                          VARCHAR2(2000)
 ORA_ERR_ROWID$                         ROWID
 ORA_ERR_OPTYP$                         VARCHAR2(2)
 ORA_ERR_TAG$                           VARCHAR2(2000)
 V1                                     VARCHAR2(32767)
 N1                                     VARCHAR2(32767)
 D1                                     VARCHAR2(32767)
 NV1                                    NVARCHAR2(16383)
 R1                                     RAW(32767)

Every single “original” column that appears in the extended version of this table will be stored as a LOB, with an inline LOB locator of 30 or more bytes. (At least, that’s the 12.1.0.2 implementation, I haven’t checked for 12.2 or 18.3).

If this is going to be a problem (e.g. you have a table defined with 500 columns but only use 120 of them) you can create a minimalist error logging table. Provided you create it with the ora_err% columns suitably defined you can add only those columns you’re really interested in (or feel threatened by), and you don’t have to declare them at extreme lengths. e.g.


create table err$_special (
        ora_err_number$         number,
        ora_err_mesg$           varchar2(2000),
        ora_err_rowid$          rowid,
        ora_err_optyp$          varchar2(2),
        ora_err_tag$            varchar2(2000),
        n1                      varchar2(128)
)
;

insert into t1 values(1,'abc','02-jan-1984',sys_op_c2c('abc'),hextoraw('0xFF')) 
log errors into err$_special
reject limit unlimited
;

execute print_table('select * from err$_special')


ORA_ERR_NUMBER$               : 1722
ORA_ERR_MESG$                 : ORA-01722: invalid number

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
N1                            : abc


If you try to create an error logging table that doesn’t include the 5 critical columns you’ll see Oracle error “ORA-38900: missing mandatory column “ORA_ERR_{something}” of error log table “{your logging table name}”” when you try to log errors into it. Moreover, the 5 critical columns have to be the first 5 columns (in any order) in the table, otherwise you’ll get Oracle error ORA-38901, again complaining about column “ORA_ERR_{something}$” of table “{your logging table name}”, when you try to log errors into it.
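As a quick illustration of the first case, here’s a minimal sketch (not part of the original demo; the table name err$_bad is made up) that omits one mandatory column and then tries to use the table:

rem     Hypothetical demo: err$_bad omits the mandatory ora_err_tag$ column.

create table err$_bad (
        ora_err_number$         number,
        ora_err_mesg$           varchar2(2000),
        ora_err_rowid$          rowid,
        ora_err_optyp$          varchar2(2),
        n1                      varchar2(128)
);

insert into t1 values(1,'abc','02-jan-1984',sys_op_c2c('abc'),hextoraw('0xFF'))
log errors into err$_bad
reject limit unlimited;

rem     Expected: ORA-38900: missing mandatory column "ORA_ERR_TAG$" of error log table "ERR$_BAD"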

Purpose of Mockups and different elements

Nilesh Jethwa - Thu, 2018-08-23 23:07

The Elements and Purpose of Mockups So you’re done with the tedious process of defining and creating the structure of your website in that design part called wireframing. The stakeholders are quite impressed. Now, you’re on to the next step … Continue reading

Hat Tip To: MockupTiger Wireframes

RMAN: Synchronize standby database using production archivelog backupset

Michael Dinh - Thu, 2018-08-23 22:07

If you have not read RMAN: Synchronize standby database using production archivelog, then please do so.

# Primary archivelogs are on local (not shared) storage.
# Primary RMAN archivelog backupsets reside on a folder shared with the standby.
# A full backup is performed once per day and includes archivelogs with format arch_DB02_`date '+%Y%m%d'`
# MANAGED REAL TIME APPLY is running.
PRI: /shared/prod/DB02/rman/
SBY: /shared/backup/arch/DB02a/

#!/bin/sh -e
# Michael Dinh: Aug 21, 2018
# RMAN sync standby using production archivelog backupset
#
. ~/working/dinh/dinh.env
. ~/working/dinh/DB02a.env
sysresv|tail -1
set -x
# List production archivelog backupset for current day
ls -l /shared/prod/DB02/rman/arch_DB02_`date '+%Y%m%d'`*
# Copy production archivelog backupset for current day to standby
cp -ufv /shared/prod/DB02/rman/arch_DB02_`date '+%Y%m%d'`* /shared/backup/arch/DB02a
rman msglog /tmp/rman_sync_standby.log > /dev/null << EOF
set echo on;
connect target;
show all;
# Catalog production archivelog backupset from standby
catalog start with '/shared/backup/arch/DB02a' noprompt;
# Restore production archivelog backupset to standby
restore archivelog from time 'trunc(sysdate)-1';
exit
EOF
sleep 15m
# Verify Media Recovery Log from alert log
tail -20 $ORACLE_BASE/diag/rdbms/$ORACLE_UNQNAME/$ORACLE_SID/trace/alert_$ORACLE_SID.log
exit
$ crontab -l
00 12 * * * /home/oracle/working/dinh/rman_sync_standby.sh > /tmp/rman_sync_standby.sh.out 2>&1

$ ll /tmp/rman*
-rw-r--r--. 1 oracle oinstall 7225 Aug 22 12:01 /tmp/rman_sync_standby.log
-rw-r--r--. 1 oracle oinstall 4318 Aug 22 12:16 /tmp/rman_sync_standby.sh.out

+ tail -20 /u01/app/oracle/diag/rdbms/DB02a/DB02a2/trace/alert_DB02a2.log
ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  
ORA-1153 signalled during: ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  ...
Tue Aug 21 15:41:27 2018
Using STANDBY_ARCHIVE_DEST parameter default value as USE_DB_RECOVERY_FILE_DEST
Tue Aug 21 15:54:30 2018
db_recovery_file_dest_size of 204800 MB is 21.54% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Aug 22 12:01:21 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31636.1275.984830461
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31637.1276.984830461
Wed Aug 22 12:01:46 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31638.1278.984830487
Wed Aug 22 12:01:58 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31639.1277.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31640.1279.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31641.1280.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31642.1281.984830489
Media Recovery Waiting for thread 1 sequence 31643
+ exit

# Manual check of recovery: MRP0 shows WAIT_FOR_LOG with BLOCK#=0, which never increments.
SQL> r
  1  select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where 1=1
  4  and status not in ('CLOSING','IDLE','CONNECTED')
  5  order by status desc, thread#, sequence#
  6*

                        CLIENT                                               DELAY
     PID  INST  THREAD# PROCESS    PROCESS  STATUS       SEQUENCE#   BLOCK#   MINS
-------- ----- -------- ---------- -------- ------------ --------- -------- ------
   94734     2        1 N/A        MRP0     WAIT_FOR_LOG     31643        0      0

SQL>
$ cat /tmp/rman_sync_standby.log 

Recovery Manager: Release 11.2.0.4.0 - Production on Wed Aug 22 12:00:58 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> 
echo set on

RMAN> connect target;
connected to target database: DB02 (DBID=1816794213, not open)

RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name DB02A are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 7;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DEVICE TYPE 'SBT_TAPE' BACKUP TYPE TO BACKUPSET PARALLELISM 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/u01/app/oracle/product/11.2.0/dbhome_1/lib/libddobk.so, ENV=(STORAGE_UNIT=dd-u99,BACKUP_HOST=dd860.ccx.carecentrix.com,ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/db/11.2.0.4/dbs/snapcf_DB02a2.f'; # default

RMAN> catalog start with '/shared/backup/arch/DB02a' noprompt;
searching for all files that match the pattern /shared/backup/arch/DB02a

List of Files Unknown to the Database
=====================================
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1

RMAN> restore archivelog from time 'trunc(sysdate)-1';
Starting restore at 22-AUG-2018 12:01:00
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=285 instance=DB02a2 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=3 instance=DB02a2 device type=DISK

archived log for thread 1 with sequence 31630 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31630.496.984755257
archived log for thread 1 with sequence 31631 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31631.497.984755273
archived log for thread 1 with sequence 31632 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31632.498.984755273
archived log for thread 1 with sequence 31633 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31633.499.984755275
archived log for thread 1 with sequence 31634 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31634.500.984755275
archived log for thread 1 with sequence 31635 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31635.501.984755275
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=31636
channel ORA_DISK_1: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31637
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1
channel ORA_DISK_1: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1 tag=TAG20180822T110121
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=31638
channel ORA_DISK_1: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1 tag=TAG20180822T110121
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:25
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31639
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1 tag=TAG20180822T113906
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31640
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31641
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1 tag=TAG20180822T113906
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31642
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1 tag=TAG20180822T113906
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1 tag=TAG20180822T110121
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:17
Finished restore at 22-AUG-2018 12:01:44

RMAN> exit
$ cat /tmp/rman_sync_standby.sh.out 
ORACLE_SID = [oracle] ? The Oracle base has been set to /u01/app/oracle
Oracle Instance alive for sid "DB02a2"
CURRENT_INSTANCE=DB02a2
ORACLE_UNQNAME=DB02a
OTHER_INSTANCE=DB02a3,DB02a4
ORACLE_SID=DB02a2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/db/11.2.0.4
NLS_DATE_FORMAT=DD-MON-YYYY HH24:MI:SS
Oracle Instance alive for sid "DB02a2"
++ date +%Y%m%d
+ ls -l /shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1
-rw-r-----. 1 oracle dba 1900124160 Aug 22 11:02 /shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1
-rw-r-----. 1 oracle dba 1938098176 Aug 22 11:02 /shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1
-rw-r-----. 1 oracle dba 1370842112 Aug 22 11:01 /shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1
-rw-r-----. 1 oracle dba   11870720 Aug 22 11:39 /shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1
-rw-r-----. 1 oracle dba       3584 Aug 22 11:39 /shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1
-rw-r-----. 1 oracle dba       3072 Aug 22 11:39 /shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1
++ date +%Y%m%d
+ cp -ufv /shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1 /shared/backup/arch/DB02a
‘/shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1’
+ rman msglog /tmp/rman_sync_standby.log
+ sleep 15m
+ tail -20 /u01/app/oracle/diag/rdbms/DB02a/DB02a2/trace/alert_DB02a2.log
ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  
ORA-1153 signalled during: ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  ...
Tue Aug 21 15:41:27 2018
Using STANDBY_ARCHIVE_DEST parameter default value as USE_DB_RECOVERY_FILE_DEST
Tue Aug 21 15:54:30 2018
db_recovery_file_dest_size of 204800 MB is 21.54% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Aug 22 12:01:21 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31636.1275.984830461
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31637.1276.984830461
Wed Aug 22 12:01:46 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31638.1278.984830487
Wed Aug 22 12:01:58 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31639.1277.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31640.1279.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31641.1280.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31642.1281.984830489
Media Recovery Waiting for thread 1 sequence 31643
+ exit

New Zealand: “Oracle Indexing Internals and Best Practices” Seminars November 2018.

Richard Foote - Thu, 2018-08-23 17:37
Good news for those of you in beautiful New Zealand. Due to popular demand, I’ll be returning to run some of my acclaimed “Oracle Indexing Internals and Best Practices” seminars in November 2018. The dates and events are: Wellington: 19-20 November 2018: Registration Here or Buy Directly Here Auckland: 21-22 November 2018: Registration Here or […]
Categories: DBA Blogs

Oracle 18c DataGuard with Nologging Mode

Yann Neuhaus - Thu, 2018-08-23 15:04

With Oracle 18c, database nologging has been extended with two new modes: Standby Nologging for Load Performance and Standby Nologging for Data Availability. These modes provide better support for use in an Oracle Active Data Guard environment without significantly increasing the amount of redo generated. In the documentation we can find the following definitions:

FORCE LOGGING mode prevents any load operation from being performed in a nonlogged manner. This can slow down the load process because the loaded data must be copied into the redo logs.

STANDBY NOLOGGING FOR DATA AVAILABILITY mode causes the load operation to send the loaded data to each standby through its own connection to the standby. The commit is delayed until all the standbys have applied the data as part of running managed recovery in an Active Data Guard environment.

STANDBY NOLOGGING FOR LOAD PERFORMANCE is similar to the previous mode except that the loading process can stop sending the data to the standbys if the network cannot keep up with the speed at which data is being loaded to the primary. In this mode it is possible that the standbys may have missing data, but each standby automatically fetches the data from the primary as a normal part of running managed recovery in an Active Data Guard environment.

In this blog I am testing the STANDBY NOLOGGING FOR LOAD PERFORMANCE mode, using two virtual machines.
This mode is enabled in the primary database using the following command:

SQL> ALTER DATABASE SET STANDBY NOLOGGING FOR LOAD PERFORMANCE;

Database altered.

SQL> select db_unique_name,force_logging from v$database;

DB_UNIQUE_NAME                 FORCE_LOGGING
------------------------------ ---------------------------------------
CONT18C_SITE                   STANDBY NOLOGGING FOR LOAD PERFORMANCE

We then build a Data Guard environment (steps not shown). Below is the Data Guard configuration.

DGMGRL> show configuration;

Configuration - CONT18C_DR

  Protection Mode: MaxPerformance
  Members:
  CONT18C_SITE  - Primary database
    CONT18C_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 3 seconds ago)

DGMGRL>

Now that the Data Guard environment is built, let’s do some nologging operations on the primary and see if they are replicated on the standby.

SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> create tablespace TBS_NOLOG datafile '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf' size 5M nologging;

Tablespace created.
SQL> create table testnlog nologging tablespace TBS_NOLOG as select * from hr.EMPLOYEES;

Table created.

SQL>

On the standby, when we run the query we get the following errors:

SQL> select db_unique_name,force_logging,open_mode from v$database;

DB_UNIQUE_NAME  FORCE_LOGGING                           OPEN_MODE
--------------- --------------------------------------- --------------------
CONT18C_SITE1   STANDBY NOLOGGING FOR LOAD PERFORMANCE  READ ONLY WITH APPLY

SQL> show con_name

CON_NAME
------------------------------
PDB1


SQL> select count(*) from testnlog;
select count(*) from testnlog
       *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 13, block # 163)
ORA-01110: data file 13: '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

Why did the replication not work? In fact, in this licensing document https://docs.oracle.com/en/database/oracle/oracle-database/18/dblic/Licensing-Information.html#GUID-0F9EB85D-4610-4EDF-89C2-4916A0E7AC87

we can verify that the feature Oracle Data Guard—Automatic Correction of Non-logged Blocks at a Data Guard Standby Database is available only with Oracle Database Enterprise Edition on Engineered Systems (EE-ES) and in the Cloud.

So let’s simulate an Exadata system using the hidden parameter “_exadata_feature_on” on both servers:

SQL> alter system set "_exadata_feature_on"=true scope=spfile;

System altered.

After restarting both databases, let’s try again.
On the primary database:

SQL> drop table testnlog;

Table dropped.

SQL> create table testnlog nologging tablespace TBS_NOLOG as select * from hr.EMPLOYEES;

Table created.

SQL>

On the standby, we can see that the nologged data is now replicated:

SQL> select db_unique_name,force_logging,open_mode from v$database;

DB_UNIQUE_NAME  FORCE_LOGGING                           OPEN_MODE
--------------- --------------------------------------- --------------------
CONT18C_SITE1   STANDBY NOLOGGING FOR LOAD PERFORMANCE  READ ONLY WITH APPLY

SQL>

SQL> select count(*) from testnlog;

  COUNT(*)
----------
       107

SQL>

Conclusion:
With Oracle 18c, we can now configure a database in nologging mode while keeping the standby consistent. But this feature is only supported on the following editions:

  • Oracle Database Enterprise Edition on Engineered Systems (On-Premises, EE-ES): Oracle Database Enterprise Edition software installed on an on-premises engineered system (Oracle Exadata Database Machine or Oracle Database Appliance).
  • Oracle Database Cloud Service Enterprise Edition (Cloud, DBCS EE): Includes Oracle Database Enterprise Edition software.
  • Oracle Database Cloud Service Enterprise Edition – High Performance (Cloud, DBCS EE-HP): Includes Oracle Database Enterprise Edition software plus many Oracle Database options and Oracle management packs.
  • Oracle Database Cloud Service Enterprise Edition – Extreme Performance (Cloud, DBCS EE-EP): Includes Oracle Database Enterprise Edition software plus all Oracle Database options and Oracle management packs that are appropriate for use in Oracle Database Cloud Service.
  • Oracle Database Exadata Cloud Service (Cloud, ExaCS): Includes Oracle Database Enterprise Edition software plus all Oracle Database options and Oracle management packs that are appropriate for use in Oracle Database Exadata Cloud Service.

The licensing policies for ExaCS also apply to Oracle Database Exadata Cloud at Customer.

 

 

The article Oracle 18c DataGuard with Nologging Mode appeared first on Blog dbi services.

Chatbots: What time is it?

Luc Bors - Thu, 2018-08-23 09:24

Looker for OBIEE Experts: Introduction and Concepts

Rittman Mead Consulting - Thu, 2018-08-23 08:28

Recently I've been doing some personal study around various areas including streaming, machine learning and data visualization, and one of the tools that got my attention is Looker. I first heard about Looker on a Drill to Detail podcast and have increasingly been hearing about it at conferences and in use cases together with other cloud solutions like BigQuery, Snowflake and Fivetran.

I decided to give it a try myself and, since most of my career has been based on Oracle Business Intelligence (OBI), to write down a comparison between the tools that could help others with a similar background get introduced to Looker.

OBIEE's Golden Feature: The Semantic Model

As you probably know if you have been working with OBIEE for some time, the centrepiece of its architecture is the Semantic Model contained in the Repository (RPD).


In the three layers of the RPD, we model our source data (e.g. database tables) into attributes, metrics and hierarchies, which can then be easily dragged and dropped by the end-user in an analysis or data visualization.

I called the RPD "OBIEE's Golden Feature" because to me it's the main benefit of the platform: abstracting the data complexity from end-users and, at the same time, optimizing the query definition to take care of all the features that could be set in the datasource. The importance of the RPD also lies in its centrality: within traditional OBIEE, all Analyses and Dashboards had to be based on Subject Areas exposed by the RPD, meaning that metrics were defined in a single place in a consistent manner and then spread across all reporting, providing the unique source of truth for the company's important KPIs, typical of what Gartner calls Mode 1 Analytics.

RPD Development Speed Limitation and Mode 2 Analytics

The RPD is a centralized binary object within the OBIEE infrastructure: in order to develop and test it, a full OBIEE instance is required, and merges between different development streams are natively performed via the RPD's Admin Tool.

This complexity, combined with the deep knowledge required to correctly build a valid semantic model, limits the number of people able to create and publish new content, thus slowing down the process from data to insights that is typical of the centralized Mode 1 analytics platforms provided by IT teams. Moreover, RPD development is entirely point-and-click within the Admin Tool, which is somewhat considered slow and old-fashioned in a world of scripting, code versioning and git merging. Several solutions are out in the market (including the Rittman Mead Developer Toolkit) to enhance the agility of the development, but still the skills and the toolset required to develop new content make it a purely IT-managed solution.

In order to overcome this limitation, several tools like Tableau, QlikView or Oracle's Data Visualization (included in OAC or in the Desktop version) put all the power in the hands of the end-user: from data-sources to graphing, the tools allow an end-to-end data discovery to visualization journey. The problem with those tools (called Mode 2 Analytics by Gartner) is that there is no central definition of the KPI, since that is left to every analyst. All those tools address the problem by providing some sort of datasource certification, allowing a datasource to be visible and publicly reusable only when it's validated centrally. Again, for most of those tools, the modelling is done in a visual format, which makes it difficult to debug, version control and automate. I've been speaking about this subject in my presentation "DevOps and OBIEE do it before it's too late".

What if we could provide the same centralized source-of-truth data modelling with an easily scriptable syntax that can be developed by business users without any deep knowledge of SQL or source tables? Well, what we just described is LookML!

LookML

LookML takes the best part of OBIEE, the idea of a modelling layer, and democratizes it in order to be available to all business users with a simple language and set of concepts. Moreover, code versioning is embedded in the tool, so there's no need to teach git branch, commit, push or pull to non-IT people.

So, what are the concepts behind LookML and how can you get familiar with it when comparing it to the metadata modelling in the RPD?

LookML Concepts

Let's start from the basics of RPD modelling: a database table. In LookML each table is represented by an object called a View (the naming is a bit confusing). Moreover, LookML Views can be used not only to map existing database tables but also to create new tables based on existing content and a SQL definition, like the opaque views in OBIEE. On top of this, LookML allows the physicalization of those objects (into a table) and the definition of a schedule for the refresh. This concept is very useful when aggregates are needed: the aggregate definition (SQL) is defined within the LookML View together with the related refresh schedule.
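A minimal sketch of both cases, with hypothetical view, table and datagroup names:

# Hypothetical names throughout: a view mapping an existing table, and a
# derived-table view with its own refresh trigger (the aggregate case).
view: orders {
  sql_table_name: sales.orders ;;
}

view: daily_orders {
  derived_table: {
    sql: SELECT order_date, COUNT(*) AS order_count
         FROM sales.orders
         GROUP BY order_date ;;
    datagroup_trigger: daily_refresh  # rebuild when this datagroup fires
  }
}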


The View itself defines only the source, a bit like the RPD's Physical layer; the next step is defining how multiple Views interact with each other, or, in OBIEE terms, the Business Layer. In LookML there is an entity called an Explore, and it is the place where we define which Views we want to group together and what the linkage between them is. Multiple Explores are defined in a Model, which should be unique per database. So, in OBIEE words, a Model can be compared to a Business Model, with Explores being a subset of Facts and Dimensions grouped in a Subject Area.
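Again a minimal sketch with hypothetical names, this time for a model file declaring its connection and an Explore joining two Views:

# Hypothetical names: one model per database connection, with an Explore
# joining the orders and customers views.
connection: "sales_dwh"
include: "*.view"    # make the view files available to this model

explore: orders {
  join: customers {
    sql_on: ${orders.customer_id} = ${customers.id} ;;
    relationship: many_to_one
  }
}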


Ok, all "easy" so far, but where do we map the columns? and where do we set the aggregations? As you might expect both are mapped within a LookML View into Fields. Fields is a generic term which includes in both metrics and attributes, LookML naming is the below:

  • Dimension: in OBIEE's terms, an attribute of a dimension. The terminology is confusing since in LookML the Dimension is the column itself while in OBIEE terms it is the table. A Dimension can be a column value or a combination of multiple values (like OBIEE's BM Logical Source formulas). A Dimension in LookML can't have any aggregation (as in OBIEE).
  • Measure: in OBIEE's terms, a metric. The definition includes the source formula in SQL syntax, the type of aggregation (min/max/count...) and the drill fields.
  • Filter: this is not something usually defined in OBIEE's RPD; filters are a way of passing a user choice based on a column value back to a calculation formula, a bit like, for the OBIEE experts, overriding session variables with dashboard prompt values.
  • Parameter: again, this is not something usually defined in OBIEE's RPD; you can think of a Parameter as a way of setting up a variable function. E.g. a Parameter with values SUM, AVG, MIN, MAX could be used to change how a certain Measure is aggregated. All four field types appear in the sketch below.
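A minimal sketch, again with hypothetical names, showing the four field types inside a single view:

# Hypothetical names: a view containing a dimension, a measure,
# a filter and a parameter.
view: orders {
  sql_table_name: sales.orders ;;

  dimension: status {
    sql: ${TABLE}.status ;;
  }

  measure: total_amount {
    type: sum
    sql: ${TABLE}.amount ;;
    drill_fields: [status]
  }

  filter: date_filter {
    type: date
  }

  parameter: agg_type {
    type: unquoted
    allowed_value: { value: "SUM" }
    allowed_value: { value: "AVG" }
  }
}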

All good so far? Stick with me and in the future we'll explore more about LookML syntax and Looker in general!

Categories: BI & Warehousing

Oracle Innovation Lab Drives Digital Transformation in Construction and Engineering

Oracle Press Releases - Thu, 2018-08-23 07:00
Press Release
Oracle Innovation Lab Drives Digital Transformation in Construction and Engineering New facility empowers development of solutions shaping the future of project delivery

Deerfield, Ill.—Aug 23, 2018

Oracle Construction and Engineering today unveiled its Innovation Lab, a unique facility designed to accelerate efforts to help project- and asset-intensive organizations explore the latest technologies and drive digital transformation.

A simulated project worksite with integrated technologies, the Construction and Engineering Innovation Lab enables visitors to interact with leading-edge solutions, including connected devices, autonomous vehicles, drones, augmented reality, visualization, and artificial intelligence tools. By presenting these hands-on experiences within a simulated connected worksite, the Innovation Lab is able to powerfully bring to life the performance improvements and data insights these technologies can deliver.

“Advances in technology are reshaping the industry landscape at an accelerating pace. Our Innovation Lab enables organizations to experience firsthand how new technologies can help drive better project outcomes in critical areas such as safety, productivity and quality,” said Mike Sicilia, senior vice president and general manager of Oracle Construction and Engineering. “We are excited to welcome our current and prospective customers, partners, the academic community and others to our Innovation Lab to help them experience the future of projects.”    

The vision for the Innovation Lab, which is located outside Chicago in Deerfield, Ill., was shaped in part by input from several Oracle Construction and Engineering customers as well as technology providers. The Innovation Lab experiences will feature technologies that integrate with Oracle solutions to enable collaboration and unlock critical project intelligence to enhance outcomes and drive continuous improvement.

Technology providers that will participate in the Innovation Lab at launch include:

  • Assemble Systems, an Autodesk company, provides a SaaS solution that enables construction professionals to condition, query and connect BIM data to key workflows across bid management, estimating, scheduling, site management and finance.

  • Bosch is empowering more productivity on the job site through their connected tools and asset solutions.

  • DAQRI empowers workforces by linking digital content to the real world to accelerate productivity, communication, and key business processes.

  • HERE, the Open Location Platform company, enables people, businesses and cities to harness the power of location. By making sense of the world through the lens of location, HERE empowers its customers to achieve better outcomes.

  • Jovix, a material readiness application developed by Atlas RFID for the construction market, keeps crews productive with accurate and real-time information about material availability relative to the construction plan and schedule.

  • Reconstruct provides a 3D timeline that tracks visual progress, labor productivity, and predictive analytics that empower executives and their project teams to take actions to stay on time and on budget.

  • Triax’s platform connects the construction jobsite and provides real-time visibility into workers, safety, equipment, and asset management.

  • In addition, the Innovation Lab will feature a demonstration of Oracle Live Experience Cloud, showcasing modern engagement for cross-worksite experiences.

“Oracle Construction and Engineering continues to drive digital transformation with initiatives such as our new Innovation Lab, empowering customers with the technology and insights they need to improve outcomes across the project and asset lifecycle. Oracle will continue to collaborate with our customers and partners to foster new advancements that help organizations navigate challenges and capitalize on new opportunities,” added Sicilia.

Contact Info
Judi Palmer
Oracle
+1.650.784.7901
judi.palmer@oracle.com
Brent Curry
H+K Strategies
+312.255.3086
brent.curry@hkstrategies.com
About Oracle Construction and Engineering

Asset owners and project leaders rely on Oracle Construction and Engineering solutions for the visibility and control, connected supply chain, and data security needed to drive performance and mitigate risk across their processes, projects, and organization. Our scalable cloud solutions enable digital transformation for teams that plan, build, and operate critical assets, improving efficiency, collaboration, and change control across the project lifecycle. www.oracle.com/construction-and-engineering.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


RMAN catalog upgrade, why, when and how

Yann Neuhaus - Thu, 2018-08-23 05:54

One of our customers had initially created an RMAN catalog on an Oracle database release 12.1.0.2.0 and was now intending to register new Oracle 12.2.0.1.0 databases.

Registering the databases fails with the following errors:

PL/SQL package RCAT.DBMS_RCVCAT version 12.01.00.02 in RCVCAT database is too old
RMAN-06429: RCVCAT database is not compatible with this version of RMAN

The problem comes from the catalog version, which needs to be at least equal to or higher than the version of the database to register. I then wondered whether we are talking about the version of the catalog database itself or the version of the catalog schema.

Fortunately, in most cases a catalog database upgrade is not needed and a catalog schema upgrade is enough.

The MOS note RMAN Compatibility Matrix (Doc ID 73431.1) provides the compatibility matrix below.

Target/Auxiliary Database   RMAN Executable                     Catalog Database      Catalog Schema
8.1.7.4                     8.1.7.4                             >=8.1.7 < 12C         8.1.7.4
8.1.7.4                     8.1.7.4                             >=8.1.7 < 12C         >=9.0.1.4
9.0.1                       9.0.1                               >=8.1.7 < 12C         >= RMAN executable
9.2.0                       >=9.0.1.3 and <= target database    >=8.1.7 < 12C         >= RMAN executable
10.1.0.5                    >=10.1.0.5 and <= target database   >=10.1.0.5            >= RMAN executable
10.2.0                      >=10.1.0.5 and <= target database   >=10.1.0.5            >= RMAN executable
11.1.0                      >=10.1.0.5 and <= target database   >=10.2.0.3 (note 1)   >= RMAN executable
11.2.0                      >=10.1.0.5 and <= target database   >=10.2.0.3 (note 1)   >= RMAN executable
>=12.1.0.x                  = target database executable        >=10.2.0.3            >= RMAN executable
18.1                        = target database executable        >=10.2.0.3            >= RMAN executable

So, in our case, we will connect from the database to be registered (12.2.0.1.0) in order to:

  1. Check the release of the catalog
    dbiservices@<server_name>:C:\Windows\system32\ [DB_NAME] sqlplus <catalog_user>/<pwd>@<catalog_TNS>
    
    SQL*Plus: Release 12.2.0.1.0 Production on Thu Aug 16 14:50:08 2018
    
    Connected to:
    
    Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
    
    SQL> select * from rcver;
    
    VERSION
    
    ------------
    
    12.01.00.02
  2. Upgrade the catalog version
    dbiservices@<server_name>:C:\Windows\system32\ [DB_NAME] rman target / catalog <catalog_user>/<pwd>@<catalog_TNS>
    
    Recovery Manager: Release 12.2.0.1.0 - Production on Thu Aug 16 14:52:15 2018
    
    connected to target database: ARGOSP (DBID=469810750)
    
    connected to recovery catalog database
    
    PL/SQL package <catalog_user>.DBMS_RCVCAT version 12.01.00.02. in RCVCAT database is too old
    
    RMAN> upgrade catalog;
    
    recovery catalog owner is <catalog_user>
    
    enter UPGRADE CATALOG command again to confirm catalog upgrade
    
    RMAN> upgrade catalog;
    
    recovery catalog upgraded to version 12.02.00.01
    
    DBMS_RCVMAN package upgraded to version 12.02.00.01
    
    DBMS_RCVCAT package upgraded to version 12.02.00.01.
  3. Check the release of the catalog
dbiservices@<server_name>:C:\Windows\system32\ [DB_NAME] sqlplus <catalog_user>/<pwd>@<catalog_TNS>

SQL*Plus: Release 12.2.0.1.0 Production on Thu Aug 16 14:50:08 2018

Connected to:

Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production

SQL> select * from rcver;

VERSION

------------

12.02.00.01

 

Database registration will then be successful:

RMAN> register database;

database registered in recovery catalog

starting full resync of recovery catalog

full resync complete
 

The article RMAN catalog upgrade, why, when and how appeared first on Blog dbi services.

[BLOG] Certification for EBS (R12) on Cloud for Oracle Apps DBAs

Online Apps DBA - Thu, 2018-08-23 02:13

Do you want to know which Cloud certification applies to EBS (R12) on Cloud? Visit https://k21academy.com/ebscloud24 to get the answer. […]

The post [BLOG] Certification for EBS (R12) on Cloud for Oracle Apps DBAs appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Mobile Approvals 1.7 Now Available for iOS and Android

Steven Chan - Wed, 2018-08-22 12:05

We are pleased to announce updates for the Oracle E-Business Suite Mobile Approvals 1.7 smartphone app for iOS and Android. These updates are delivered as part of Oracle E-Business Suite Mobile Release 8, which supports both Oracle E-Business Suite Release 12.1.3 and 12.2.3 and beyond.  Oracle E-Business Suite Mobile Release 8 is a coordinated release of 15 Oracle E-Business Suite mobile apps. 

What's New in Mobile Approvals 1.7

  • Ability to distinguish approvals submitted through mobile apps from those submitted through other sources, such as the Worklist pages or email
  • Enhancement to prevent attachments viewed within the app from being shared elsewhere on iOS devices
  • Mobile Foundation Updates
    • For apps installed through an Enterprise Mobility Management (EMM) solution's app catalog, support for having the EBS Server URL preconfigured by an administrator rather than requiring users to enter it after launching the app
  • Technical updates with uptake of Oracle Mobile Application Framework (MAF) 2.5.0

Categories: APPS Blogs

Doing joins on denormalized data in tables

Tom Kyte - Wed, 2018-08-22 10:26
Hi Team RDBMS version: 12.2 OEL: 7.5 Trying to evaluate new table structure to query data, which I've heard is very widely used these days in e-commerce companies at scale. Though most of those companies might be using NoSQL solutions for the ...
Categories: DBA Blogs

Get difference between 2 date columns in HH:MM:SS

Tom Kyte - Wed, 2018-08-22 10:26
I am trying to get difference between 2 date columns in HH:MM:SS. Currently I have the below query which is returning the difference in HH:MM:SS. The problem with the below query is even if there is a difference of 1 day and 13 hours, it is just givi...
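
For reference, one common way to get a rolled-up HH:MM:SS value is to convert the day difference into an INTERVAL DAY TO SECOND and fold the days into the hours. A minimal sketch, assuming two DATE columns start_date and end_date in a hypothetical table t:

-- numtodsinterval turns the fractional-day difference into an interval
-- whose day/hour/minute/second parts can be extracted individually.
with diffs as (
  select numtodsinterval(end_date - start_date, 'DAY') as d
  from   t
)
select to_char(extract(day from d) * 24 + extract(hour from d))
       || ':' || to_char(extract(minute from d), 'FM00')
       || ':' || to_char(trunc(extract(second from d)), 'FM00') as hh_mm_ss
from   diffs;

With this, a gap of 1 day and 13 hours comes out as 37:00:00 rather than wrapping at the hour component.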
Categories: DBA Blogs

Documentation claims that multiset conditions also work with varrays, which they don't

Tom Kyte - Wed, 2018-08-22 10:26
According to the documentation, a lot of multiset conditions should also work with varrays, which they don't (at least not in the SQL language): https://docs.oracle.com/database/121/SQLRF/conditions006.htm#SQLRF52138 I've created a Stack Overflow...
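
A minimal reproduction sketch (num_list is a hypothetical varray type; the MEMBER OF syntax is taken straight from the documentation page linked above):

-- The docs list varrays as valid operands for multiset conditions...
create type num_list as varray(5) of number;
/
-- ...yet, as reported above, a condition like this fails in plain SQL:
select *
from   dual
where  1 member of num_list(1, 2, 3);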
Categories: DBA Blogs

QUERY CLAUSE for 2 join tables

Tom Kyte - Wed, 2018-08-22 10:26
Hi Tom, I have 2 SQL scripts giving lots of rows :- ---Export Data for fy17 filename: w_camp_hist_f_2016.dmp select count(*) from olap.w_camp_hist_f,olap.w_day_d where w_camp_hist_f.ld_dt_wid=w_day_d.row_wid and w_day_d.petc_year = 2016; ...
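
Assuming the goal is to export only the joined rows with Data Pump, one hedged approach is to push the join into the QUERY clause as a subquery, since QUERY only accepts a WHERE fragment per table. A sketch of a hypothetical parameter file (w_camp_hist_f_2016.par), reusing the table and column names from the question:

directory=DATA_PUMP_DIR
dumpfile=w_camp_hist_f_2016.dmp
logfile=w_camp_hist_f_2016.log
tables=olap.w_camp_hist_f
query=olap.w_camp_hist_f:"WHERE ld_dt_wid IN (SELECT row_wid FROM olap.w_day_d WHERE petc_year = 2016)"

It would then be run with: expdp <user>/<pwd> parfile=w_camp_hist_f_2016.par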
Categories: DBA Blogs

Question from Office Hours on Data Visualization with Oracle APEX

Tom Kyte - Wed, 2018-08-22 10:26
Hi, just wanted to check if there will be a recording of this available?
Categories: DBA Blogs

Oracle Expands Challenger Series with New Houston Event at Rice University

Oracle Press Releases - Wed, 2018-08-22 09:00
Press Release
Oracle Expands Challenger Series with New Houston Event at Rice University

Redwood Shores, Calif.—Aug 22, 2018

Continuing its support for American tennis, Oracle announced today it is adding a Houston event to the Oracle Challenger Series at the George R. Brown Tennis Center at Rice University on November 10-18, 2018 in conjunction with the Association of Tennis Professionals (ATP) and the Women’s Tennis Association (WTA).

The Oracle Challenger Series launched earlier this year with events in Newport Beach and Indian Wells, Ca., with the mission of providing new opportunities for up-and-coming American tennis players to secure both ranking points and prize money.

“Houston is a terrific addition to the Oracle Challenger Series, coming shortly after the event we announced in Chicago for September,” said Oracle CEO Mark Hurd. “Tennis players in the United States need more chances to compete at home and more chances to earn a living from the sport. Oracle is committed to creating more Challenger events to give the nation’s best players the opportunities to be successful. We’re also hosting the events at some of the best venues in the country where fans can get an up-close look at some outstanding tennis.”

The Houston tournament will be a joint ATP Challenger Tour/WTA 125K Series event and pay equal prize money ($150,000 per Tour) for a total of $300,000. Both the women’s and men’s draws will consist of 32 singles players, 16 qualifying players and 16 doubles teams. The event will be free and open to the public.

“We’re thrilled to welcome the Oracle Challenger Series to the George R. Brown Tennis Center in Houston this November,” said Efe Ustundag, Head Men’s Tennis Coach at Rice University. “Having a combined professional tournament at our venue is a terrific opportunity to continue growing the sport in our city, and we look forward to welcoming all of the players and fans for a fantastic event,” added Elizabeth Schmidt, Head Women’s Tennis Coach.

The 2018-2019 Series will begin in Chicago (September 2-9, 2018) and continue in Houston, culminating at the 2019 BNP Paribas Open, the largest ATP World Tour and WTA combined two-week event in the world held annually at the Indian Wells Tennis Garden. The two American women and two American men who accumulate the most points over the course of the Series will receive wild cards into their respective singles main draws in Indian Wells.

The Oracle Challenger Series also looks to make a positive impact on the communities where its events are held, and will be donating $5,000 to the local Houston chapter of the National Junior Tennis and Learning (NJTL) network. The NJTL provides free or low-cost tennis and education programming to more than 225,000 under-resourced youth in the United States.

The Oracle Challenger Series builds on Oracle’s commitment to support U.S. tennis at both the professional and collegiate level. Oracle sponsors the Oracle US Tennis Awards, two $100,000 grants awarded annually to assist young players as they transition from college into the professional ranks. In addition to sponsoring the Intercollegiate Tennis Association rankings, Oracle also hosts the Oracle ITA Masters tournament in Malibu, California and the Oracle ITA National Fall Championships which will be held at the Surprise Tennis Center in Surprise, Arizona in 2018.

For more information about the Oracle Challenger Series, visit oraclechallengerseries.com.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com

About the Oracle Challenger Series

The Oracle Challenger Series, which was created to offer American tennis players unparalleled opportunities to secure both prize money and ranking points, builds on Oracle’s commitment to help support U.S. tennis for men and women at both the collegiate and professional level. The Series features equal prize money in a groundbreaking tournament format that combines the ATP Challenger Tour and WTA 125K Series. 

The Series offers an unmatched potential prize of wild cards into the main draw of the BNP Paribas Open, widely considered the top combined ATP World Tour and WTA professional tennis tournament in the world, for the top two American male and female finishers.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

NetSuite Expands Alliance Partner Program to Support Growing Customer Demand

Oracle Press Releases - Wed, 2018-08-22 08:00
Press Release
NetSuite Expands Alliance Partner Program to Support Growing Customer Demand

Kodella, Riveron and ITOrizon to Launch Cloud ERP Practices with NetSuite

San Mateo, Calif.—Aug 22, 2018

Oracle NetSuite today announced three new members have joined the NetSuite Alliance Partner Program to meet the growing demand for cloud business management solutions. As part of the NetSuite Alliance Partner Program, Kodella, Riveron and ITOrizon will launch new cloud ERP practices to help customers across industries improve operational efficiency and accelerate growth.

“Kodella, Riveron and ITOrizon bring a wealth of industry knowledge and a successful history of providing customers with the advice and solutions that best fit individual business needs,” said Craig West, Vice President of Alliances and Channels, Oracle NetSuite. “We’re excited to add them to our partner program and look forward to working closely together to provide customers with the expertise they need to successfully grow, scale and adapt to change.”

Kodella Builds Cloud ERP Practice for Southern California Manufacturers and Distributors

Kodella LLC (https://www.kodella.com), a Newport Beach-based NetSuite development firm, is launching a dedicated NetSuite practice to help manufacturers and warehouse distributors across Southern California scale, improve workflows and maximize return on investment. With more than 20 years of experience in the software industry, Kodella differentiates itself by providing a robust portfolio of resources and customizations that meet the specific business needs of its customers.

“Manufacturers and distributors are increasingly looking for solutions to streamline the management of financials, inventory and reporting in order to grow their businesses,” said Ryan Albretsen, Kodella’s Chief Customer Officer. “The opportunity to work closely with NetSuite to support our customers’ growth and success is one we’re very excited about. We’re pairing the best in cloud ERP with Kodella’s high-end, talented resources to help our customers drive maximum value.”

Riveron Launches Cloud ERP Practice for Private Equity Firms, Corporations and Lenders Across Various Industries

Riveron (https://riveronconsulting.com/), a business advisory firm, has launched a dedicated cloud-based ERP practice, helping their customers successfully leverage technologies in order to achieve diverse and evolving goals. Riveron provides its customers with a unique combination of consulting, public accounting and industry experience.

“Every industry has unique business needs and objectives they are trying to achieve. Delivering accurate data and supporting analysis is critical for high-performing organizations to remain competitive. We are committed to enabling technologies that provide the greatest return on investment,” said Ryan Senter, Executive Managing Director, Riveron. “By combining business process leading practices with Riveron’s extensive technical acumen around systems like NetSuite, our customers can streamline and scale their businesses like never before.”

ITOrizon to Deliver Cloud ERP to Fast Growing Companies Across the Globe

ITOrizon (www.itorizon.com), an Atlanta-based fast-growing IT services company, is building a cloud ERP practice to promote, implement, and customize NetSuite for its global customers in various industries, including retail, manufacturing and banking.

"Global companies face increasing operating complexities with foreign currencies, taxation, and reporting and compliance guidelines,” said Shan Muthuvelu, President of ITOrizon. “By joining NetSuite’s partner program, we are delivering a solution that helps our global customer base address these complexities. Our customers know they can rely on us to select the right technologies that will provide a competitive edge and accelerate their growth."

Contact Info
Danielle Tarp
Oracle NetSuite Corporate Communications
650-506-2905
danielle.tarp@oracle.com

About NetSuite Alliance Partner Program

The NetSuite Alliance Partner program provides business transformation consulting services as well as integration and implementation services that help customers get even more value from their NetSuite software. Alliance Partners are experts in their field and have a deep and unique understanding of NetSuite solutions. NetSuite provides Alliance Partners with a robust set of resources, certified training, and tools, enabling them to develop expertise around specific business functions, product areas, and industries so they can efficiently assist customers, differentiate their practices, and grow their business.

For more information, please visit http://www.netsuite.com/portal/partners/alliance-partner-program.shtml.

About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials / Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 40,000 organizations and subsidiaries in 199 countries and territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
