Feed aggregator

Creating and Using a Parcel Repository for Cloudera Manager

Yann Neuhaus - Wed, 2018-07-11 07:59

This blog post describes how to create a hosted Cloudera repository and use it in your Cloudera Manager deployment.

The first step is to install a web server to host the parcel files and repository metadata. The common way is to use an Apache web server.

Installing Apache HTTPD service
[cdhtest@edge ]$ sudo yum install httpd -y

 

Starting Apache HTTPD service
[cdhtest@edge ]$ sudo systemctl start httpd

Verify that the service has been started properly.

[cdhtest@master html]$ sudo systemctl status httpd
* httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-07-11 09:16:45 UTC; 1h 26min ago
     Docs: man:httpd(8)
           man:apachectl(8)
 Main PID: 53284 (httpd)
   Status: "Total requests: 40; Current requests/sec: 0; Current traffic:   0 B/sec"
   CGroup: /system.slice/httpd.service
           |-53284 /usr/sbin/httpd -DFOREGROUND
           |-53285 /usr/sbin/httpd -DFOREGROUND
           |-53286 /usr/sbin/httpd -DFOREGROUND
           |-53287 /usr/sbin/httpd -DFOREGROUND
           |-53288 /usr/sbin/httpd -DFOREGROUND
           |-53289 /usr/sbin/httpd -DFOREGROUND
           |-53386 /usr/sbin/httpd -DFOREGROUND
           |-53387 /usr/sbin/httpd -DFOREGROUND
           |-53388 /usr/sbin/httpd -DFOREGROUND
           `-58024 /usr/sbin/httpd -DFOREGROUND

Jul 11 09:16:45 master systemd[1]: Starting The Apache HTTP Server...
Jul 11 09:16:45 master httpd[53284]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 192.168.2.6. Set the 'ServerName' directive globally to suppress this message
Jul 11 09:16:45 master systemd[1]: Started The Apache HTTP Server.
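The AH00558 message above is only a warning, but it can be silenced by setting the ServerName directive. A minimal way to do it, assuming the host is simply named master (adjust the name and config path for your server):

[cdhtest@master ~]$ echo "ServerName master" | sudo tee -a /etc/httpd/conf/httpd.conf
[cdhtest@master ~]$ sudo systemctl restart httpd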

 

Downloading Parcels for CDH5 and Publishing files

Download the parcels matching your OS distribution for CDH5 (latest release) from the following link:

https://archive.cloudera.com/cdh5/parcels/latest/

Download these two files:

  • the .parcel file for your distribution
  • manifest.json

Before downloading the files, create the CDH parcel directory tree in your web server.

[cdhtest@master html]$ cd /var/www/html/
[cdhtest@master html]$ sudo mkdir -p cdh5.15/
[cdhtest@master html]$ sudo chmod -R ugo+rX /var/www/html/cdh5.15/
[cdhtest@master html]$ cd /var/www/html/cdh5.15/
[cdhtest@master cdh5.15]$ sudo wget https://archive.cloudera.com/cdh5/parcels/latest/CDH-5.15.0-1.cdh5.15.0.p0.21-el5.parcel https://archive.cloudera.com/cdh5/parcels/latest/manifest.json
--2018-07-11 12:16:04--  https://archive.cloudera.com/cdh5/parcels/latest/CDH-5.15.0-1.cdh5.15.0.p0.21-el5.parcel
Resolving archive.cloudera.com (archive.cloudera.com)... 151.101.32.167
Connecting to archive.cloudera.com (archive.cloudera.com)|151.101.32.167|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1675168741 (1.6G) [binary/octet-stream]
Saving to: 'CDH-5.15.0-1.cdh5.15.0.p0.21-el5.parcel'

100%[==================================================================================================================================================================================================================================================================================================>] 1,675,168,741 53.2MB/s   in 29s

2018-07-11 12:16:32 (56.0 MB/s) - 'CDH-5.15.0-1.cdh5.15.0.p0.21-el5.parcel' saved [1675168741/1675168741]

--2018-07-11 12:16:32--  https://archive.cloudera.com/cdh5/parcels/latest/manifest.json
Reusing existing connection to archive.cloudera.com:443.
HTTP request sent, awaiting response... 200 OK
Length: 74072 (72K) [application/json]
Saving to: 'manifest.json'

100%[====================================================================================================================================================================================================================================================================================================>] 74,072      --.-K/s   in 0s

2018-07-11 12:16:32 (225 MB/s) - 'manifest.json' saved [74072/74072]

FINISHED --2018-07-11 12:16:32--
Total wall clock time: 29s
Downloaded: 2 files, 1.6G in 29s (56.0 MB/s)
[cdhtest@master cdh5.15]$
[cdhtest@master cdh5.15]$ ll
total 1635984
-rw-r--r-- 1 root root 1675168741 Jun 14 18:06 CDH-5.15.0-1.cdh5.15.0.p0.21-el5.parcel
-rw-r--r-- 1 root root      74072 Jun 14 18:08 manifest.json

 

Your remote parcel repository is now available.
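If you want to confirm that the repository is reachable from another cluster host, a quick check with curl is enough (the hostname master is an assumption taken from this setup; use your web server's address):

[cdhtest@edge ]$ curl -I http://master/cdh5.15/manifest.json

An HTTP 200 response confirms that the files are being served.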

CM_Parcels8

 

Configuring the Cloudera Manager Server to Use the Parcel URL for Hosted Repositories

1. In the Cluster Installation – Select Repository step

Click on More Options.

CM_Parcels3

Add your Remote Parcel Repository URL.

CM_Parcels4

Then Cloudera Manager will download, distribute, unpack and activate parcels for all cluster hosts.

CM_Parcels5

 

2. You can also configure your parcel repository in the Cloudera Manager configuration menu.

Click on Administration Menu > Settings

Click on Parcels category > Add your Remote Parcel Repository URL here

CM_Parcels6

Click Save Changes to commit the changes.

 

Activate Parcels

In the Cloudera Manager Parcels page,

Click on Check for New Parcels

Click on Download, Distribute, Activate buttons for the parcels found.

CM_Parcels7

 

The article Creating and Using a Parcel Repository for Cloudera Manager appeared first on Blog dbi services.

Is it Possible to Audit Manual Partition Creation Specifically?

Tom Kyte - Tue, 2018-07-10 16:06
Are you aware of any way to specifically audit the addition of new partitions? So far my searching has come up fruitless. Auditing is enabled within the DB which records each <code>alter table</code> command. However, that is too large a net. The spe...
Categories: DBA Blogs

Passing partition name dynamically to get records of a specific partitions from a partitioned table

Tom Kyte - Tue, 2018-07-10 16:06
<code> CREATE TABLE TOM_RESER_DERESERVATION ( ELEMENT_id VARCHAR2(200 BYTE), ELEMENT_LABEL VARCHAR2(200 BYTE), PRODUCT_NAME VARCHAR2(100 BYTE), CIRCLE VARCHAR2(100 BYTE), COUNTRY VA...
Categories: DBA Blogs

Contextual Chatbot with TensorFlow, Node.js and Oracle JET - Steps How to Install and Get It Working

Andrejus Baranovski - Tue, 2018-07-10 12:15
A blog reader asked for a list of steps to guide them through the install and run process for the chatbot solution with TensorFlow, Node.js and Oracle JET.

Resources:

1. Chatbot UI and context handling backend implementation - Machine Learning Applied - TensorFlow Chatbot UI with Oracle JET Custom Component

2. Classification implementation - Classification - Machine Learning Chatbot with TensorFlow

3. TensorFlow installation - TensorFlow - Getting Started with Docker Container and Jupyter Notebook

4. Source code - GitHub

Install and run steps:

1. Download source code from GitHub repository:


2. Install TensorFlow and configure Flask (TensorFlow Linear Regression Model Access with Custom REST API using Flask)

3. Upload intents.json file to TensorFlow root folder:


4. Upload both TensorFlow notebooks:


5. Open and execute the model notebook (click Run for each section, step by step):


6. Repeat the training step a few times to reach a minimum loss:


7. Open and execute response notebook:


8. Make sure the REST interface is running; see the message below:


9. Test classification from external REST client:


10. Go to the socketioserver folder and run the npm install express --save and npm install socket.io --save commands (install Node.js before that):


11. Run npm start to start the Node.js backend:


12. Go to the socketiojet folder and run ojet restore (install Oracle JET before that):


13. Run ojet serve to start the chatbot UI, then type questions at the chatbot prompt (the commands for steps 10-13 are consolidated below):
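For reference, steps 10 to 13 boil down to the following shell commands, run from the root of the downloaded repository (the folder names socketioserver and socketiojet come from the source code linked above):

cd socketioserver
npm install express --save
npm install socket.io --save
npm start                  # step 11: start the Node.js backend and keep it running
cd ../socketiojet          # step 12: in a second terminal
ojet restore
ojet serve                 # step 13: start the chatbot UI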

Updates about the “Spectre” series of processor vulnerabilities and CVE-2018-3693

Oracle Security Team - Tue, 2018-07-10 12:06

A new processor vulnerability was announced today. Vulnerability CVE-2018-3693 (“Bounds Check Bypass Store” or BCBS) is closely related to Spectre v1. As with previous iterations of Spectre and Meltdown, Oracle is actively engaged with Intel and other industry partners to develop technical mitigations against this processor vulnerability.

Note that many industry experts anticipate that a number of new variants of exploits leveraging these known flaws in modern processor designs will continue to be disclosed for the foreseeable future. These issues are likely to primarily impact operating systems and virtualization platforms, and may require software update, microcode update, or both. Fortunately, the conditions of exploitation for these issues remain similar: malicious exploitation requires the attackers to first obtain the privileges required to install and execute malicious code against the targeted systems.

In regard to vulnerabilities CVE-2018-3640 (“Spectre v3a”) and CVE-2018-3639 (“Spectre v4”), Oracle has determined that the SPARC processors manufactured by Oracle (i.e., SPARC M8, T8, M7, T7, S7, M6, M5, T5, T4, T3, T2, T1) are not affected by these variants. In addition, Oracle has delivered microcode patches for the last 4 generations of Oracle x86 Servers.

As with previous versions of the Spectre and Meltdown vulnerabilities (see MOS Note ID 2347948.1), Oracle will publish information about these issues on My Oracle Support.

Oracle Database Vault: Realm in a Pluggable Database

Yann Neuhaus - Tue, 2018-07-10 11:03

Database Vault can also be used in a multitenant environment. In a multitenant environment we must register Oracle Database Vault in the root first, then in the PDBs.
In this blog we will see how we can use realms to protect data in a 12.1 pluggable database.

In CDB$ROOT we have to create common accounts that will be used for the Database Vault Owner (DV_OWNER role) and Database Vault Account Manager (DV_ACCTMGR role) accounts. It is also recommended to create a backup account for each of these users.

SQL> conn sys as sysdba
Enter password:
Connected.
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL>

SQL> GRANT CREATE SESSION, SET CONTAINER TO c##dbv_owner_root IDENTIFIED BY root CONTAINER = ALL;

SQL> GRANT CREATE SESSION, SET CONTAINER TO c##dbv_acctmgr_root IDENTIFIED BY root CONTAINER = ALL;

SQL> grant select any dictionary to C##DBV_OWNER_ROOT;

Grant succeeded.

SQL> grant select any dictionary to C##DBV_ACCTMGR_ROOT;

Grant succeeded.

SQL>

The next step is to configure the Database Vault user accounts in CDB$ROOT:

BEGIN
 DVSYS.CONFIGURE_DV (
   dvowner_uname         => 'c##dbv_owner_root',
   dvacctmgr_uname       => 'c##dbv_acctmgr_root');
 END;
  6  /

PL/SQL procedure successfully completed.

SQL> @?/rdbms/admin/utlrp.sql

We can then enable Oracle Database Vault with the user c##dbv_owner_root in CDB$ROOT:

SQL> conn c##dbv_owner_root/root
Connected.
SQL> show user
USER is "C##DBV_OWNER_ROOT"
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL>

SQL> EXEC DBMS_MACADM.ENABLE_DV;

PL/SQL procedure successfully completed.

SQL>

After restarting CDB$ROOT, we can verify the status. These queries should return TRUE.

SQL> SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Database Vault';

VALUE
----------------------------------------------------------------
TRUE

SQL> SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Label Security';

VALUE
----------------------------------------------------------------
TRUE

SQL>  SELECT * FROM DVSYS.DBA_DV_STATUS;

NAME                STATUS
------------------- ----------------------------------------------------------------
DV_CONFIGURE_STATUS TRUE
DV_ENABLE_STATUS    TRUE

SQL>

At the PDB level, we must register the common users we created earlier. In this example I am using a pluggable database named PDB1.

SQL> show user
USER is "SYS"
SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> GRANT CREATE SESSION, SET CONTAINER TO c##dbv_owner_root CONTAINER = CURRENT;

Grant succeeded.

SQL> GRANT CREATE SESSION, SET CONTAINER TO c##dbv_acctmgr_root CONTAINER = CURRENT;

Grant succeeded.

SQL>

SQL> grant select any dictionary to C##DBV_OWNER_ROOT;

Grant succeeded.

SQL> grant select any dictionary to C##DBV_ACCTMGR_ROOT;

Grant succeeded.

SQL>

As in CDB$ROOT, we also have to configure the Database Vault users in PDB1:

SQL> show user
USER is "SYS"
SQL> show con_name

CON_NAME
------------------------------
PDB1


SQL> BEGIN
 DVSYS.CONFIGURE_DV (
   dvowner_uname         => 'c##dbv_owner_root',
   dvacctmgr_uname       => 'c##dbv_acctmgr_root');
 END;
  6  /

PL/SQL procedure successfully completed.

SQL>

SQL> @?/rdbms/admin/utlrp.sql

And now let’s enable Oracle Database Vault on PDB1

SQL> show user
USER is "C##DBV_OWNER_ROOT"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> EXEC DBMS_MACADM.ENABLE_DV;

PL/SQL procedure successfully completed.

SQL>

With SYS let’s restart PDB1

SQL> show user
USER is "SYS"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

As in CDB$ROOT we can verify

SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Database Vault';

VALUE
----------------------------------------------------------------
TRUE

SQL> SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Label Security';

VALUE
----------------------------------------------------------------
TRUE

SQL> SELECT * FROM DVSYS.DBA_DV_STATUS;

NAME                STATUS
------------------- ----------------------------------------------------------------
DV_CONFIGURE_STATUS TRUE
DV_ENABLE_STATUS    TRUE

SQL>

Now that Database Vault is configured, we can create a realm to protect our data. In this example we are protecting the data of the SCOTT table EMP, and we are using EM 12c to create the realm.
From the Database Home page select Security and then Database Vault
dbvault1
In the Database Vault page, log in with any user having the appropriate privileges: the DV_OWNER or DV_ADMIN role and SELECT ANY DICTIONARY
dbvault2
Before creating the realm we can verify that user SYSTEM has access to table SCOTT.EMP

SQL> show user
USER is "SYSTEM"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> select * from scott.emp;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL>

Under the Administration tab, select Realms
dbvault3
and click on Create.
Give a name and a description for the realm
dbvault4
Click on Next
On the Realm Secured Objects page, click on Add
dbvault5
Click on OK
dbvault6
Click on Next
On the Realm Authorizations page, select Add
dbvault7
Click on OK
dbvault8
Click Next
On the Review page Click Finish
dbvault9
At the end we should have:
dbvault10
And that's all. We can now verify that SYSTEM is no longer allowed to query SCOTT.EMP:

SQL> conn system/root@pdb1
Connected.
SQL> show user
USER is "SYSTEM"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> select * from scott.emp;
select * from scott.emp
                    *
ERROR at line 1:
ORA-01031: insufficient privileges
SQL>

And that user EDGE is allowed to query SCOTT.EMP

SQL> show user
USER is "EDGE"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> select * from scott.emp;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL>
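For reference, the same realm could also be created without EM, directly with the DBMS_MACADM API. The block below is only a sketch: it should be run in PDB1 as a user with the DV_OWNER or DV_ADMIN role, and the realm name is illustrative.

SQL> BEGIN
  -- create the realm itself (enabled, auditing on failure)
  DBMS_MACADM.CREATE_REALM(
    realm_name    => 'SCOTT EMP Realm',
    description   => 'Protects SCOTT.EMP',
    enabled       => DBMS_MACUTL.G_YES,
    audit_options => DBMS_MACUTL.G_REALM_AUDIT_FAIL);
  -- put the SCOTT.EMP table inside the realm
  DBMS_MACADM.ADD_OBJECT_TO_REALM(
    realm_name   => 'SCOTT EMP Realm',
    object_owner => 'SCOTT',
    object_name  => 'EMP',
    object_type  => 'TABLE');
  -- authorize EDGE to access the objects protected by the realm
  DBMS_MACADM.ADD_AUTH_TO_REALM(
    realm_name => 'SCOTT EMP Realm',
    grantee    => 'EDGE');
END;
/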

 

The article Oracle Database Vault: Realm in a Pluggable Database appeared first on Blog dbi services.

Elliptic Curve Cryptography Certificates Now Certified with EBS Release 12.1

Steven Chan - Tue, 2018-07-10 09:17

We are pleased to announce that Elliptic Curve Cryptography (ECC) certificates are now certified for use with Oracle E-Business Suite Release 12.1.

Key Points

Related Articles

References

Categories: APPS Blogs

Create an HDFS user’s home directory

Yann Neuhaus - Tue, 2018-07-10 07:07

Let’s assume we need to create an HDFS home directory for a user named “dbitest”.

First, we need to verify whether the user exists on the local filesystem. It is important to understand that HDFS maps users from the local filesystem.

[cdhtest@master ~]$ cat /etc/passwd | grep dbitest

Create a user on the local file system

If the user does not exist, we can easily create one with its associated group.

[cdhtest@master ~]$ sudo groupadd dbitest

[cdhtest@master ~]$ sudo useradd -g dbitest -d/home/dbitest dbitest

[cdhtest@master ~]$ cat /etc/passwd | grep dbitest
dbitest:x:1002:1002::/home/dbitest:/bin/bash
[cdhtest@master ~]$

Note that the user dbitest should be created on all cluster hosts (see the loop sketched below).
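A simple way to do that is to loop over the cluster hosts with ssh; the hostnames below are only examples for this sandbox, and the command assumes passwordless sudo on each host:

[cdhtest@master ~]$ for host in master edge worker1 worker2; do ssh $host "sudo groupadd dbitest; sudo useradd -g dbitest -d /home/dbitest dbitest"; done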

Create a directory in HDFS for a new user

Then we can create the directory under /user in HDFS for the new user dbitest. This directory needs to be created as the hdfs user, since hdfs is the HDFS superuser for admin commands.

[cdhtest@master ~]$ sudo -u hdfs hdfs dfs -mkdir /user/dbitest

 

Verify the owner for our new directory
[cdhtest@master ~]$ sudo -u hdfs hdfs dfs -ls /user
Found 5 items
drwxr-xr-x   - hdfs   supergroup          0 2018-07-10 10:10 /user/dbitest
drwxrwxrwx   - mapred hadoop              0 2018-07-10 07:54 /user/history
drwxrwxr-t   - hive   hive                0 2018-07-10 07:55 /user/hive
drwxrwxr-x   - hue    hue                 0 2018-07-10 07:55 /user/hue
drwxrwxr-x   - oozie  oozie               0 2018-07-10 07:56 /user/oozie

The new home directory has been created, but it is owned by the hdfs user.

Change owner for /user/dbitest directory

Use the command below to change the owner of the newly created home directory.

[cdhtest@master ~]$ sudo -u hdfs hdfs dfs -chown dbitest:dbitest /user/dbitest

Let’s see if the owner has changed.

[cdhtest@master ~]$ sudo -u hdfs hdfs dfs -ls /user
Found 5 items
drwxr-xr-x   - dbitest dbitest          0 2018-07-10 10:10 /user/dbitest
drwxrwxrwx   - mapred  hadoop           0 2018-07-10 07:54 /user/history
drwxrwxr-t   - hive    hive             0 2018-07-10 07:55 /user/hive
drwxrwxr-x   - hue     hue              0 2018-07-10 07:55 /user/hue
drwxrwxr-x   - oozie   oozie            0 2018-07-10 07:56 /user/oozie
Change permissions

Change the permissions of the newly created home directory so that no user other than the owner has read, write or execute permissions.

[cdhtest@master ~]$ sudo -u hdfs hdfs dfs -chmod 700 /user/dbitest
[cdhtest@master ~]$ sudo -u hdfs hdfs dfs -ls /user
Found 6 items
drwxr-xr-x   - admins  cdhtest          0 2018-07-10 08:56 /user/cdhtest
drwx------   - dbitest dbitest          0 2018-07-10 10:10 /user/dbitest
drwxrwxrwx   - mapred  hadoop           0 2018-07-10 07:54 /user/history
drwxrwxr-t   - hive    hive             0 2018-07-10 07:55 /user/hive
drwxrwxr-x   - hue     hue              0 2018-07-10 07:55 /user/hue
drwxrwxr-x   - oozie   oozie            0 2018-07-10 07:56 /user/oozie

 

Test the user dbitest home directory

We can now test the home directory by uploading a file into HDFS without specifying a destination directory; if no destination is specified, the file is automatically uploaded to the user's home directory.

[cdhtest@master ~]$ sudo su dbitest
[dbitest@master ~]$ hdfs dfs -ls /user/dbitest
[dbitest@master ~]$ hdfs dfs -put HelloWorld.txt
[dbitest@master ~]$ hdfs dfs -ls /user/dbitest
Found 1 items
-rw-r--r--   3 dbitest dbitest         39 2018-07-10 10:30 /user/dbitest/HelloWorld.txt

 

Your user home directory has been created successfully.

 

The article Create an HDFS user’s home directory appeared first on Blog dbi services.

Oracle Recognized as a Leader in 2018 Gartner Magic Quadrant for Access Management, Worldwide

Oracle Press Releases - Tue, 2018-07-10 07:00
Press Release
Oracle Recognized as a Leader in 2018 Gartner Magic Quadrant for Access Management, Worldwide Also received highest score for Global-Enterprise Use Case in 2018 Gartner Critical Capabilities for Identity Governance and Administration

Redwood Shores, Calif.—Jul 10, 2018

Oracle today announced that it has been named a Leader in Gartner’s 2018 Magic Quadrant for Access Management, Worldwide1. It was also given the highest product score in the Global-Enterprise Use Case in Gartner’s 2018 Critical Capabilities for Identity Governance and Administration2. Additionally, earlier this year, Oracle was named a Leader in Gartner’s 2018 Magic Quadrant for Identity Governance and Administration report for the fifth consecutive time.3

Today’s advanced security threats require proactive and intelligent automation that can quickly scale with a rapidly changing IT environment. Oracle’s security portfolio provides a comprehensive set of integrated solutions designed to help reduce the time required to respond to security threats. Oracle helps organizations predict, prevent, detect, and respond to the overwhelming number of security events that challenge IT and security operations. Oracle employs a multilayered security approach that helps organizations secure their entire cloud, including users, apps, data, and infrastructure.

“The rapid growth of users, applications, data, and infrastructure makes it hard for organizations to keep pace with the multitude of security threats,” said Eric Olden, senior vice president and general manager, security and identity, Oracle. “Using state-of-the-art machine learning and adaptive security, coupled with identity governance, Oracle’s comprehensive security platform is designed to help organizations protect data, including by limiting who has access to that data.”

According to Gartner, “Leaders in the access management market generally have significant customer bases. They provide feature sets that are appropriate for current customer use-case needs. Leaders also show evidence of strong vision and execution for anticipated requirements related to technology, methodology or means of delivery; and they show evidence of how access management plays a role in a collection of related or adjacent product offerings. Leaders typically demonstrate solid customer satisfaction with overall access management capabilities, the sales process, and/or related service and support.”

The Critical Capabilities for Identity Governance and Administration report gave Oracle the highest score for the Global-Enterprise use case. Gartner states that IGA tools “help organizations control access risks by managing user accounts and entitlements in infrastructure systems and applications enterprisewide.” The assessment highlighted 16 vendors. Gartner evaluated them against 11 critical capabilities and 4 uses cases. The capabilities were Access Certification, Access Requests, Auditing, Ease of Deployment, Entitlements Management, Fulfillment, Identity Life Cycle, Policy and Role Management, Reporting and Analytics, Scalability and Performance, and Workflow.

Download a copy of Gartner’s 2018 Magic Quadrant for Access Management, Worldwide report here.

Download a copy of Gartner’s 2018 Critical Capabilities for Identity Governance and Administration report here.

Download a copy of Gartner’s 2018 “Magic Quadrant for Identity Governance and Administration” here.

1. Source: Gartner, Magic Quadrant for Access Management, Worldwide, Gregg Kreizman, 18 June 2018
2. Source: Gartner, Critical Capabilities for Identity Governance and Administration, Brian Iverson, Kevin Kampman, Felix Gaehtgens, 5 June 2018
3. Source: Gartner, Magic Quadrant for Identity Governance and Administration, Felix Gaehtgens, Kevin Kampman, Brian Iverson, 21 February 2018

Gartner Disclaimer
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Jesse Caputo
Oracle
+1.650.506.5967
jesse.caputo@oracle.com
Kristin Reeves
Blanc & Otus
+1.925.787.6744
kreeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Jesse Caputo

  • +1.650.506.5967

Kristin Reeves

  • +1.925.787.6744

Oracle Recognized as a Leader for Ninth Consecutive Time in Gartner Magic Quadrant for Digital Commerce

Oracle Press Releases - Tue, 2018-07-10 07:00
Press Release
Oracle Recognized as a Leader for Ninth Consecutive Time in Gartner Magic Quadrant for Digital Commerce Oracle Placed in Furthest Overall Position for Completeness of Vision in the Entire Magic Quadrant

Redwood Shores, Calif.—Jul 10, 2018

Oracle has been placed in the furthest overall position for its completeness of vision within the entire Magic Quadrant in Gartner’s 2018 “Magic Quadrant for Digital Commerce”1, which evaluated Oracle, including its Oracle Commerce Cloud. This is the ninth consecutive time that Oracle has been named a Leader in this Magic Quadrant. Oracle believes that its Leader position underscores its continued innovation in developing a modern, end-to-end digital commerce solution.

Gartner positions vendors within a particular quadrant based on their ability to execute and completeness of vision. According to Gartner, “In this fast-paced digital world, change comes quickly. This makes it paramount for vendors to understand not only the emerging market, but their clients' specific needs when it comes to offering strategy and business models. Likewise, innovation is imperative. Innovative vendors that demonstrate an understanding of the market in their offering (product) strategies and emerging business models exhibit Completeness of Vision. As a result, market understanding, offering (product) strategy, business model and innovation are all highly weighted criteria.”

This Gartner Magic Quadrant is a companion of Gartner’s “Critical Capabilities for Digital Commerce”2, in which Oracle Commerce Cloud received the highest score for Fast, Nimble Implementations use case and Oracle Commerce received the highest score for the Robust / Global Implementations use case. Oracle Commerce Cloud also scored 4.13 out of 5 for the Modular, Flexible Implementations use case.

“We are proud to be named a Leader again in the Gartner Magic Quadrant for Digital Commerce and to receive such scores for Oracle Commerce Cloud in the Critical Capabilities document. We believe it demonstrates our unrelenting dedication to innovation in digital commerce and our investment and momentum in the cloud,” said Ken Volpe, senior vice president, product development, Oracle. “We know it is critical for commerce companies to provide a seamless, end-to-end omnichannel experience to both consumers and business buyers. Oracle continues to invest in and develop the latest technology required for success in the next era of commerce, such as artificial intelligence (AI), real-time analytics, and emerging sources of data, to offer agile, scalable and data-driven digital commerce solutions that help businesses drive sales, customer loyalty and growth.”

Part of the Oracle Customer Experience (CX) Cloud, Oracle Commerce Cloud is the industry’s only unified enterprise-grade B2C and B2B commerce platform built on modern cloud architecture and deep industry expertise. With a proven heritage for performance, scalability and flexibility, Oracle Commerce Cloud empowers online businesses to take advantage of AI-powered capabilities and innovative personalization tools to turn static journeys into smart ones by delivering targeted product and content that is most relevant to the shopper’s immediate context. Recommendations utilize account data, shopper third-party data and real-time inputs to optimize outcomes and create superior consumer experiences for both first time and known shoppers to drive repeat visits, loyalty and revenue.

Part of Oracle Cloud Applications, Oracle Customer Experience (CX) Cloud Suite empowers organizations to take a smarter approach to customer experience management and business transformation initiatives. By providing a trusted business platform that connects data, experiences, and outcomes, Oracle CX Cloud Suite helps customers reduce IT complexity, deliver innovative customer experiences, and achieve predictable and tangible business results.

Gartner positions vendors within a particular quadrant based on their ability to execute and completeness of vision. It says its evaluation criteria “emphasize the ability to develop, deploy and support a unique and compelling customer experience. They stress migration to more flexible and nimble implementation that reduces both time to market and TCO. They stress the vendor's ability to attract and develop an ecosystem of technology and service provider partners that add value to its platform.”

Download a complimentary copy of Gartner’s 2018 “Magic Quadrant for Digital Commerce” here.

[1] Gartner, “Magic Quadrant for Digital Commerce,” by Penny Gillespie, Jason Daigler, Mike Lowndes, Christina Klock, Yanna Dharmasthira, Sandy Shen, June 5, 2018. The report was previously titled Magic Quadrant for E-Commerce. In the 2010, 2008, 2007/2006 versions of the report, Oracle was listed as ATG because it acquired the company in November 2010.

[2] Gartner, “Critical Capabilities for Digital Commerce,” by Jason Daigler, Yanna Dharmasthira, Sandy Shen, Penny Gillespie, Mike Lowndes, Christina Klock, June 6, 2018

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Danielle Cormier-Smith
Oracle
+1.914.441.4896
danielle@positive-comms.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates.

Talk to a Press Contact

Danielle Cormier-Smith

  • +1.914.441.4896

Validate FK

Jonathan Lewis - Tue, 2018-07-10 04:42

A comment arrived yesterday on an earlier posting about an enhancement to the truncate command in 12c that raised the topic of what Oracle might do to validate a foreign key constraint. Despite being sure I had the answer written down somewhere (maybe on a client site or in a report to a client) I couldn’t find anything I’d published about it, so I ran up a quick demo script to show that all Oracle does is construct a simple SQL statement that will check the data – and then do whatever the optimizer does to produce the fastest possible plan.

Here’s the script – with a few variations to show what happens if you start tweaking features to change the plan.

rem
rem     Script:         validate_fk.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jun 2018
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

create table parent 
as 
select  * 
from    all_Objects 
where   rownum <= 10000 -- > comment to avoid wordpress format issue
;

alter table parent add constraint par_pk primary key(object_id);

execute dbms_stats.gather_table_stats(null, 'parent', cascade=>true)


create table child 
as 
select par.* 
from    (select rownum from dual connect by level <= 10) v1, --> comment to avoid wordpress format issue
        parent par
; 

alter table child add constraint chi_fk_par foreign key(object_id) references parent enable novalidate; 
create index chi_fk_par on child(object_id); 
execute dbms_stats.gather_table_stats(null, 'child', cascade=>true)


-- alter table child modify object_id null;
-- alter table child parallel(degree 8);
-- alter session set "_fast_full_scan_enabled" = FALSE;
-- alter session set "_optimizer_outer_to_anti_enabled" = false;

alter system flush buffer_cache;

alter session set events '10046 trace name context forever, level 12';
alter table child modify constraint chi_fk_par validate;
alter session set events '10046 trace name context off';

All I’ve done is create a parent table with a primary key, and a child table with 10 rows per parent. I’ve created a foreign key constraint on the child table, enabled it (so future data will be checked) but not validated it (so there’s no enforced guarantee that the existing data is correct). Then I’ve issued a command to validate the foreign key.

The flush of the buffer cache is to allow me to see the I/O that takes place, and will also (usually) let me see if there are any strange issues due to any recursive SQL that Oracle runs. As you can see I’ve also got a couple of commented commands that might cause a couple of variations in behaviour.

Here’s the critical content from the output of the trace file summary from tkprof (in versions from 11.2.0.4 to 12.2.0.1):


select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid, :1, :2, :3
from
 "TEST_USER"."CHILD" A , "TEST_USER"."PARENT" B where( "A"."OBJECT_ID" is not
  null) and( "B"."OBJECT_ID" (+)= "A"."OBJECT_ID") and( "B"."OBJECT_ID" is
  null)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.01       0.02        241        373          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.02       0.02        241        373          0           0

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  NESTED LOOPS ANTI (cr=373 pr=241 pw=0 time=21779 us starts=1 cost=70 size=22000 card=1000)
    100000     100000     100000   INDEX FAST FULL SCAN CHI_FK_PAR (cr=224 pr=219 pw=0 time=17753 us starts=1 cost=32 size=1700000 card=100000)(object id 104840)
     10000      10000      10000   INDEX UNIQUE SCAN PAR_PK (cr=149 pr=22 pw=0 time=4494 us starts=10000 cost=0 size=49995 card=9999)(object id 104838)

As you can see, Oracle writes SQL for an outer join with an “is null” predicate on the outer table – which the optimizer converts to an anti-join, running a nested loop in this case. It’s an interesting little oddity that the code includes the predicate “A”.”OBJECT_ID” is not null given that the column is declared as not null – but this is presumably a developer deciding to re-use code even if it then includes a redundant predicate (which is effectively zero cost – since the optimizer can use transitive closure to eliminate it).

Given that Oracle has converted an outer join to an anti join I obviously had to check what would happen if I disabled this conversion by altering the “_optimizer_outer_to_anti_enabled” parameter to false. The optimizer obeyed the session setting with the following plan in the trace:

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  FILTER  (cr=373 pr=241 pw=0 time=226926 us starts=1)
    100000     100000     100000   NESTED LOOPS OUTER (cr=373 pr=241 pw=0 time=177182 us starts=1 cost=70 size=22000 card=1000)
    100000     100000     100000    INDEX FAST FULL SCAN CHI_FK_PAR (cr=224 pr=219 pw=0 time=40811 us starts=1 cost=32 size=1700000 card=100000)(object id 104848)
    100000     100000     100000    INDEX UNIQUE SCAN PAR_PK (cr=149 pr=22 pw=0 time=119363 us starts=100000 cost=0 size=49995 card=9999)(object id 104846)

The significant difference is in the CPU usage, of course, and to a degree the magnitude of the change is dictated by the pattern and distribution of the data. The number of CR gets hasn’t changed as the number of index probes jumps from 10,000 to 100,000 because Oracle will have pinned index blocks (There’s a very old article on my old website if you want to read more about buffer pins).

The original question was about the effect of a local session setting that disabled index fast full scans, and it was followed up with a question on parallelism. After seeing the effect of changing one optimizer parameter at the session level you probably won’t be surprised by the following two results: first, when the only change I make is the setting of the “_fast_full_scan_enabled” parameter, and then when the only change is the declared parallelism of the child table.

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  MERGE JOIN ANTI (cr=240 pr=240 pw=0 time=120163 us starts=1 cost=247 size=22000 card=1000)
    100000     100000     100000   INDEX FULL SCAN CHI_FK_PAR (cr=218 pr=218 pw=0 time=20314 us starts=1 cost=222 size=1700000 card=100000)(object id 104852)
    100000     100000     100000   SORT UNIQUE (cr=22 pr=22 pw=0 time=81402 us starts=100000 cost=25 size=50000 card=10000)
     10000      10000      10000    INDEX FULL SCAN PAR_PK (cr=22 pr=22 pw=0 time=1185 us starts=1 cost=22 size=50000 card=10000)(object id 104850)


Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  PX COORDINATOR  (cr=15 pr=2 pw=0 time=722483 us starts=1)
         0          0          0   PX SEND QC (RANDOM) :TQ10001 (cr=0 pr=0 pw=0 time=0 us starts=0 cost=37 size=22000 card=1000)
         0          0          0    HASH JOIN ANTI BUFFERED (cr=0 pr=0 pw=0 time=0 us starts=0 cost=37 size=22000 card=1000)
         0          0          0     PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us starts=0 cost=32 size=1700000 card=100000)
         0          0          0      TABLE ACCESS FULL CHILD (cr=0 pr=0 pw=0 time=0 us starts=0 cost=32 size=1700000 card=100000)
         0          0          0     PX RECEIVE  (cr=0 pr=0 pw=0 time=0 us starts=0 cost=4 size=50000 card=10000)
         0          0          0      PX SEND BROADCAST :TQ10000 (cr=0 pr=0 pw=0 time=0 us starts=0 cost=4 size=50000 card=10000)
         0          0          0       PX SELECTOR  (cr=0 pr=0 pw=0 time=0 us starts=0)
         0          0          0        INDEX FAST FULL SCAN PAR_PK (cr=0 pr=0 pw=0 time=0 us starts=0 cost=4 size=50000 card=10000)(object id 104854)

In the first case my version of Oracle has switched to a merge anti-join with an index full scan (not FAST full scan). It’s interesting to note that the merge join anti hasn’t been as clever as the nested loop anti in avoiding probes of the second data source as it walks the foreign key index (note how starts=100000 in the SORT UNIQUE line).

In the second case all the work was done by the parallel query slaves – and the PX SELECTOR line tells you that this plan must have come from 12c. As you can see we’re still doing an anti-join but this time we do a parallel tablescan of the child table (as we haven’t enabled the index for parallel execution – if we had altered the index to parallel(degree 8) as well we would have seen a parallel index fast full scan instead of the parallel tablescan.)

Bottom line: the SQL executed to validate a foreign key constraint is essentially a join between the parent and child tables; Oracle will simply optimize that statement to the best of its abilities based on the current session settings. If you want to test on a clone (or accurate model) of the tables you may find that you can create an sql_patch that works (even though the necessary SQL will be optimised as SYS – though so far I’ve only tried this with a couple of variants of the parallel() hint on 12.2.0.1).
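As an illustration only, a 12.2 sketch of such a patch is shown below. The hint text is just an example, and the sql_text has to match the recursive statement (as reported in the tkprof output above) exactly, so treat this as a starting point rather than a recipe:

declare
        v_patch varchar2(128);
begin
        -- dbms_sqldiag.create_sql_patch is the documented interface in 12.2
        v_patch := dbms_sqldiag.create_sql_patch(
                sql_text  => q'[select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid, :1, :2, :3 from "TEST_USER"."CHILD" A , "TEST_USER"."PARENT" B where( "A"."OBJECT_ID" is not null) and( "B"."OBJECT_ID" (+)= "A"."OBJECT_ID") and( "B"."OBJECT_ID" is null)]',
                hint_text => 'parallel(A 8)',
                name      => 'validate_fk_patch'
        );
end;
/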

Footnote

If you were wondering what the three bind variables in the query were, this is the relevant extract from the 10046 trace file with bind variable tracing enabled:

 Bind#0
  oacdty=01 mxl=32(09) mxlc=00 mal=00 scl=00 pre=00
  oacflg=18 fl2=0001 frm=01 csi=873 siz=32 off=0
  kxsbbbfp=7f3fbd359c38  bln=32  avl=09  flg=05
  value="TEST_USER"
 Bind#1
  oacdty=01 mxl=32(05) mxlc=00 mal=00 scl=00 pre=00
  oacflg=18 fl2=0001 frm=01 csi=873 siz=32 off=0
  kxsbbbfp=7f3fbd359c00  bln=32  avl=05  flg=05
  value="CHILD"
 Bind#2
  oacdty=01 mxl=32(10) mxlc=00 mal=00 scl=00 pre=00
  oacflg=18 fl2=0001 frm=01 csi=873 siz=32 off=0
  kxsbbbfp=7f3fbd359bc8  bln=32  avl=10  flg=05
  value="CHI_FK_PAR"

The values are the owner, table, and constraint names. (Though you have to modify the code a little to show that the last one is the constraint name and not the index name).

 

 

Deploy a Cloudera cluster with Terraform and Ansible in Azure – part 3

Yann Neuhaus - Tue, 2018-07-10 03:42

After the deployment step with Terraform and the configuration/installation with Ansible, we will continue the installation of our Cloudera cluster with Cloudera Manager.

By following the steps below, you will see how to install CDH on our hosts using Cloudera Manager.

Connection

First, log in to the Cloudera Manager URL.

Cloudera-Manager

When you connect to Cloudera Manager for the first time, you need to accept the Cloudera Terms and Conditions.

Cloudera-Manager-Conditions

Then choose your desired edition of Cloudera. For this blog post, we will use the Data Hub trial edition (60-day trial).

C.M-Edition

 

Hosts discovery

In this step, provide the IPs or hostnames of all cluster machines you want to use. To complete this step, check that the /etc/hosts file of each cluster host is properly defined, for example as sketched below.
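A typical /etc/hosts layout could look like this; the IP addresses and the example.local domain are only placeholders for this sandbox:

192.168.2.6   master.example.local    master
192.168.2.7   edge.example.local      edge
192.168.2.8   worker1.example.local   worker1
192.168.2.9   worker2.example.local   worker2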

Cloudera-Manager-Hosts

Once all hosts are reachable by the Cloudera Manager server, continue to the next step.

CDH Installation

This step is about the version of CDH to install in your cluster.

C.M-Installation_Methods

Use the parcel installation method.

By default, the parcel directory is /opt/cloudera/parcels. A best practice is to have a separate filesystem for /opt (at least 15GB), in order to separate the Cloudera installation from the root filesystem.

If you don't have a dedicated filesystem for /opt, you may see a performance impact on your server. A possible way to carve one out is sketched below.
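This sketch assumes an LVM volume group named vg_data with free space; the device names and sizes are illustrative, and any existing content of /opt must be moved aside before mounting over it:

sudo lvcreate -L 20G -n lv_opt vg_data
sudo mkfs.xfs /dev/vg_data/lv_opt
echo '/dev/vg_data/lv_opt /opt xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount /opt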

Java automatic installation

Since we already installed the Java JDK automatically with Ansible, we don't need to check the box for Java installation. Skip this step.

CM-Java_install

Account details

In this step, we provide the user account information Cloudera Manager needs in order to install all required components on all cluster hosts.

It's not recommended to give root access to Cloudera Manager; use a dedicated user with sudo access instead. For our example we will use the user created during the installation part, dbi, with its associated password.

CM-Users1

Cloudera Installation – Install Agents

In this step, Cloudera Manager installs and configures the cloudera-scm-agent on all cluster hosts.

CM-Install-Agents

 

Cloudera Installation – Parcels installation

After cloudera-scm-agent installation and configuration, Cloudera Manager will install the CDH Parcel and additional parcels on all cluster hosts.

CM-Parcels2

Cloudera Installation – Hosts Inspector

In this step, the host inspector checks all cluster host requirements and notifies you of any problems.

Note that you can go through each validation section to understand all the prerequisites for a Cloudera installation and see the complete checklist Cloudera uses to validate your cluster hosts.
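Two of the most frequent warnings concern transparent huge pages and vm.swappiness; the commonly recommended settings can be applied as below (persist them via /etc/rc.local and /etc/sysctl.conf, and adjust for your environment):

echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
sudo sysctl -w vm.swappiness=1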

CM-Hosts-Inspector

You can ignore the warnings for now and resolve them after the installation. Click the Finish button and go to the next step.

Cluster Setup – Select Services

In this step, choose your services to install. For our sandbox environment we will only install Core Hadoop first.

CM-Services

 

Cluster Setup – Customize Role Assignments

Assign roles to hosts and click on Continue.

CM-Roles

 

Cluster Setup – Setup databases

In this step, set up the remote databases for the Hive metastore, Hue, the Cloudera Reports Manager and the Oozie server (a preparation sketch is shown below).
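As an illustration, assuming a MySQL/MariaDB backend, the databases and accounts could be prepared as follows; the database names, users and passwords are placeholders, and the grants should be restricted to the hosts that actually run each service:

mysql -u root -p <<'EOF'
CREATE DATABASE metastore DEFAULT CHARACTER SET utf8;
CREATE DATABASE hue       DEFAULT CHARACTER SET utf8;
CREATE DATABASE rman      DEFAULT CHARACTER SET utf8;
CREATE DATABASE oozie     DEFAULT CHARACTER SET utf8;
GRANT ALL ON metastore.* TO 'hive'@'%'  IDENTIFIED BY 'hive_password';
GRANT ALL ON hue.*       TO 'hue'@'%'   IDENTIFIED BY 'hue_password';
GRANT ALL ON rman.*      TO 'rman'@'%'  IDENTIFIED BY 'rman_password';
GRANT ALL ON oozie.*     TO 'oozie'@'%' IDENTIFIED BY 'oozie_password';
EOF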

CM-Databases

Test the connection and click on Continue.

Cluster Setup – Review changes

Ensure that you use the /data directory previously created with Terraform and Ansible.

CM-ReviewsUsers

 

Cluster Setup – Start services

CM-StartServices

 

Congratulations, your Cloudera cluster is now installed and configured!

CM-End

CM-Dashboard

 

 

The article Deploy a Cloudera cluster with Terraform and Ansible in Azure – part 3 appeared first on Blog dbi services.

List of Top Wireframe Software for Desktop 2018

Nilesh Jethwa - Mon, 2018-07-09 23:08

Top 10 Wireframe Software for Desktop Wireframes are skeletal models used for planning the design, functionality, and structure of an application or website. It is like the blueprint of a project. Wireframes can be a simple drawing on a napkin … Continue reading →

Credit: MockupTiger Wireframes

Partition query - limiting results

Tom Kyte - Mon, 2018-07-09 21:46
I have a situation where I am trying to determine the taxability of an invoiced line. If the invoiced line quantity is 6, for example, the detail lines should not exceed 6. The problem is that if one of the detailed lines causes the cumulative q...
Categories: DBA Blogs

COLUMNS to ROWS

Tom Kyte - Mon, 2018-07-09 21:46
Hi Team, Could you please have a look at below use case and help to form SQL/PLSQL using which I can get the below report.. Table: order_country : holds order id and country its belong. There can be 100 and more countries in that but for sampl...
Categories: DBA Blogs

Data Guard: always set db_create_file_dest on the standby

Yann Neuhaus - Mon, 2018-07-09 12:21

The file name convert parameters are not dynamic and require a restart of the instance. An enhancement request was filed in 2011. I mentioned recently on Twitter that it can be annoying with Active Data Guard when a file on the primary server is created on a path that has no file name conversion. However, Ian Baugaard mentioned that there is a workaround for this specific case because db_create_file_dest is dynamic:

I recall seeing a MOS note indicating the order of precedence when it comes to db_file_name_convert and db_create_file_dest. The latter wins and makes config much easier especially when using OMF and ASM

— Ian Baugaard (@IanBaugaard) July 5, 2018

I’ve quickly created a 18c Data Guard configuration on the Oracle Cloud DBaaS to test it and here it is.

In the primary database and the standby database, here are the datafiles:

RMAN> report schema;
 
Report of database schema for database with db_unique_name ORCL_01
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 830 SYSTEM YES /u02/app/oracle/oradata/ORCL/system01.dbf
3 510 SYSAUX NO /u02/app/oracle/oradata/ORCL/sysaux01.dbf
4 60 UNDOTBS1 YES /u02/app/oracle/oradata/ORCL/undotbs01.dbf
5 340 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/ORCL/pdbseed/system01.dbf
6 620 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/ORCL/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/app/oracle/oradata/ORCL/users01.dbf
8 200 PDB$SEED:UNDOTBS1 NO /u02/app/oracle/oradata/ORCL/pdbseed/undotbs01.dbf
12 340 PDB1:SYSTEM YES /u02/app/oracle/oradata/ORCL/PDB1/system01.dbf
13 620 PDB1:SYSAUX NO /u02/app/oracle/oradata/ORCL/PDB1/sysaux01.dbf
14 200 PDB1:UNDOTBS1 YES /u02/app/oracle/oradata/ORCL/PDB1/undotbs01.dbf
15 50 PDB1:USERS NO /u02/app/oracle/oradata/ORCL/PDB1/PDB1_users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /u04/app/oracle/oradata/temp/temp01.dbf
2 62 PDB$SEED:TEMP 32767 /u04/app/oracle/oradata/temp/pdbseed_temp012018-02-08_13-49-27-256-PM.dbf
4 62 PDB1:TEMP 32767 /u04/app/oracle/oradata/temp/temp012018-02-08_13-49-27-256-PM.dbf

The properties of the standby database define no DbFileNameConvert because the directory structure is supposed to be the same:

DGMGRL> show configuration
 
Configuration - fsc
 
Protection Mode: MaxPerformance
Members:
ORCL_01 - Primary database
ORCL_02 - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 45 seconds ago)
 
 
DGMGRL> show database verbose 'ORCL_02';
 
Database - ORCL_02
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 15.00 KByte/s
Active Apply Rate: 532.00 KByte/s
Maximum Apply Rate: 535.00 KByte/s
Real Time Query: ON
Instance(s):
ORCL
 
Properties:
DGConnectIdentifier = 'ORCL_02'
...
DbFileNameConvert = ''
LogFileNameConvert = 'dummy, dummy'
...
 
Log file locations:
Alert log : /u01/app/oracle/diag/rdbms/orcl_02/ORCL/trace/alert_ORCL.log
Data Guard Broker log : /u01/app/oracle/diag/rdbms/orcl_02/ORCL/trace/drcORCL.log
 
Database Status:
SUCCESS

You can see that Oracle defines a dummy log file name convert. This is a good idea to avoid some RMAN duplicate issues.

On the standby server, I have no db_create_file_dest defined:

SQL> show parameter create%dest
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string
db_create_online_log_dest_1 string .
db_create_online_log_dest_2 string
db_create_online_log_dest_3 string
db_create_online_log_dest_4 string
db_create_online_log_dest_5 string

Note that the Oracle Cloud DBaaS defines it. I’ve reset it for the purpose of this demo.

New filesystem on Primary server only

I create a new filesystem on the primary server:

[root@DG-dg01 opc]# mkdir /DATA ; chown oracle:dba /DATA

I create a datafile on this new filesystem:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> create tablespace FRANCK datafile '/DATA/franck.dbf' size 100M;
Tablespace created.

The apply is stuck:

DGMGRL> show database 'ORCL_02';
 
Database - ORCL_02
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 2 seconds ago)
Apply Lag: 11 seconds (computed 2 seconds ago)
Average Apply Rate: 16.00 KByte/s
Real Time Query: OFF
Instance(s):
ORCL
 
Database Error(s):
ORA-16766: Redo Apply is stopped
 
Database Status:
ERROR

The standby alert.log shows the error about the impossibility of creating the datafile:

2018-07-06T08:04:59.077730+00:00
Errors in file /u01/app/oracle/diag/rdbms/orcl_02/ORCL/trace/ORCL_pr00_29393.trc:
ORA-01274: cannot add data file that was originally created as '/DATA/franck.dbf'
2018-07-06T08:04:59.111881+00:00
Background Media Recovery process shutdown (ORCL)

db_file_name_convert

The first idea is to set db_file_name_convert; however, this requires an instance restart, which means downtime when you have sessions on the Active Data Guard standby:

DGMGRL> edit database 'ORCL_02' set property DbFileNameConvert='/DATA,/u02/app/oracle/oradata/ORCL';
Warning: ORA-16675: database instance restart required for property value modification to take effect
 
Property "dbfilenameconvert" updated
 
DGMGRL> show database 'ORCL_02';
 
Database - ORCL_02
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 2 seconds ago)
Apply Lag: 3 minutes 32 seconds (computed 2 seconds ago)
Average Apply Rate: 16.00 KByte/s
Real Time Query: OFF
Instance(s):
ORCL
Warning: ORA-16675: database instance restart required for property value modification to
take effect
Warning: ORA-16714: the value of property DbFileNameConvert is inconsistent with the member setting
 
Database Error(s):
ORA-16766: Redo Apply is stopped
 
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
 
Database Status:
ERROR

db_create_file_dest

The solution is to set db_create_file_dest which, on the standby, has a higher priority than the convert parameter:

SQL> alter system set db_create_file_dest='/u02/app/oracle/oradata';
System altered.

I restart the apply:

DGMGRL> edit database 'ORCL_02' set state=apply-on;
Succeeded.

No need to restart, and future datafiles will be created there. However, it is too late for this datafile, as it has already been recorded as UNNAMED in the controlfile:

ORA-01186: file 18 failed verification tests
ORA-01157: cannot identify/lock data file 18 - see DBWR trace file
ORA-01111: name for data file 18 is unknown - rename to correct file
ORA-01110: data file 18: '/u01/app/oracle/product/18.0.0/dbhome_1/dbs/UNNAMED00018'

Manual CREATE DATAFILE

Then I must manually create it, but I cannot do that while I am in standby_file_management=auto:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> alter database create datafile '/u01/app/oracle/product/18.0.0/dbhome_1/dbs/UNNAMED00018' as '/u02/app/oracle/oradata/ORCL/franck.dbf';
alter database create datafile '/u01/app/oracle/product/18.0.0/dbhome_1/dbs/UNNAMED00018' as '/u02/app/oracle/oradata/ORCL/franck.dbf'
*
ERROR at line 1:
ORA-01275: Operation CREATE DATAFILE is not allowed if standby file management
is automatic.

This can be changed dynamically:

DGMGRL> edit database 'ORCL_02' set property StandbyFileManagement=manual;
Property "standbyfilemanagement" updated

And then the creation is possible:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> alter database create datafile '/u01/app/oracle/product/18.0.0/dbhome_1/dbs/UNNAMED00018' as new;
Database altered.

Note that because db_create_file_dest is defined, I don’t need to name the datafile: the ‘new’ keyword creates it as an OMF file.
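To double-check the OMF name the standby actually generated for this file, something like the following can be run on the standby (a sketch):

SQL> select file#, name from v$datafile where file# = 18;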

Now I can start the apply and it will resolve the gap:

DGMGRL> edit database 'ORCL_02' set state=apply-on;
Succeeded.
 
DGMGRL> show database 'ORCL_02';
 
Database - ORCL_02
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 3 seconds ago)
Apply Lag: 0 seconds (computed 3 seconds ago)
Average Apply Rate: 22.00 KByte/s
Real Time Query: ON
Instance(s):
ORCL
Warning: ORA-16675: database instance restart required for property value modification to take effect
Warning: ORA-16714: the value of property DbFileNameConvert is inconsistent with the member setting
 
Database Status:
WARNING

Do not forget to set standby_file_management back to auto:

DGMGRL> edit database 'ORCL_02' set property StandbyFileManagement=auto;
Property "standbyfilemanagement" updated

So, now that db_create_file_dest is set, new datafiles are created automatically as OMF (Oracle Managed Files), with no need for any file name conversion:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> alter tablespace FRANCK add datafile '/DATA/franck2.dbf' size 100M;
Tablespace altered.

This is confirmed from the standby alert.log:

(4):Datafile 19 added to flashback set
(4):Successfully added datafile 19 to media recovery
(4):Datafile #19: '/u02/app/oracle/oradata/ORCL_02/7050211FE75F26FAE05392781D0AADAA/datafile/o1_mf_franck_fmybw332_.dbf'

Conclusion

Always define db_create_file_dest on the standby database so that new datafiles can always be created. It is better to have them in the wrong place than to stop the apply. And if you don’t like the OMF names, and you are on at least 12c Enterprise Edition, you can rename them later with an online move:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> alter database move datafile '/u02/app/oracle/oradata/ORCL_02/7050211FE75F26FAE05392781D0AADAA/datafile/o1_mf_franck_fmybw332_.dbf' to '/u02/app/oracle/oradata/ORCL/franck2.dbf';
Database altered.

 

This article, Data Guard: always set db_create_file_dest on the standby, first appeared on Blog dbi services.

Docker: Networking with docker swarm: creating new subnets/gateways/...

Dietrich Schroff - Mon, 2018-07-09 08:31
In this posting I explained how to configure the network for a container on a docker machine.
If you want to do this for a docker swarm, you have to change the commands. The network driver "bridge" does not work in swarm mode:
(For how to run a container inside a swarm, take a look here.)

docker service create  --network mybrigde --name helloworld alpine ping 192.168.178.1

Error: No such network: mybrigde
This happens even if you create your bridge network on every node.

You have to configure an overlay network instead:
alpine:~# docker network create --driver overlay myoverlay
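If you also want to control the subnet and gateway of the overlay (as the title of this post suggests), the create command accepts the corresponding flags. The values below are an assumption, chosen to match the 10.200.0.1 address pinged further down:

alpine:~# docker network create --driver overlay --subnet 10.200.0.0/24 --gateway 10.200.0.1 myoverlay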
And then you can deploy your service like this:

alpine:~# docker service create --replicas 2 --network myoverlay  --name helloworld alpine ping 10.200.0.1
ij613sb26sfrgqknq8nnscqeg
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged

Verification:

alpine:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
6193ebb361fa        alpine:latest       "ping 10.200.0.1"   12 seconds ago      Up 11 seconds                           helloworld.1.9zoyocdpsdthuqmlk4efk96wz
alpine:~# docker logs 6193ebb361fa
PING 10.200.0.1 (10.200.0.1): 56 data bytes
64 bytes from 10.200.0.1: seq=0 ttl=64 time=0.344 ms
64 bytes from 10.200.0.1: seq=1 ttl=64 time=0.205 ms
64 bytes from 10.200.0.1: seq=2 ttl=64 time=0.184 ms
On each docker swarm node you can now find the overlay network:
node2:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5019841c7e25        bridge              bridge              local
6e795c964251        docker_gwbridge     bridge              local
9d9fa338a975        host                host                local
273dc1ddbc57        mybrigde            bridge              local
siiyo60iaojs        myoverlay           overlay             swarm
9ff819cf7ddb        none                null                local

After removing the service (docker service rm helloworld), the overlay network "myoverlay" is removed from the node as well:
node2:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5019841c7e25        bridge              bridge              local
6e795c964251        docker_gwbridge     bridge              local
9d9fa338a975        host                host                local
273dc1ddbc57        mybrigde            bridge              local
9ff819cf7ddb        none                null                local


SAN and Wildcard Certificates Certified with EBS 12.2

Steven Chan - Mon, 2018-07-09 08:30

We are pleased to announce that Subject Alternative Name (SAN) and Wildcard Certificates are now certified with Oracle E-Business Suite 12.2 when enabling TLS.

Note: We previously announced certification of SAN and Wildcard Certificates with Oracle E-Business Suite Release 12.1.

What are SAN and Wildcard Certificates?

The use of the SAN field in a certificate request (CSR) allows you to specify multiple host names to be protected by a single public key certificate. Use of SAN will also allow using a single certificate for multiple domains.

A Wildcard Certificate is a public key certificate that can be used with multiple sub-domains of a domain.

Note: The latest releases of some browsers (e.g. Google Chrome) now require a SAN extension. Check your browser to determine if SAN is required.

How do you deploy SAN or Wildcard Certificates?

In the CSR SAN field, you may use the subjectAltName value, and optionally also use the wildcard character:

  • Example 1: SAN field entry for the CSR:

subjectAltName = DNS:www.example.com,DNS:example.com

  • Example 2: SAN field entry with a wildcard for the CSR:

subjectAltName = DNS:*.example.com
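As an illustration only (this is not part of the My Oracle Support note, and your CA or Oracle Wallet tooling may require a different procedure), with OpenSSL 1.1.1 or later a CSR carrying a SAN extension can be generated in one command; the host names are placeholders:

openssl req -new -newkey rsa:2048 -nodes \
  -keyout example.key -out example.csr \
  -subj "/CN=www.example.com" \
  -addext "subjectAltName=DNS:www.example.com,DNS:example.com"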

If you have already enabled TLS, you may need to redo your CSR using the SAN field. Check with your CA regarding their specific requirements for adding SAN. If you have not enabled TLS, simply follow the instructions for doing so, using the SAN field accordingly.

Note: We highly recommend that all customers migrate to TLS. If you have not already migrated to TLS, please do so as soon as possible.

For complete instructions, refer to the following My Oracle Support Knowledge Document:

Categories: APPS Blogs

French Children’s Retailer ÏDKIDS COMMUNITY Chooses Oracle Retail Cloud to Make More Profitable Inventory Investments

Oracle Press Releases - Mon, 2018-07-09 08:00
Press Release
French Children’s Retailer ÏDKIDS COMMUNITY Chooses Oracle Retail Cloud to Make More Profitable Inventory Investments Intuitive Planning Cloud Service Enables Retailer to Better Understand Consumer Behavior and Preferences across Brands and Store Locations

Redwood Shores, Calif.—Jul 9, 2018

Today Oracle announced that ÏDKIDS COMMUNITY has selected Oracle Retail Planning and Optimization Cloud Services to gain better understanding of their customers and their behaviors. ÏDKIDS COMMUNITY specializes in children garments and toys and operates retail brands including Okaidi, Jacadi, Obaibi and Oxybul through a network of over 1300 stores in 53 countries. Through the implementation with Oracle Retail Merchandise Financial Planning Cloud Services and Oracle Retail Assortment Planning Cloud Services, ÏDKIDS COMMUNITY will improve the buying and planning of their inventory with a better understanding of their customer behavior and expedite return on technology investment by accelerating digital transformation through cloud infrastructure. Cognira, a Gold level member of Oracle PartnerNetwork (OPN), will guide the retailer through the implementation process and cloud migration.

“Consumer behavior is shifting and we know that having tools with accurate perspective on how customers engage with our brands will ensure we’re making the right merchandising decisions. We launched a search for the right retail solution that could offer deep insights from sales and stock analysis to increase profitability and minimize unnecessary discounts across our portfolio,” said Rodolphe Even, Chief Information Officer, ÏDKIDS COMMUNITY. “After a competitive sales cycle and a detailed proof of concept, we chose Oracle Retail Planning and Optimization Cloud Services for their rich functionality and ability to scale with our business.”

“Our business teams were limited by the tools available to properly plan inventory assortments. With a modular, configured approach, the finance and planning teams can streamline into a single holistic planning process for greater efficiency and accuracy that enables more strategic merchandising investments and eliminates residual stock,” said Delphine Brabant, Head of Buying/Product IT department, ÏDKIDS COMMUNITY. “Our buying and product management teams found the Oracle Retail solutions to be intuitive and easy to use.”

“We are delighted to partner with ÏDKIDS COMMUNITY on this initiative to grow their Omnichannel, multi-format business,” said Chris James, Vice President, Oracle Retail. “With Oracle Retail Cloud services at the core of ÏDKIDS COMMUNITY digital transformation project, IT teams can look forward to an investment that will continue to deliver best-of-industry practices as brands continue to grow.”

“Cognira has helped a wide variety of retailers achieve measurable benefits from retail planning, supply chain and analytic software investments and we look forward to bringing this expertise to ÏDKIDS COMMUNITY implementation,” said Hatem Sellami, Founder and Chief Executive Officer, Cognira.

Cognira successfully delivered the first cloud implementation of Oracle Retail globally at Groupe Dynamite in less than 3 months.

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584

Historic Stats

Jonathan Lewis - Mon, 2018-07-09 06:45

If you want to examine historic object stats Oracle gives you a few procedures in the dbms_stats package to compare sets of stats captured at two different time periods, but there’s no view that you can query to get an idea of how a table’s stats have changed over time. This is a problem that can be addressed when you discover two things:

  • There are views to report pending table, index, column and histogram stats.
  • Pending stats are stored as “historic” stats with a future date.

Once you’ve spotted the second detail, you can acquire the SQL to generate the pending stats views:


SQL> select view_name from dba_views where view_name like 'DBA%PENDING%STAT%';

VIEW_NAME
--------------------------------------------------------------------------------------------------------------------------------
DBA_TAB_PENDING_STATS
DBA_IND_PENDING_STATS
DBA_COL_PENDING_STATS
DBA_TAB_HISTGRM_PENDING_STATS


SQL> set long 20000
SQL> set pagesize 0
SQL> select dbms_metadata.get_ddl('VIEW','DBA_TAB_PENDING_STATS','SYS') from dual;

  CREATE OR REPLACE FORCE NONEDITIONABLE VIEW "SYS"."DBA_TAB_PENDING_STATS" ("OW
NER", "TABLE_NAME", "PARTITION_NAME", "SUBPARTITION_NAME", "NUM_ROWS", "BLOCKS",
 "AVG_ROW_LEN", "IM_IMCU_COUNT", "IM_BLOCK_COUNT", "SCAN_RATE", "SAMPLE_SIZE", "
LAST_ANALYZED") AS
  select u.name, o.name, null, null, h.rowcnt, h.blkcnt, h.avgrln,
	 h.im_imcu_count, h.im_block_count, h.scanrate, h.samplesize, h.analyzet
ime
  from	 sys.user$ u, sys.obj$ o, sys.wri$_optstat_tab_history h
  where  h.obj# = o.obj# and o.type# = 2 and o.owner# = u.user#
    and  h.savtime > systimestamp
  union all
  -- partitions
  select u.name, o.name, o.subname, null, h.rowcnt, h.blkcnt,
	 h.avgrln, h.im_imcu_count, h.im_block_count, h.scanrate, h.samplesize,
	 h.analyzetime
  from	 sys.user$ u, sys.obj$ o, sys.wri$_optstat_tab_history h
  where  h.obj# = o.obj# and o.type# = 19 and o.owner# = u.user#
    and  h.savtime > systimestamp
  union all
  -- sub partitions
  select u.name, osp.name, ocp.subname, osp.subname, h.rowcnt,
	 h.blkcnt, h.avgrln, h.im_imcu_count, h.im_block_count, h.scanrate,
	 h.samplesize, h.analyzetime
  from	sys.user$ u,  sys.obj$ osp, obj$ ocp,  sys.tabsubpart$ tsp,
	sys.wri$_optstat_tab_history h
  where h.obj# = osp.obj# and osp.type# = 34 and osp.obj# = tsp.obj# and
	tsp.pobj# = ocp.obj# and osp.owner# = u.user#
    and h.savtime > systimestamp

Notice the critical predicate repeated across the UNION ALL: “and h.savtime > systimestamp”. All we have to do is change that to “less than or equal to” (or simply delete it if we don’t mind reporting pending stats along with historic stats), then add a few columns reporting the available statistics, and we have a view that we can query for the historic stats.


rem
rem     Script:         optstat_table_history.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jun 2018
rem
rem     Notes
rem     Have to be connected as SYS, or have directly
rem     granted privileges on the sys tables to do this
rem

create or replace force view sys.jpl_tab_history_stats (
        owner, table_name, partition_name, subpartition_name,
        num_rows, blocks, avg_row_len, sample_size, last_analyzed
)
as
  select u.name, o.name, null, null, h.rowcnt, h.blkcnt, h.avgrln,
         h.samplesize, h.analyzetime
  from   sys.user$ u, sys.obj$ o, sys.wri$_optstat_tab_history h
  where  h.obj# = o.obj# and o.type# = 2 and o.owner# = u.user#
    and  h.savtime <= systimestamp
  union all
  -- partitions
  select u.name, o.name, o.subname, null, h.rowcnt, h.blkcnt,
         h.avgrln, h.samplesize, h.analyzetime
  from   sys.user$ u, sys.obj$ o, sys.wri$_optstat_tab_history h
  where  h.obj# = o.obj# and o.type# = 19 and o.owner# = u.user#
    and  h.savtime <= systimestamp
  union all
  -- sub partitions
  select u.name, osp.name, ocp.subname, osp.subname, h.rowcnt,
         h.blkcnt, h.avgrln, h.samplesize, h.analyzetime
  from  sys.user$ u,  sys.obj$ osp, obj$ ocp,  sys.tabsubpart$ tsp,
        sys.wri$_optstat_tab_history h
  where h.obj# = osp.obj# and osp.type# = 34 and osp.obj# = tsp.obj# and
        tsp.pobj# = ocp.obj# and osp.owner# = u.user#
    and h.savtime <= systimestamp
;

drop public synonym jpl_tab_history_stats;
create  public synonym jpl_tab_history_stats for sys.jpl_tab_history_stats;
grant select on jpl_tab_history_stats to public;

Now you can query historic stats for any schema and table.
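For example, a quick query against the public synonym created above might look like this (a sketch):

select table_name, num_rows, blocks, last_analyzed
from   jpl_tab_history_stats
where  owner = 'TEST_USER'
and    table_name = 'T1'
order by last_analyzed;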

If you don’t want to create a view in the sys schema (and don’t want to create a permanent object at all) you can always use subquery factoring as an easy way of editing the metadata into a suitable query. You have to be connected as a user with privileges (that could be through a role) to view the relevant sys tables, though.


with jpl_tab_history (
        owner, table_name, partition_name, subpartition_name,
        num_rows, blocks, avg_row_len, sample_size, last_analyzed
) as (
  select u.name, o.name, null, null, h.rowcnt, h.blkcnt, h.avgrln,
         h.samplesize, h.analyzetime
  from   sys.user$ u, sys.obj$ o, sys.wri$_optstat_tab_history h
  where  h.obj# = o.obj# and o.type# = 2 and o.owner# = u.user#
    and  h.savtime <= systimestamp
  union all
  -- partitions
  select u.name, o.name, o.subname, null, h.rowcnt, h.blkcnt,
         h.avgrln, h.samplesize, h.analyzetime
  from   sys.user$ u, sys.obj$ o, sys.wri$_optstat_tab_history h
  where  h.obj# = o.obj# and o.type# = 19 and o.owner# = u.user#
    and  h.savtime <= systimestamp
  union all
  -- sub partitions
  select u.name, osp.name, ocp.subname, osp.subname, h.rowcnt,
         h.blkcnt, h.avgrln, h.samplesize, h.analyzetime
  from  sys.user$ u,  sys.obj$ osp, obj$ ocp,  sys.tabsubpart$ tsp,
        sys.wri$_optstat_tab_history h
  where h.obj# = osp.obj# and osp.type# = 34 and osp.obj# = tsp.obj# and
        tsp.pobj# = ocp.obj# and osp.owner# = u.user#
    and h.savtime <= systimestamp
)
select
        table_name, blocks, num_rows, avg_row_len, sample_size, last_analyzed
from
        jpl_tab_history
where
        owner = 'TEST_USER'
and     table_name = 'T1'
order by
        last_analyzed
;

TABLE_NAME               BLOCKS   NUM_ROWS AVG_ROW_LEN SAMPLE_SIZE LAST_ANAL
-------------------- ---------- ---------- ----------- ----------- ---------
T1                          188      10000         120	     10000 04-JUL-18
T1                          322      18192         118	     18192 06-JUL-18

Matching code for indexes, columns and histograms is left as an exercise to the interested reader.
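As a starting point for indexes (a sketch only, following the same approach as above rather than anything from the original post), you can pull the DDL of DBA_IND_PENDING_STATS and flip the savtime predicate:

set long 20000
set pagesize 0
select dbms_metadata.get_ddl('VIEW','DBA_IND_PENDING_STATS','SYS') from dual;
--
-- Then edit the generated text, replacing "h.savtime > systimestamp"
-- with "h.savtime <= systimestamp" in each branch of the UNION ALL,
-- exactly as was done for the table stats view above.
--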
