Feed aggregator

Oracle Names Charles W. Moorman IV and William G. Parrett to the Board of Directors

Oracle Press Releases - Fri, 2018-05-11 15:30
Press Release
Oracle Names Charles W. Moorman IV and William G. Parrett to the Board of Directors

Redwood Shores, Calif.—May 11, 2018

The Oracle Board of Directors today announced that it has unanimously elected Charles (Wick) Moorman IV and William G. Parrett to the company’s Board of Directors. The election is effective as of May 9, 2018 and increases the size of the Board to 14 directors.

“We are very pleased to have two exceptional leaders join our Board,” said Larry Ellison, Chairman of the Board of Directors and Chief Technology Officer. Bruce Chizen, Chair of the Nomination and Governance Committee, added, “Wick brings significant technology, risk management and regulatory experience to our Board, while Bill brings valuable auditing and financial expertise. Both Wick and Bill are accomplished executives with extensive experience leading large, complex organizations. We are excited to add two additional independent directors to the Board and we look forward to working with both Wick and Bill.”

Mr. Moorman, 66, is a Senior Advisor to Amtrak. He previously served as President and CEO of Amtrak from August 2016 until January 2018. Before that, he served Norfolk Southern Corporation as CEO from November 2005 and as Chairman from February 2006, until 2015, having joined the company in 1975 and held various positions in operations, information technology and human resources. Mr. Moorman serves as a director of Chevron Corporation and Duke Energy Corporation, and previously served as a director of Norfolk Southern Corporation.

Mr. Parrett, 72, served as the Chief Executive Officer of Deloitte Touche Tohmatsu from 2003 until May 2007. Mr. Parrett joined Deloitte in 1967 and served in a series of roles of increasing responsibility until his retirement in 2007. Mr. Parrett serves as a director of The Blackstone Group L.P., Eastman Kodak Company, Conduent Incorporated and Thermo Fisher Scientific Inc. (through May 23, 2018), and previously served as a director of UBS AG and iGATE Corporation. Mr. Parrett is a Certified Public Accountant with an active license.

All members of Oracle’s Board of Directors serve one-year terms and are expected to stand for election at the company’s next annual meeting of stockholders in November 2018.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
1.212.508.7935
deborah.hellinger@oracle.com
Ken Bond
Oracle Investor Relations
1.650.607.0349
ken.bond@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE: ORCL), visit www.oracle.com/investor or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

Statements in this press release relating to Oracle’s future plans, expectations, beliefs, intentions and prospects are “forward-looking statements” and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (“SEC”) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading “Risk Factors.” Copies of these filings are available online from the SEC, by contacting Oracle Corporation’s Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of May 11, 2018. Oracle undertakes no duty to update any statement in light of new information or future events.


Deploying EDB containers in MiniShift/OpenShift

Yann Neuhaus - Fri, 2018-05-11 10:15

In this post we’ll look at how we can deploy EnterpriseDB containers in MiniShift. If you need to set up MiniShift, have a look here. In this post we’ll do the setup with the MiniShift console; in a follow-up post we’ll do the same using the command line tools.

As a few containers will be running by the end, MiniShift was started with more resources:

dwe@dwe:/opt$ minishift delete
dwe@dwe:/opt$ minishift start --cpus 4 --disk-size 30GB --memory 4GB

Once MiniShift is up and running open the MiniShift console and login as developer/admin:

dwe@dwe:/opt$ minishift console

[Screenshot: Selection_001]

The first thing we need to do, after stepping into “My Project”, is to grant the necessary permissions:
[Screenshot: Selection_002]

The permissions are in Resources->Membership. Add the admin, edit and view roles to the default account:
[Screenshot: Selection_004]

For accessing the EnterpriseDB container repository, a new secret needs to be created which contains the connection details. Secrets are under Resources->Secrets:
[Screenshots: Selection_005, Selection_006]

As databases are happy when they can store their data on persistent storage, we need a volume. Volumes can be created under “Storage”:
[Screenshots: Selection_007, Selection_008]

Now we need a local registry where we can push the EnterpriseDB containers to:

dwe@dwe:~$ minishift ssh
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.12.6, build HEAD : 5ab2289 - Wed Jan 11 03:20:40 UTC 2017
Docker version 1.12.6, build 78d1802
docker@minishift:~$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
Unable to find image 'registry:2' locally
2: Pulling from library/registry
81033e7c1d6a: Pull complete 
...
Status: Downloaded newer image for registry:2
14e85f4e2a36e727a0584803e49bbd690ffdb092c02238a241bd2ad003680625
docker@minishift:~$ docker login containers.enterprisedb.com
Username: dbi-services
Password: 
Login Succeeded
docker@minishift:~$ docker pull containers.enterprisedb.com/test/edb-as:v10.3
v10.3: Pulling from test/edb-as
d9aaf4d82f24: Pulling fs layer 
...
Status: Downloaded newer image for containers.enterprisedb.com/test/edb-as:v10.3
docker@minishift:~$ docker tag containers.enterprisedb.com/test/edb-as:v10.3 localhost:5000/test/edb-as:v10.3
docker@minishift:~$ docker push localhost:5000/test/edb-as:v10.3
The push refers to a repository [localhost:5000/test/edb-as]
274db5c4ff47: Preparing 
...
docker@minishift:~$ docker pull containers.enterprisedb.com/test/edb-pgpool:v3.5
v3.5: Pulling from test/edb-pgpool
...
docker@minishift:~$ docker tag containers.enterprisedb.com/test/edb-pgpool:v3.5 localhost:5000/test/edb-pgpool:v3.5
docker@minishift:~$ docker push localhost:5000/test/edb-pgpool:v3.5
The push refers to a repository [localhost:5000/test/edb-pgpool]
8a7df26eb139: Pushed 
...
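As a quick sanity check (my own addition, not part of the original demo), you can ask the local registry for its catalog over the standard Docker Registry v2 HTTP API; assuming both pushes succeeded, it should return something like this:

docker@minishift:~$ curl http://localhost:5000/v2/_catalog
{"repositories":["test/edb-as","test/edb-pgpool"]}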

That is all that is required for the preparation. The next step is to import the template which specifies the setup. For this little demo we’ll use this one:

apiVersion: v1
kind: Template
metadata:
   name: edb-as10-0
   annotations:
    description: "Standard EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres' 
          - name: REPL_PASSWORD
            value: 'postgres' 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: containers.enterprisedb.com/test/edb-pgpool:v3.5
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres' 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres' 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres' 
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: containers.enterprisedb.com/test/edb-as:v10.3
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you don't want any cleanup.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

To import that into OpenShift, go to “Overview” and select “Import YAML/JSON”:
[Screenshots: Selection_010, Selection_011, Selection_012]

This imports the template but does not process it right now. When you go back to “Overview” you should see a new template which you can provision:
[Screenshots: Selection_013, Selection_014]

Selecting the new template brings you to the specification of the variables. The only bits you need to adjust are the values for the volume and the volume claim:
[Screenshots: Selection_015, Selection_016]
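If you prefer the command line over the console (the follow-up post announced above will cover this properly), the import and provisioning steps can be sketched with the oc client roughly like this, assuming the template was saved as edb-as10-0-template.yaml and using placeholder volume names:

dwe@dwe:~$ oc create -f edb-as10-0-template.yaml
dwe@dwe:~$ oc process edb-as10-0 -p PERSISTENT_VOLUME=my-volume -p PERSISTENT_VOLUME_CLAIM=my-volume-claim | oc create -f -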

A few moments later the EDB containers are up and running:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-fdr5j   1/1       Running   0          1m
edb-pgpool-1-9twmc   1/1       Running   0          1m
edb-pgpool-1-m5x44   1/1       Running   0          1m

Currently there are two pgpool instances and one database instance container. You can double-check that the instance is really running with:

dwe@dwe:~$ oc rsh edb-as10-0-1-fdr5j
sh-4.2$ psql postgres
psql.bin (10.3.8)
Type "help" for help.

postgres=# select version();
                                                   version                                                   
-------------------------------------------------------------------------------------------------------------
 EnterpriseDB 10.3.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)

Going back to the “Overview” page in the console shows the same information:
[Screenshot: Selection_019]

In the next post we’ll scale up the deployment by adding two replicas and configure access from outside the cluster.

The article Deploying EDB containers in MiniShift/OpenShift first appeared on Blog dbi services.

SP2 for SQL Server 2016 is available with new helpful DMVs

Yann Neuhaus - Fri, 2018-05-11 09:23

Last month (April 24, 2018), Service Pack 2 for SQL Server 2016 was released and distributed.
This Service Pack brings new DMVs that were already available in SQL Server 2017 RTM.

In this article, I will just write a few words about two DMVs (sys.dm_db_log_stats & sys.dm_db_log_info) and a new column (modified_extent_page_count) in the DMV sys.dm_db_file_space_usage, which I presented during our last event about SQL Server 2017. I think they are really helpful for DBAs.
It’s also an opportunity to present the demo that I created for our event.

Preparation

First, I create the database smart_backup_2016 and a table Herge_Heros:

CREATE DATABASE [smart_backup_2016]
 CONTAINMENT = NONE
 ON  PRIMARY
( NAME = N'smart_backup_2016', FILENAME = N'G:\MSSQL\Data\smart_backup_2016.mdf' )
 LOG ON
( NAME = N'smart_backup_2016_log', FILENAME = N'G:\MSSQL\Log\smart_backup_2016_log.ldf' )
GO

USE smart_backup_2016
GO

CREATE TABLE [dbo].[Herge_Heros]
   (
   [ID] [int] NULL,
   [Name] [nchar](10) NULL
   ) ON [PRIMARY]
GO

I insert a couple of rows and run a first full backup and a first TLog backup:

INSERT INTO [Herge_Heros] VALUES(1,'Tintin') -- Tim
INSERT INTO [Herge_Heros] VALUES(2,'Milou') -- Struppi


BACKUP DATABASE [smart_backup_2016] TO  DISK = N'C:\Temp\smart_backup.bak' WITH NOFORMAT, NOINIT,  NAME = N'smart_backup-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO
BACKUP Log [smart_backup_2016] TO  DISK = N'C:\Temp\smart_backup.log' WITH NOFORMAT, NOINIT,  NAME = N'smart_backup-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

Then I insert a lot of rows, to get more than 50% modified pages:

INSERT INTO [Herge_Heros] VALUES(3,'Quick') --Strups
INSERT INTO [Herge_Heros] VALUES(4,'Flupke')  --Stepppke
GO 100000

Now, the demo is ready!

New column modified_extent_page_count in sys.dm_db_file_space_usage

[Screenshot: smart_backup01]
As you can see in this screenshot, the column really exists in SQL Server 2016 SP2 (13.0.5026.0).
With it you can, as we do in our DMK maintenance, create a backup strategy that adapts to the amount of change rather than simply to elapsed time.
In the stored procedure below, if the modified pages are more than 50% of the total pages, it does a full backup, and if they are less than 50%, it does a differential backup.

USE [dbi_tools]
GO

CREATE or ALTER PROCEDURE [maintenance].[dbi_smart_backup] @database_name sysname
as
DECLARE @pages_changes Numeric(10,0)
DECLARE @full_backup_threshold INT
DECLARE @diff_backup_threshold INT
DECLARE @sql_query nvarchar(max)
DECLARE @page_change_text nvarchar(20)
DECLARE @param nvarchar(50)
DECLARE @backupfile nvarchar(2000)
SET @full_backup_threshold=50
SET @diff_backup_threshold=0
SET @param = N'@pages_changesOUT nvarchar(20) OUTPUT'
SET @sql_query =N'SELECT @pages_changesOUT=( 100 * Sum(modified_extent_page_count) / Sum(total_page_count) ) FROM ['+@database_name+'].sys.dm_db_file_space_usage'

EXECUTE sp_executesql @sql_query,@param ,@pages_changesOUT=@page_change_text OUTPUT; 
SET @pages_changes = CAST(@page_change_text AS Numeric(10,0)) 
IF @pages_changes > @full_backup_threshold
  BEGIN
     --Full Backup threshold exceeded, take a full backup
     Print 'Full Backup Threshold exceeded, take a full backup'
     SET @backupfile = N'C:\Temp\'+@database_name+N'_' + replace(convert(nvarchar(50), GETDATE(), 120), ':','_') + N'.bak'
   BACKUP DATABASE @database_name TO DISK=@backupfile
  END
  ELSE
  BEGIN
	   IF @pages_changes >= @diff_backup_threshold
		BEGIN
			-- Diff Backup threshold exceeded, take a differential backup
			Print 'Diff Backup threshold exceeded, take a differential backup'
			SET @backupfile = N'C:\Temp\'+@database_name+N'_' + replace(convert(nvarchar(50), GETDATE(), 120), ':','_') + N'.dif'
			BACKUP DATABASE @database_name TO DISK=@backupfile WITH differential
		END
	ELSE
		BEGIN
			-- No threshold exceeded, No backup
		PRINT 'No threshold exceeded, No backup'   
		END
  END
GO

Now, I run the stored procedure [maintenance].[dbi_smart_backup] from the dbi_tools database:

USE smart_backup_2016;
GO
EXEC [dbi_tools].[maintenance].[dbi_smart_backup] @database_name = N'smart_backup_2016'

[Screenshot: smart_backup02]
In this case the dbi backup stored procedure does a full backup, because 64% of the pages were modified.
I check the status of the modified pages again, and they are now at 5%.
[Screenshot: smart_backup03]
If I rerun the stored procedure, it does a differential backup.
[Screenshot: smart_backup04]
My backup strategy is really adapted to the page changes in the database and no longer based purely on time (RTO vs RPO).
Let’s move on to the new DMV sys.dm_db_log_stats to do the same with the TLog backup.

DMV sys.dm_db_log_stats

This DMV gives really good information about the transaction log files and can help to adapt the backup strategy and also to control the growth of the file.
The DMV is very easy to use. For example, if you want to see how much the log has grown since the last TLog backup, use the column log_since_last_log_backup_mb:

SELECT log_since_last_log_backup_mb from sys.dm_db_log_stats(DB_ID('smart_backup_2016'))
GO

[Screenshot: smart_backup05]
As shown below, I created in our DMK maintenance an adapted TLog backup procedure, [dbi_smart_tlog_backup]: [Screenshot: smart_backup06]
If the TLog has grown by more than 5 MB since the last TLog backup, it takes a TLog backup; otherwise it does not.
In my example, the growth is 548 MB, so a TLog backup is necessary.
[Screenshot: smart_backup07]
Afterwards, I check the size again and, as you can see, the size since the last TLog backup is 0.07 MB.
[Screenshot: smart_backup08]
As you can see, no TLog backup… My backup strategy is adapted to the load! ;-)
[Screenshot: smart_backup09]
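The [dbi_smart_tlog_backup] procedure itself only appears in the screenshots above, so here is a minimal sketch of the logic it implements as described (the procedure body, variable names and the .trn file extension are my assumptions; the 5 MB threshold is the one mentioned above):

-- sketch reconstructed from the description above, not the original dbi procedure
CREATE OR ALTER PROCEDURE [maintenance].[dbi_smart_tlog_backup] @database_name sysname
as
DECLARE @log_growth_mb NUMERIC(10,2)
DECLARE @tlog_threshold_mb INT = 5
DECLARE @backupfile nvarchar(2000)
-- how much has the log grown since the last TLog backup?
SELECT @log_growth_mb = log_since_last_log_backup_mb
FROM sys.dm_db_log_stats(DB_ID(@database_name))
IF @log_growth_mb > @tlog_threshold_mb
  BEGIN
     Print 'TLog backup threshold exceeded, take a TLog backup'
     SET @backupfile = N'C:\Temp\'+@database_name+N'_' + replace(convert(nvarchar(50), GETDATE(), 120), ':','_') + N'.trn'
     BACKUP LOG @database_name TO DISK=@backupfile
  END
ELSE
  BEGIN
     PRINT 'No threshold exceeded, no TLog backup'
  END
GO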

DMV sys.dm_db_log_info

This DMV gives us all VLF (Virtual Log File) information, so we no longer need to use DBCC LOGINFO.
You can use this DMV very easily, like this:

SELECT [name] AS 'Database Name', COUNT(l.database_id) AS 'VLF Count'
FROM sys.databases s
CROSS APPLY sys.dm_db_log_info(s.database_id) l
GROUP BY [name]

[Screenshot: smart_backup10]
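If the VLF count looks high for a database, you can drill into the individual VLFs with the same DMV; a quick sketch using a few of its documented columns:

SELECT vlf_sequence_number, vlf_size_mb, vlf_active, vlf_status
FROM sys.dm_db_log_info(DB_ID('smart_backup_2016'));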

These DMVs are very helpful and it is a good thing to have them in SQL Server 2016 now.

The article SP2 for SQL Server 2016 is available with new helpful DMVs first appeared on Blog dbi services.

Skip Scan 3

Jonathan Lewis - Fri, 2018-05-11 08:26

If you’ve come across any references to the “index skip scan” operation for execution plans you’ve probably got some idea that this can appear when the number of distinct values for the first column (or columns – since you can skip multiple columns) is small. If so, what do you make of this demonstration:


rem
rem     Script:         skip_scan_cunning.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem

begin
        dbms_stats.set_system_stats('MBRC',16);
        dbms_stats.set_system_stats('MREADTIM',10);
        dbms_stats.set_system_stats('SREADTIM',5);
        dbms_stats.set_system_stats('CPUSPEED',1000);
end;
/

create table t1
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        rownum                          id1,
        rownum                          id2,
        lpad(rownum,10,'0')             v1,
        lpad('x',150,'x')               padding
/*
        cast(rownum as number(8,0))                     id,
        cast(lpad(rownum,10,'0') as varchar2(10))       v1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
*/
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

create index t1_i1 on t1(id1, id2);

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

For repeatability I’ve set some system statistics, but if you’ve left the system stats to default you should see the same effect. All I’ve done is create a table and an index on that table. The way I’ve defined the id1 and id2 columns means they could individually support unique constraints and the index clearly has 1 million distinct values for id1 in the million index entries. So what execution plan do you think I’m likely to get from the following simple query:


set serveroutput off
alter session set statistics_level = all;

prompt  =======
prompt  Default
prompt  =======

select  id 
from    t1
where   id2 = 999
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

You’re probably not expecting an index skip scan to appear, but given the title of this posting you may have a suspicion that it will; so here’s the plan I got running this test on 12.2.0.1:


SQL_ID  8r5xghdx1m3hn, child number 0
-------------------------------------
select id from t1 where id2 = 999

Plan hash value: 400488565

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        |  2929 (100)|      1 |00:00:00.17 |    2932 |      5 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |      1 |  2929   (1)|      1 |00:00:00.17 |    2932 |      5 |
|*  2 |   INDEX SKIP SCAN                   | T1_I1 |      1 |      1 |  2928   (1)|      1 |00:00:00.17 |    2931 |      4 |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID2"=999)
       filter("ID2"=999)


So, an index skip scan doesn’t require a small number of distinct values for the first column of the index (unless you’re running a version older than 11.2.0.2 where a code change appeared that could be disabled by setting fix_control 9195582 off).

When the optimizer doesn’t do what you expect it’s always worth hinting the code to follow the plan you were expecting – so here’s the effect of hinting a full tablescan (which happened to do direct path reads):

SQL_ID  bxqwhsjwqfm7q, child number 0
-------------------------------------
select  /*+ full(t1) */  id from t1 where id2 = 999

Plan hash value: 3617692013

----------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |  3317 (100)|      1 |00:00:00.12 |   25652 |  25635 |
|*  1 |  TABLE ACCESS FULL| T1   |      1 |      1 |  3317   (3)|      1 |00:00:00.12 |   25652 |  25635 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID2"=999)

Note that the cost is actually more expensive than the cost of the indexed access path.  For reference you need to know that the blocks statistic for the table was 25,842 while the number of index leaf blocks was 2,922. The latter figure (combined with a couple of other details regarding the clustering_factor and undeclared uniqueness of the index) explains why the cost of the skip scan was only 2,928: the change that appeared in 11.2.0.2 limited the I/O cost of an index skip scan to the total number of leaf blocks in the index.  The tablescan cost (with my system stats) was basically dividing my table block count by 16 (to get the number of multi-block reads) and then doubling (because the multiblock read time is twice the single block read time).
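To put rough numbers on that description (my own back-of-the-envelope arithmetic from the figures quoted above, so treat it as an approximation):

multiblock reads        = 25842 / 16                  ~= 1615
tablescan I/O cost      ~= 1615 * (MREADTIM/SREADTIM)  = 1615 * (10/5) ~= 3230
reported tablescan cost =  3317  (the difference is the CPU component, the "(3)" CPU percentage in the plan)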

As a quick demo of how older versions of Oracle would behave after setting “_fix_control”=’9195582:OFF’:


SQL_ID  bn0p9072w9vfc, child number 1
-------------------------------------
select /*+ index_ss(t1) */  id from t1 where id2 = 999

Plan hash value: 400488565

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        |  1001K(100)|      1 |00:00:00.13 |    2932 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |      1 |  1001K  (1)|      1 |00:00:00.13 |    2932 |
|*  2 |   INDEX SKIP SCAN                   | T1_I1 |      1 |      1 |  1001K  (1)|      1 |00:00:00.13 |    2931 |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ID2"=999)
       filter("ID2"=999)

The cost of the skip scan is now a little over 1,000,000 – corresponding (approximately) to the 1 million index probes that will have to take place. You’ll notice that the number of buffer visits recorded is 2931 for the index operation, though: this is the result of the run-time optimisation that keeps buffers pinned very aggressively for skip scan – you might expect to see a huge number of visits recorded as “buffer is pinned count”, but for some reason that doesn’t happen. The cost is essentially Oracle calculating (with pinned root and branch) the cost of “id1 = {constant} and id2 = 999” and multiplying by ndv(id1).

Footnote:

Ideally, of course, the optimizer ought to work out that an index fast full scan followed by a table access ought to have a lower cost (using multi-block reads rather than walking the index in leaf block order one block at a time, which is what this particular skip scan will have to do) – but that’s not (yet) an acceptable execution plan, though it does now appear as a plan for deleting data.

tl;dr

If you have an index that is very much smaller than the table you may find examples where the optimizer does what appears to be an insanely stupid index skip scan when you were expecting a tablescan or, possibly, some other less efficient index to be used. There is a rationale for this, but such a plan may be much more CPU and read intensive than it really ought to be.

All Parent - Child tables in the database

Tom Kyte - Fri, 2018-05-11 08:06
Hi Tom, Can you please explain the way to get a list of all parent child relation in the database. The list should have the Grand parent as the first item and the last item will be the grand child. For Example, Parent ...
Categories: DBA Blogs

performance tuning - sql slows down after gather stats

Tom Kyte - Fri, 2018-05-11 08:06
Hi , I have faced a situation where sql id plan hash value is changed due stats gather on one of table currently i dont understand why this stats gathering cause chnage in plan and due to which execution time is poor now can you guide...
Categories: DBA Blogs

insert into local table with select from multiple database links in a loop

Tom Kyte - Fri, 2018-05-11 08:06
Hi Tom, i would like to apply the Orignial SQL Statement from Oracle MOS DOC ID 1317265.1 and 1309070.1 for license and healthcheck for all of my database instances. My Goal is to create a centralized repository with informations of my databases. Un...
Categories: DBA Blogs

Impdp not failing even if target table have missing column

Tom Kyte - Fri, 2018-05-11 08:06
My question why import is not failing even the source and target have different table structure <b>Source DB</b> has below table (with additional column COL3 and populated SQL> desc tab1 Name Null? Type ---------------------------...
Categories: DBA Blogs

DataGuard Convention

Michael Dinh - Fri, 2018-05-11 06:58

Good conventions and implementations make life and automation so much simpler, leaving more time for golfing.

I have seen some really poor and some really good implementations, and here is a good one.

Wish I could take credit for it; unfortunately I cannot.

The scripts were created by whoa.

The scripts can be run from the primary or the standby for any instance, provided a profile to source the database environment exists on the host.

Use ORACLE_UNQNAME for DataGuard Environment

====================================================================================================
+++ PRIMARY RACONENODE
====================================================================================================
SQL> show parameter db%name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      test
db_unique_name                       string      test
SQL> 

$ sysresv|tail -1
Oracle Instance alive for sid "test_1"

$ env|grep ORACLE

ORACLE_SID=test_1 (db_name)
ORACLE_UNQNAME=test (db_unique_name)

$ srvctl config database -d $ORACLE_UNQNAME
Database unique name: test
Database name: test
Oracle home: /u01/app/oracle/product/11g/db_1
Oracle user: oracle
Spfile: +FLASH/test/spfiletest.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: test
Database instances:
Disk Groups: FLASH,DATA
Mount point paths:
Services: testsvc
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: test
Candidate servers: host01,host02
Database is administrator managed

====================================================================================================
+++ STANDBY NON-RAC
====================================================================================================
SQL> show parameter db%name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      test
db_unique_name                       string      testdr
SQL> 

$ sysresv|tail -1
Oracle Instance alive for sid "test"

$ env|grep ORACLE
ORACLE_SID=test (db_name)
ORACLE_UNQNAME=testdr (db_unique_name)

$ srvctl config database -d $ORACLE_UNQNAME
Database unique name: testdr
Database name: test
Oracle home: /u01/app/oracle/product/11g/db_1
Oracle user: oracle
Spfile:
Domain:
Start options: open
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Database instance: test
Disk Groups: DATA,FLASH
Services:

====================================================================================================
DATAGUARD BROKER CONFIGURATION
====================================================================================================
DGMGRL> show configuration

Configuration - dg_test (db_name)

  Protection Mode: MaxPerformance
  Databases:
    test   - Primary database (db_unique_name)
    testdr - Physical standby database (db_unique_name)

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database test

Database - test

  Role:            PRIMARY
  Intended State:  TRANSPORT-OFF
  Instance(s):
    test_1
    test_2

Database Status:
SUCCESS

DGMGRL> show database testdr

Database - testdr

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-OFF
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       7 seconds (computed 0 seconds ago)
  Apply Rate:      (unknown)
  Real Time Query: OFF
  Instance(s):
    test

Database Status:
SUCCESS

DGMGRL> exit

====================================================================================================
ls -l dg*.sh
====================================================================================================
-rwxr-xr-x    1 oracle   dba             377 May 08 21:50 dg_lag.sh
-rwxr-x---    1 oracle   dba             445 May 08 20:12 dg_start.sh
-rwxr-xr-x    1 oracle   dba             337 May 08 20:05 dg_status.sh
-rwxr-x---    1 oracle   dba             447 May 08 20:12 dg_stop.sh

====================================================================================================
dg_lag.sh
====================================================================================================
#!/bin/sh -e
check_dg()
{
dgmgrl -echo << END
connect /
show database ${ORACLE_SID} SendQEntries
show database ${ORACLE_UNQNAME} RecvQEntries
show database ${ORACLE_UNQNAME}
exit
END
}
. ~/oracle_staging
check_dg
. ~/oracle_testing
check_dg
exit

====================================================================================================
cat dg_start.sh
====================================================================================================
#!/bin/sh -e
check_dg()
{
dgmgrl -echo << END
connect /
edit database ${ORACLE_SID} set state='TRANSPORT-ON';
edit database ${ORACLE_UNQNAME} set state='APPLY-ON';
show configuration
show database ${ORACLE_SID}
show database ${ORACLE_UNQNAME}
exit
END
}
. ~/oracle_staging
check_dg
. ~/oracle_testing
check_dg
exit

====================================================================================================
dg_status.sh
====================================================================================================
#!/bin/sh -e
check_dg()
{
dgmgrl -echo << END
connect /
show configuration
show database ${ORACLE_SID}
show database ${ORACLE_UNQNAME}
exit
END
}
. ~/oracle_staging
check_dg
. ~/oracle_testing
check_dg
exit

====================================================================================================
dg_stop.sh
====================================================================================================
#!/bin/sh -e
check_dg()
{
dgmgrl -echo << END
connect /
edit database ${ORACLE_SID} set state='TRANSPORT-OFF';
edit database ${ORACLE_UNQNAME} set state='APPLY-OFF';
show configuration
show database ${ORACLE_SID}
show database ${ORACLE_UNQNAME}
exit
END
}
. ~/oracle_staging
check_dg
. ~/oracle_testing
check_dg
exit
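All four scripts rely on sourcing a per-environment profile (~/oracle_staging, ~/oracle_testing) before each check_dg call. Those profiles are not shown in the post; a minimal sketch of what one could contain, following the convention described above (paths and values are assumptions based on the "test" configuration):

# ~/oracle_testing -- sketch of a profile for the "test" Data Guard configuration
export ORACLE_HOME=/u01/app/oracle/product/11g/db_1
export ORACLE_SID=test          # db_name, the primary name in the broker configuration
export ORACLE_UNQNAME=testdr    # db_unique_name, the standby name in the broker configuration
export PATH=$ORACLE_HOME/bin:$PATH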


Using Oracle Ksplice for CVE-2018-8897 and CVE-2018-1087

Wim Coekaerts - Thu, 2018-05-10 17:15
Just the other day I was talking about using Ksplice again, and just after that these 2 new CVEs hit that are pretty significant. So, another quick # uptrack-upgrade and I don't have to worry about these CVEs any more. Sure beats all that rebooting of 'other' Linux OS servers.

[root@vm1-phx opc]# uname -a
Linux vm1-phx 4.1.12-112.16.4.el7uek.x86_64 #2 SMP Mon Mar 12 23:57:12 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@vm1-phx opc]# uptrack-uname -a
Linux vm1-phx 4.1.12-124.14.3.el7uek.x86_64 #2 SMP Mon Apr 30 18:03:45 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@vm1-phx opc]# uptrack-upgrade
The following steps will be taken:
Install [92m63il8] CVE-2018-8897: Denial-of-service in KVM breakpoint handling.
Install [3rt72vtm] CVE-2018-1087: KVM guest breakpoint privilege escalation.
Go ahead [y/N]? y
Installing [92m63il8] CVE-2018-8897: Denial-of-service in KVM breakpoint handling.
Installing [3rt72vtm] CVE-2018-1087: KVM guest breakpoint privilege escalation.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-124.14.5.el7uek

Unique key across tables

Tom Kyte - Thu, 2018-05-10 13:46
Dear tom, How can i enforce unique key across multiple tables. Table1 and Table2 both have ID primary key column. Is it possible to restrict, while inserting and updating into these tables, the union of ID values from two tables are unique. r...
Categories: DBA Blogs

SEQUENCE

Tom Kyte - Thu, 2018-05-10 13:46
hi tom, during one interview i got one question in sequence i.e if there is one sequnce whose max value is 40,but after got nextval 20.without execute the select query 20,000 times and without alter the sequence i want to get 20,000 in the nextva...
Categories: DBA Blogs

How much data is there in your database?

Kubilay Çilkara - Thu, 2018-05-10 13:35
Have you ever wondered how much of your database is actually data?

Sometimes you need to ask this most simple question about your database to figure out what the real size of your data is.

Databases store loads of auxiliary data such as indexes and materialized views and other structures where the original data is repeated. Many times databases repeat the data in indexes and materialized views for the sake of achieving better performance for the applications they serve, and this repetition is legitimate.

But should this repetition be measured and counted as database size?

To make things worse, many databases, due to frequent updates and deletes, create white space in their storage layer over time. This white space is fragmented free space which often cannot be re-used by new data entries. It may even end up being scanned unnecessarily in full table scans, eating up your resources. But most unfortunate of all, it will appear as if it were data in your database size measurements when usually it is not! White space is just void.

There are mechanisms in databases which will automatically remedy the white space and reset and re-organise the storage of data. Here is a link which talks about this at length: https://oracle-base.com/articles/misc/reclaiming-unused-space
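For example, for an Oracle heap table the typical online remedy is a segment shrink; a minimal sketch (my_table is a placeholder, and enabling row movement must be acceptable for your application):

-- allow Oracle to relocate rows between blocks, a prerequisite for SHRINK SPACE
ALTER TABLE my_table ENABLE ROW MOVEMENT;
-- compact the segment and release the freed space, including dependent indexes
ALTER TABLE my_table SHRINK SPACE CASCADE;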

One should be diligent when measuring database sizes: there is plenty of data which is repeated, and some which is just blank void due to fragmentation and white space.

So, how do we measure?

Below is a database-size-measuring SQL script which can be used with Oracle to show the data (excluding the indexes) in tables and partitions. It also estimates the real storage (in the actual_gb column), excluding the white space, by multiplying the number of rows in each table by the average row size. Replace the '<YOURSCHEMA>' bit with the schema you wish to measure.

SELECT
    SUM(actual_gb),
    SUM(segment_gb)
FROM
    (
        SELECT
            s.owner,
            t.table_name,
            s.segment_name,
            s.segment_type,
            t.num_rows,
            t.avg_row_len,
            t.avg_row_len * t.num_rows / 1024 / 1024 / 1024 actual_gb,
            SUM(s.bytes) / 1024 / 1024 / 1024 segment_gb
        FROM
            dba_segments s,
            dba_tables t
        WHERE
            s.owner = '<YOURSCHEMA>'
            AND   t.owner = s.owner -- join on owner too, to avoid name collisions across schemas
            AND   t.table_name = s.segment_name
            AND   segment_type IN (
                 'TABLE'
                ,'TABLE PARTITION'
                ,'TABLE SUBPARTITION'
            )
        GROUP BY
            s.owner,
            t.table_name,
            s.segment_name,
            s.segment_type,
            t.num_rows,
            t.avg_row_len,
            t.avg_row_len * t.num_rows / 1024 / 1024 / 1024
    );
Categories: DBA Blogs

SQL Developer Web on the Oracle Cloud

Yann Neuhaus - Thu, 2018-05-10 12:21

You like SQL Developer because it is easy to install (just unzip a jar) and has a lot of features? Me too. It can be even easier if it is provided as a web application: no installation, and no Java to take all my laptop RAM…
When I say no installation, you will see that there are a few little things to set up here in DBaaS. That will probably be done for you in the managed services (PDBaaS) such as the ‘Express’ and ‘Autonomous’ ones.

[Screenshot: CaptureSDW010]
Be careful, Oracle is a top-down deployment company: it seems that new products are announced first and then people have to work hard to make them available. This means that if, like me, you want to test them immediately, you may encounter some disappointment.
The announcement was there. The documentation was there, mentioning that the Cloud Tooling must be upgraded to 18.2.3. But 18.2.3 was only there a few days later. You can check it from the place where the DBaaS looks for its software: https://storage.us2.oraclecloud.com/v1/dbcsswlibp-usoracle29538/dbaas_patch if you are not sure.

So, before being able to see SQL Developer in the colorful DBaaS landing page (where you can also access APEX, for example), there’s a bit of command-line work to do as root.

Install the latest Cloud Tooling

SQL Developer Web needs to be installed with the latest version of ORDS, which comes with the latest version of the Cloud Tooling, aka dbaastools.rpm.

You need to connect as root, so connect as opc and then sudo:

ssh opc@144.21.89.223
sudo su

Check if there is a new version to install:

dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}'

If something is returned (such as 18.2.3.1.0_180505.1604) you install it:

dbaascli dbpatchm --run -toolsinst -rpmversion=$(dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}')

Actually I got an error, and I had to ^C:

[root@DB18c opc]# dbaascli dbpatchm --run -toolsinst -rpmversion=$(dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}')
DBAAS CLI version 1.0.0
Executing command dbpatchm --run -toolsinst -rpmversion=18.2.3.1.0_180505.1604 -cli
/var/opt/oracle/patch/dbpatchm -toolsinst -rpmversion=18.2.3.1.0_180505.1604 -cli
Use of uninitialized value in concatenation (.) or string at /var/opt/oracle/patch/dbpatchm line 4773.
^C

But finally, it was installed, because the ‘list_tools’ check above now returns nothing.

Enable SQL Developer Web

SQL Developer Web (SDW) runs in ORDS (Oracle REST Data Services) and must be enabled with the ORDS assistant using the enable_schema_for_sdw action.
Here I’ll enable it at CDB level. I provide a password for the SDW schema, which I put in a file:

cat > password.txt <<<'Ach1z0#d'

You may want to secure that better than I do here, as I’m putting the password on the command line. But this is only a test.

Then, still as root, I call the ORDS assistant to install SDW in C##SQLDEVWEB (as I’m installing it in CDB$ROOT, I need a common user name).


/var/opt/oracle/ocde/assistants/ords/ords -ords_action=enable_schema_for_sdw -ords_sdw_schema="C##SQLDEVWEB" -ords_sdw_schema_password=$PWD/password.txt -ords_sdw_schema_enable_dba=true

Here is the output. The last lines are important:

WARNING: Couldn't obtain the "dbname" value from the assistant parameters nor the "$OCDE_DBNAME" environment variable
Starting ORDS
Logfile is /var/opt/oracle/log/ords/ords_2018-05-10_10:44:12.log
Config file is /var/opt/oracle/ocde/assistants/ords/ords.cfg
INFO: Starting environment summary checks...
INFO: Database version : 18000
INFO: Database CDB : yes
INFO: Original DBaaS Tools RPM installed : dbaastools-1.0-1+18.1.4.0.0_180123.1336.x86_64
INFO: Actual DBaaS Tools RPM installed : dbaastools-1.0-1+18.2.3.1.0_180505.1604.x86_64
INFO: DBTools JDK RPM installed : dbtools_jdk-1.8.0-2.74.el6.x86_64
INFO: DBTools JDK RPM "/var/opt/oracle/rpms/dbtools/dbtools_jdk-1.8.0-2.74.el6.x86_64.rpm" MD5 : 48f13bb401677bfc7cf0748eb1a6990d
INFO: DBTools ORDS Standalone RPM installed : dbtools_ords_standalone-18.1.0.11.22.15-1.el6.x86_64
INFO: DBTools ORDS Standalone RPM "/var/opt/oracle/rpms/dbtools/dbtools_ords_standalone-18.1.0.11.22.15-1.el6.x86_64.rpm" MD5 : 480355ac3ce0f357d5741c2c2f688901
INFO: DBTools DBaaS Landing Page RPM installed : dbtools_dbaas_landing_page-2.0.0-1.el6.x86_64
INFO: DBTools DBaaS Landing Page RPM "/var/opt/oracle/rpms/dbtools/dbtools_dbaas_landing_page-2.0.0-1.el6.x86_64.rpm" MD5 : af79e128a56b38de1c3406cfcec966db
INFO: Environment summary completed...
INFO: Action mode is "full"
INFO: Database Role is "PRIMARY"
INFO: Enabling "C##SQLDEVWEB" schema in "CDB$ROOT" container for SQL Developer Web...
 
SQL*Plus: Release 18.0.0.0.0 Production on Thu May 10 10:44:27 2018
Version 18.1.0.0.0
 
Copyright (c) 1982, 2017, Oracle. All rights reserved.
 
 
Connected to:
Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
 
SQL> SQL> SQL> SQL> SQL> SQL> SQL> SQL Developer Web user enable starting...
Enabling "C##SQLDEVWEB" user for SQL Developer Web...
 
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Creating "C##SQLDEVWEB" user
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Call completed.
Commit complete.
PL/SQL procedure successfully completed.
Session altered.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
 
"C##SQLDEVWEB" user enabled successfully. The schema to access SQL Developer Web
is "c_sqldevweb"...
 
PL/SQL procedure successfully completed.
 
SQL Developer Web user enable finished...
Disconnected from Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
INFO: To access SQL Developer Web through DBaaS Landing Page, the schema "c_sqldevweb" needs to be provided...
INFO: "C##SQLDEVWEB" schema in the "CDB$ROOT" container for SQL Developer Web was enabled successfully...
 

The information to remember here is that I will have to provide c_sqldevweb as the schema name (which is the name I provided, but lowercased and with sequences of ‘special’ characters replaced by an underscore). The schema in the URL is lowercased, but it seems that the user name itself still has to be provided in uppercase.

Basically what has been done is quite simple: create the C##SQLDEVWEB user and call ORDS.ENABLE_SCHEMA to enable it and map it to the url.
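For the curious, the enable-and-map step presumably boils down to a call like the following (a sketch based on the documented ORDS PL/SQL API, not the assistant’s actual code; the mapping pattern is the one reported in the log above):

-- sketch only: what an ORDS.ENABLE_SCHEMA call for this setup could look like
BEGIN
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'C##SQLDEVWEB',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'c_sqldevweb',
    p_auto_rest_auth      => TRUE
  );
  COMMIT;
END;
/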

DBCS Landing Page 2.0.0

Now I’m ready to see SQL Developer on the DBCS Landing Page. You access this page by:

  1. enabling HTTPS access from the internet (in Access Rules, enable ora_p2_httpssl)
  2. going to the default web page of your service, in my case https://144.21.89.223

You may have to accept some self-signed certificates.

And here it is with SQL Developer Web in the middle:
[Screenshot: CaptureSDW011]

The above shows PDB1/pdbadmin for the schema, but I installed it at CDB level and the log above tells me that the schema is c_sqldevweb, so I change the schema to c_sqldevweb on the login page. Finally, the direct URL in my example is https://144.21.89.223/ords/c_sqldevweb/_sdw.

I enter C##SQLDEVWEB (uppercase here) as the user and Ach1z0#d as the password.

And here is the Dashboard:
[Screenshot: CaptureSDW012]

Do not worry about the 97% storage used which tells me that SYSTEM is full. My datafiles are autoextensible.

Just go to the SQL Worksheet and check your files:

select tablespace_name,bytes/1024/1024 "MBytes", maxbytes/1024/1024/1024 "MaxGB", autoextensible from dba_data_files

Enable SDW for local PDB user

To enable a PDB local user, I run ORDS assistant with a local user name (PDBADMIN here) and an additional parameter with the PDB name (PDB1 here).


cat > password.txt <<<'Ach1z0#d'
/var/opt/oracle/ocde/assistants/ords/ords -ords_action=enable_schema_for_sdw -ords_sdw_schema=PDBADMIN -ords_sdw_schema_password=$PWD/password.txt -ords_sdw_schema_enable_dba=true -ords_sdw_schema_container=PDB1

Now, I can connect to it with PDB1/pdbadmin as schema name.

Error handling

[Screenshot: CaptureRestCallFail]
If, like me, you are not used to ORDS applications, you may waste some minutes looking at a splash screen, waiting for a result. Always look at the message bar: all actions are REST calls, and the message bar shows whether a call is running, has completed successfully, or has failed. The example on the right shows ‘call failed’. You can click on it to see the REST call and the error.

The article SQL Developer Web on the Oracle Cloud first appeared on Blog dbi services.

Creating a Custom Component in Oracle JET - Gökhan

Introduction Oracle JET (JavaScript Extension Toolkit) is a collection of open source libraries for JavaScript developers to develop client-side applications. It comes with lots of responsive...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Construction and Engineering Enables Earned Value Management to Improve Project Delivery

Oracle Press Releases - Thu, 2018-05-10 07:00
Press Release
Oracle Construction and Engineering Enables Earned Value Management to Improve Project Delivery Enhancements to Oracle’s Primavera Unifier deliver new levels of visibility into project progress and performance

Redwood Shores, Calif.—May 10, 2018

Oracle Construction and Engineering today announced enhancements to Oracle’s Primavera Unifier that enable users to perform earned value management (EVM) to better analyze the progress and performance of projects.

Earned value, a critical dimension of the execution of large and complex projects, provides an integrated view of progress that encompasses cost, scope, and schedule, enabling deeper project analysis and more intelligent decision-making. The EVM methodology entails comparing the amount and cost of what was planned to be completed against what work has actually been completed, and how much that work has cost. Such a comparison enables greater precision in forecasting the final cost of the project and whether it will be completed on, behind, or ahead of schedule.

With evolving government standards and securities laws increasing pressure to adopt stringent cost and earned-value standards, many organizations today recognize the need to incorporate comprehensive cost management and earned-value analysis capabilities into their project portfolio management systems.

The new Primavera Unifier EVM capability allows users to leverage data from Primavera P6 Enterprise Project Portfolio Management to:

  • Import multiple projects from Primavera P6 EPPM into a single Primavera Unifier project activity sheet, creating a consolidated view of the costs and earned value. The new EVM capability in Primavera Unifier incorporates resource spreads and progress information from the Primavera P6 EPPM schedule data.
  • Create rate sheets by resource and role with escalating rates. Rate sheets can also be created at a company or project level and be assigned to a mirror of the Primavera P6 EPPM projects within Primavera Unifier through the activity sheets. This allows different rates to be assigned to each P6 project and even to the P6 project baselines.
  • Pull data from the activity sheet into the EVM module, which will display industry standard graphics in addition to various critical project metrics, including historical trending.
 

“Earned value management is an increasingly important project delivery process that enables organizations to understand key dimensions of project progress and performance. The data that the new EVM capability in Oracle’s Primavera Unifier yields will enable project delivery professionals to improve outcomes through better visibility and smarter decision making,” said Andy Verone, Vice President of Strategy for Oracle Construction and Engineering.

For more information about these new enhancements to Oracle’s Primavera Unifier, register to attend a webinar on EVM and Oracle.

Contact Info
Judi Palmer
Oracle
+1 650 506 0266
judi.palmer@oracle.com
Kristin Reeves
Blanc and Otus
+1 925 787 6744
Kristin.reeves@blancandotus.com
About Oracle Construction and Engineering

Oracle Construction and Engineering helps companies reimagine their businesses. With best-in-class project management solutions, organizations can proactively manage projects, gain complete visibility, improve collaboration, and manage change. Our cloud-based solutions for global project planning and execution can help improve strategy execution, operations, and financial performance. For more information, please visit www.oracle.com/construction-and-engineering.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


New OA Framework 12.2.6 Update 12 Now Available

Steven Chan - Thu, 2018-05-10 06:00

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure.

We periodically release updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.6 is now available:

Oracle Application Framework (FWK) Release 12.2.6 Bundle 12 (Patch 27675364:R12.FWK.C)

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.6 users should apply this patch. Future OAF patches for EBS Release 12.2.6 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.6 bundle patches.

In addition, this latest bundle patch includes fixes for the following issues:

  • The Publish column is displayed on the inline attachment popup window even if the Document Catalog option is disabled.
  • Breadcrumbs are not wrapped, which results in the appearance of a horizontal scroll bar on the page.
  • Users are unable to add inline attachments when creating invoices.


Categories: APPS Blogs

Build Oracle Cloud Infrastructure custom Images with Packer on Oracle Developer Cloud

OTN TechBlog - Wed, 2018-05-09 15:55

In the April release of Oracle Developer Cloud Service we started supporting Docker and Terraform builds as part of the CI & CD pipeline. Terraform lets you provision an Oracle Cloud Infrastructure instance as part of the build pipeline. But what if you want to provision the instance from a custom image instead of the base image? You need a tool like Packer to script the building of images. With Docker build support, we can now build Packer-based images as part of the build pipeline in Oracle Developer Cloud. This blog will help you understand how to use Docker and Packer together on Developer Cloud to create custom images on Oracle Cloud Infrastructure.

About Packer

HashiCorp Packer automates the creation of any type of machine image. It embraces modern configuration management by encouraging you to use automated scripts to install and configure the software within your Packer-made images. Packer brings machine images into the modern age, unlocking untapped potential and opening new opportunities.

You can read more about Packer on https://www.packer.io/

You can find the details of Packer support for Oracle Cloud Infrastructure here.
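
If you want to try Packer locally before wiring it into the pipeline, the core workflow is just two commands (a sketch; build.json is the template defined later in this post):

# Check the template for syntax errors and missing fields (creates nothing)
packer validate build.json

# Provision a temporary instance on OCI, run the provisioners, and save
# the result as a custom image
packer build build.json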

Tools and Platforms Used

Below are the tools and cloud platforms I use for this blog:

Oracle Developer Cloud Service: The DevOps platform used to build the CI & CD pipeline.

Oracle Cloud Infrastructure: The IaaS platform on which we build the image that can then be used for provisioning.

Packer: The tool for creating custom images on the cloud. We will be building images for Oracle Cloud Infrastructure, popularly known as OCI; I will refer to it as OCI from here on.

Packer Scripts

To execute the Packer scripts on Oracle Developer Cloud as part of the build pipeline, you need to upload three files to the Git repository. To do this, first install the Git CLI on your machine and then use the commands below to upload the code:

I was using a Windows machine for the script development, so below is what you need to do on the command line:

Pushing Scripts to Git Repository on Oracle Developer Cloud

Command_prompt:> cd <path to the Packer script folder>
Command_prompt:> git init
Command_prompt:> git add --all
Command_prompt:> git commit -m "<some commit message>"
Command_prompt:> git remote add origin <Developer Cloud Git repository HTTPS URL>
Command_prompt:> git push origin master

Note: Ensure that the Git repository is created and you have the HTTPS URL for it.

Below is the folder structure description for the scripts that I have in the Git Repository on Oracle Developer Cloud Service.

Description of the files:

oci_api_key.pem – This file is required for OCI access. It contains the private key used to sign OCI API requests.

Note: Please refer to the links below for details on the OCI API key. You will also need the corresponding public key, which must be uploaded to your OCI user.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3
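
For reference, the API signing key pair can be generated with OpenSSL along the lines shown below (a sketch based on the documentation linked above; the file names match the ones used in this post):

# Generate the private API signing key referenced as key_file in build.json
openssl genrsa -out oci_api_key.pem 2048

# Derive the public key that gets uploaded to your OCI user
openssl rsa -pubout -in oci_api_key.pem -out oci_api_key_public.pem

# Compute the key fingerprint that goes into the "fingerprint" field
openssl rsa -pubout -outform DER -in oci_api_key.pem | openssl md5 -c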

 

build.json: This is the only configuration file you need for Packer. This JSON file contains all the definitions Packer needs to create an image on Oracle Cloud Infrastructure. I have truncated the OCIDs and fingerprint for security reasons.

 

{ "builders": [ { "user_ocid":"ocid1.user.oc1..aaaaaaaa", "tenancy_ocid": "ocid1.tenancy.oc1..aaaaaaaay", "fingerprint":"29:b1:8b:e4:7a:92:ae", "key_file":"oci_api_key.pem", "availability_domain": "PILZ:PHX-AD-1", "region": "us-phoenix-1", "base_image_ocid": "ocid1.image.oc1.phx.aaaaaaaal", "compartment_ocid": "ocid1.compartment.oc1..aaaaaaaahd", "image_name": "RedisOCI", "shape": "VM.Standard1.1", "ssh_username": "ubuntu", "ssh_password": "welcome1", "subnet_ocid": "ocid1.subnet.oc1.phx.aaaaaaaa", "type": "oracle-oci" } ], "provisioners": [ { "type": "shell", "inline": [ "sleep 30", "sudo apt-get update", "sudo apt-get install -y redis-server" ] } ] }

You can give image_name any value of your choice, and providing ssh_password is recommended but optional. I kept ssh_username as "ubuntu" because my base image OS was Ubuntu. Leave type and shape as they are. The base_image_ocid depends on the region; different regions have different OCIDs for the base images. Please refer to the link below to find the OCID of the image for your region.

https://docs.us-phoenix-1.oraclecloud.com/images/
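
If you have the OCI CLI configured, you can also query base image OCIDs for your region directly instead of browsing the page above (a sketch; the compartment OCID and OS filter are placeholders):

# List platform images in a compartment, filtered by operating system
oci compute image list \
  --compartment-id ocid1.compartment.oc1..aaaaaaaahd \
  --operating-system "Canonical Ubuntu" \
  --output table \
  --query 'data[*].{name:"display-name",ocid:id}'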

Now log in to your OCI console to retrieve the remaining details needed for the build.json definitions: the tenancy_ocid, compartment_ocid, user_ocid, region, and availability_domain can all be found there.

For the subnet_ocid, select the compartment (which is "packerTest" for this blog), click on the Networking tab, and open the VCN you have created. You will see one subnet for each availability domain; copy the OCID of the subnet that corresponds to the availability_domain you have chosen.

Dockerfile: This installs Packer in Docker and runs the Packer command to create a custom image on OCI. It pulls the hashicorp/packer:full image, adds the build.json and oci_api_key.pem files to the Docker image, and then executes the packer build command.

 

FROM hashicorp/packer:full
ADD build.json ./
ADD oci_api_key.pem ./
RUN packer build build.json
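
Because the RUN instruction executes packer build while the Docker image itself is being built, a successful docker build means the custom image has already been created on OCI. Locally, the equivalent of what the Docker build step runs is simply (the image tag is a placeholder):

# Build the Docker image; the packer build inside RUN creates the OCI image
docker build -t packer-oci .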

 

Configuring the Build VM

With our latest release, you have to create a build VM with the Docker software bundle to be able to execute the Packer build, since we are using Docker to install and run Packer.

Click on the user drop-down at the top right of the page and select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give the template a name of your choice, select “Oracle Linux 7” as the platform, and then click the Create button.

Once the template is created, click on the “Configure Software” button.

Select Docker from the list of software bundles available for configuration and click on the + sign to add it to the template. Then click on “Done” to complete the software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VMs you want to create, and select the VM template you just created, which is “DockerTemplate” for this blog.

 

Build Job Configuration

Click on the “+ New Job” button and, in the dialog that pops up, give the build job a name of your choice and select the build template (DockerTemplate) we created earlier from the dropdown.

As part of the build configuration, add Git from the “Add Source Control” dropdown, then select the repository and branch to build from. You may select the checkbox to trigger the build automatically on SCM commits.

In the Builders tab, select Docker Builder -> Docker Build from the Add Builder dropdown. You just need to give the image name in the form that gets added, and you are done with the build job configuration. Click Save to save the build job.

On execution of the build job, the image is created in the defined compartment on OCI.

So now you can easily automate custom image creation on Oracle Cloud Infrastructure using Packer as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud.

Happy Packing!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Infrastructure as Code using Terraform on Oracle Developer Cloud

OTN TechBlog - Wed, 2018-05-09 14:04

With our April release, we have started supporting Terraform builds in Oracle Developer Cloud. This blog will help you understand how to use Terraform in the build pipeline to provision Oracle Cloud Infrastructure as part of the build pipeline automation.

Tools and Platforms Used

Below are the tools and cloud platforms I use for this blog:

Oracle Developer Cloud Service: The DevOps platform used to build the CI & CD pipeline.

Oracle Cloud Infrastructure: The IaaS platform on which we provision the infrastructure.

Terraform: The tool for provisioning infrastructure on the cloud. We will be provisioning Oracle Cloud Infrastructure, popularly known as OCI; I will refer to it as OCI from here on.

 

About Terraform

Terraform is a tool that helps you write, plan, and create your infrastructure safely and efficiently. It can manage existing, popular service providers such as Oracle, as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter, and they let you build, manage, and version your infrastructure as code. To learn more about Terraform go to: https://www.terraform.io/
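
For orientation, the entire Terraform lifecycle that this pipeline automates boils down to a handful of commands (a sketch; the configuration files they operate on are described in the next section):

terraform init      # download the OCI provider and initialize the working directory
terraform plan      # preview the resources that would be created
terraform apply     # create the VCN, subnet, instance, and block volume
terraform destroy   # tear the provisioned infrastructure down again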

 

Terraform Scripts

To execute the Terraform scripts on Oracle Developer Cloud as part of the build pipeline, you need to upload all the scripts to the Git repository. To do this, first install the Git CLI on your machine and then use the commands below to upload the code:

I was using a Windows machine for the script development, so below is what you need to do on the command line:

Pushing Scripts to Git Repository on Oracle Developer Cloud

Command_prompt:> cd <path to the Terraform script folder>
Command_prompt:> git init
Command_prompt:> git add --all
Command_prompt:> git commit -m "<some commit message>"
Command_prompt:> git remote add origin <Developer Cloud Git repository HTTPS URL>
Command_prompt:> git push origin master

Below is the folder structure description for the terraform scripts that I have in the Git Repository on Oracle Developer Cloud Service.

The terraform scripts are inside the exampleTerraform folder and the oci_api_key_public.pem and oci_api_key.pem are the OCI keys.

In the exampleTerraform folder we have all the files with the “tf” extension along with the env-vars file; each is described later in this blog.

The “userdata” folder contains the bootstrap shell script, which is executed when the VM first boots on OCI.

Below is the description of each file in the folder and the snippet:

env-vars: This is the most important file. It sets all the environment variables the Terraform scripts use to access and provision the OCI instance.

### Authentication details
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaa"
export TF_VAR_user_ocid="ocid1.user.oc1..aaaaaaa"
export TF_VAR_fingerprint="29:b1:8b:e4:7a:92:ae:d5"
export TF_VAR_private_key_path="/home/builder/.terraform.d/oci_api_key.pem"

### Region
export TF_VAR_region="us-phoenix-1"

### Compartment ocid
export TF_VAR_compartment_ocid="ocid1.tenancy.oc1..aaaa"

### Public/private keys used on the instance
export TF_VAR_ssh_public_key=$(cat exampleTerraform/id_rsa.pub)
export TF_VAR_ssh_private_key=$(cat exampleTerraform/id_rsa)

Note: all the OCIDs above are truncated for security and brevity.

You can locate these values in the OCI console:

tenancy_ocid and region

compartment_ocid:

user_ocid:

The SSH key variables point to the RSA key files in the Git repository used for the SSH connection, while TF_VAR_private_key_path points to where the OCI API key private .pem file from the Git repository is copied during the build.
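
Because every variable uses the TF_VAR_ prefix, Terraform picks them all up automatically once the file is sourced into the build shell. A quick local sanity check might look like this (a sketch; the grep prints only variable names so no secrets leak into the log):

# Export the TF_VAR_* variables into the current shell
source exampleTerraform/env-vars

# Confirm they are set without echoing their values
env | grep '^TF_VAR_' | cut -d= -f1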

variables.tf: In this file we initialize the Terraform variables and configure the instance image OCID. This is the OCID of a base image available out of the box on OCI; it varies by the region in which your OCI tenancy is provisioned. Use this link to learn more about the OCI base images. Here we also configure the path to the bootstrap file, which resides in the userdata folder and is executed when the OCI machine boots.

variable "tenancy_ocid" {} variable "user_ocid" {} variable "fingerprint" {} variable "private_key_path" {} variable "region" {} variable "compartment_ocid" {} variable "ssh_public_key" {} variable "ssh_private_key" {} # Choose an Availability Domain variable "AD" { default = "1" } variable "InstanceShape" { default = "VM.Standard1.2" } variable "InstanceImageOCID" { type = "map" default = { // Oracle-provided image "Oracle-Linux-7.4-2017.12.18-0" // See https://docs.us-phoenix-1.oraclecloud.com/Content/Resources/Assets/OracleProvidedImageOCIDs.pdf us-phoenix-1 = "ocid1.image.oc1.phx.aaaaaaaa3av7orpsxid6zdpdbreagknmalnt4jge4ixi25cwxx324v6bxt5q" //us-ashburn-1 = "ocid1.image.oc1.iad.aaaaaaaaxrqeombwty6jyqgk3fraczdd63bv66xgfsqka4ktr7c57awr3p5a" //eu-frankfurt-1 = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaayxmzu6n5hsntq4wlffpb4h6qh6z3uskpbm5v3v4egqlqvwicfbyq" } } variable "DBSize" { default = "50" // size in GBs } variable "BootStrapFile" { default = "./userdata/bootstrap" }

compute.tf: The display name, compartment OCID, image, shape, and network parameters are configured here, as shown in the code snippet below.

 

resource "oci_core_instance" "TFInstance" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" compartment_id = "${var.compartment_ocid}" display_name = "TFInstance" image = "${var.InstanceImageOCID[var.region]}" shape = "${var.InstanceShape}" create_vnic_details { subnet_id = "${oci_core_subnet.ExampleSubnet.id}" display_name = "primaryvnic" assign_public_ip = true hostname_label = "tfexampleinstance" }, metadata { ssh_authorized_keys = "${var.ssh_public_key}" } timeouts { create = "60m" } }

network.tf: Here we have the Terraform script for creating the VCN, subnet, internet gateway, and route table. These are vital for creating and accessing the compute instance we provision.

resource "oci_core_virtual_network" "ExampleVCN" { cidr_block = "10.1.0.0/16" compartment_id = "${var.compartment_ocid}" display_name = "TFExampleVCN" dns_label = "tfexamplevcn" } resource "oci_core_subnet" "ExampleSubnet" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" cidr_block = "10.1.20.0/24" display_name = "TFExampleSubnet" dns_label = "tfexamplesubnet" security_list_ids = ["${oci_core_virtual_network.ExampleVCN.default_security_list_id}"] compartment_id = "${var.compartment_ocid}" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" route_table_id = "${oci_core_route_table.ExampleRT.id}" dhcp_options_id = "${oci_core_virtual_network.ExampleVCN.default_dhcp_options_id}" } resource "oci_core_internet_gateway" "ExampleIG" { compartment_id = "${var.compartment_ocid}" display_name = "TFExampleIG" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" } resource "oci_core_route_table" "ExampleRT" { compartment_id = "${var.compartment_ocid}" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" display_name = "TFExampleRouteTable" route_rules { cidr_block = "0.0.0.0/0" network_entity_id = "${oci_core_internet_gateway.ExampleIG.id}" } }

block.tf: The script below defines the block volume to be created and attached to the provisioned compute instance.

resource "oci_core_volume" "TFBlock0" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" compartment_id = "${var.compartment_ocid}" display_name = "TFBlock0" size_in_gbs = "${var.DBSize}" } resource "oci_core_volume_attachment" "TFBlock0Attach" { attachment_type = "iscsi" compartment_id = "${var.compartment_ocid}" instance_id = "${oci_core_instance.TFInstance.id}" volume_id = "${oci_core_volume.TFBlock0.id}" }

provider.tf: In the provider script the OCI details are set.

 

provider "oci" { tenancy_ocid = "${var.tenancy_ocid}" user_ocid = "${var.user_ocid}" fingerprint = "${var.fingerprint}" private_key_path = "${var.private_key_path}" region = "${var.region}" disable_auto_retries = "true" }

datasources.tf: Defines the data sources used in the configuration.

# Gets a list of Availability Domains
data "oci_identity_availability_domains" "ADs" {
    compartment_id = "${var.tenancy_ocid}"
}

# Gets a list of vNIC attachments on the instance
data "oci_core_vnic_attachments" "InstanceVnics" {
    compartment_id = "${var.compartment_ocid}"
    availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
    instance_id = "${oci_core_instance.TFInstance.id}"
}

# Gets the OCID of the first (default) vNIC
data "oci_core_vnic" "InstanceVnic" {
    vnic_id = "${lookup(data.oci_core_vnic_attachments.InstanceVnics.vnic_attachments[0],"vnic_id")}"
}

outputs.tf: Defines the outputs of the configuration: the public and private IPs of the provisioned instance.

# Output the private and public IPs of the instance
output "InstancePrivateIP" {
    value = ["${data.oci_core_vnic.InstanceVnic.private_ip_address}"]
}

output "InstancePublicIP" {
    value = ["${data.oci_core_vnic.InstanceVnic.public_ip_address}"]
}
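
After a successful apply, these outputs can be read back from the saved state at any time, which is handy when scripting the SSH step (a sketch; the output names match outputs.tf above):

# Print individual output values from terraform.tfstate
terraform output InstancePublicIP
terraform output InstancePrivateIP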

remote-exec.tf: Uses a null_resource with a remote-exec provisioner and depends_on to execute commands on the instance once it is available.

resource "null_resource" "remote-exec" { depends_on = ["oci_core_instance.TFInstance","oci_core_volume_attachment.TFBlock0Attach"] provisioner "remote-exec" { connection { agent = false timeout = "30m" host = "${data.oci_core_vnic.InstanceVnic.public_ip_address}" user = "ubuntu" private_key = "${var.ssh_private_key}" } inline = [ "touch ~/IMadeAFile.Right.Here", "sudo iscsiadm -m node -o new -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port}", "sudo iscsiadm -m node -o update -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -n node.startup -v automatic", "echo sudo iscsiadm -m node -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port} -l >> ~/.bashrc" ] } }

Oracle Infrastructure Cloud - Configuration

The main configuration needed on OCI is security-related, so that Terraform is able to authenticate and provision an instance.

Click your username at the top of the Oracle Cloud Infrastructure console and select User Settings from the drop-down that appears.

Now click on the “Add Public Key” button to open a dialog, paste the contents of oci_api_key_public.pem (the public key) into it, and click the Add button.

Note: Please refer to the links below for details on the OCI API key.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3

 

Configuring the Build VM

Click on the user drop-down at the top right of the page and select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give the template a name of your choice and select “Oracle Linux 7” as the platform.

Once the template is created, click on the “Configure Software” button.

Select Terraform from the list of software bundles available for configuration and click on the + sign to add it to the template.

Then click on “Done” to complete the software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VMs you want to create, and select the VM template you just created, which is “terraformTemplate” for this blog.

Build Job Configuration

As part of the build configuration, add Git from the “Add Source Control” dropdown, then select the repository and branch to build from. You may select the checkbox to trigger the build automatically on SCM commits.

Select the Unix Shell Builder from the Add Builder dropdown, then add a script along the lines of the sketch that follows the command descriptions below. The script first configures the environment variables using env-vars, then copies oci_api_key.pem and oci_api_key_public.pem to the directory specified in env-vars, and then executes the Terraform commands to provision the OCI instance. The important commands are terraform init, terraform plan, and terraform apply.

terraform init – The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.

terraform plan – The terraform plan command is used to create an execution plan. 

terraform apply – The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution.
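
Putting the pieces together, here is a minimal sketch of the shell builder script, assuming the repository layout described above (the /home/builder/.terraform.d path comes from env-vars, and -auto-approve is an assumption to keep terraform apply non-interactive in a build job):

#!/bin/bash
# Make the TF_VAR_* variables available to Terraform
source exampleTerraform/env-vars

# Copy the OCI API keys to the location referenced by TF_VAR_private_key_path
mkdir -p /home/builder/.terraform.d
cp oci_api_key.pem oci_api_key_public.pem /home/builder/.terraform.d/

cd exampleTerraform

# Initialize the provider, preview the plan, then provision the instance
terraform init
terraform plan
terraform apply -auto-approve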

After execution, the script prints the IP addresses of the provisioned instance as output and then attempts an SSH connection to the machine using the RSA keys in the exampleTerraform folder.

Configure the Artifact Archiver to archive the terraform.tfstate file, which is generated as part of the build execution. You may set the compression to GZIP or NONE.

Post Build Job Execution

In the build log you will be able to see the private and public IP addresses of the instance provisioned by the Terraform scripts, followed by the SSH connection attempt. If everything goes fine, the build job should complete successfully.

Now you can go to the Oracle Cloud Infrastructure console to see that the instance has been created for you, along with the network and boot volumes defined in the Terraform scripts.

So now you can easily automate provisioning of Oracle Cloud Infrastructure using Terraform as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Why Are You So Quiet?

Shay Shmeltzer - Wed, 2018-05-09 12:19

You might have noticed that this blog didn't post new entries in the past couple of months, and you might have wondered why.

Well, the answer is that I've been publishing content on some other related blogs around the Oracle blogosphere.

If you want to read those have a look at my author page here:

https://blogs.oracle.com/author/shay-shmeltzer

As you'll see we have new versions of both Visual Builder Cloud Service and Developer Cloud Service - both with extensive updates to functionality.

Working with and learning those new versions, and producing some demos, is another reason I wasn't very active here lately.

That being said, now that both are out there - you are going to see more blogs coming from me.

But as mentioned at the top - these might be published in other blogs too.

So to keep up to date you might want to subscribe to this feed:

https://blogs.oracle.com/author/shay-shmeltzer/rss

See you around,

Shay

Categories: Development
