Feed aggregator

Oracle Construction and Engineering Enables Earned Value Management to Improve Project Delivery

Oracle Press Releases - Thu, 2018-05-10 07:00
Press Release
Oracle Construction and Engineering Enables Earned Value Management to Improve Project Delivery Enhancements to Oracle’s Primavera Unifier deliver new levels of visibility into project progress and performance

Redwood Shores, Calif.—May 10, 2018

Oracle Construction and Engineering today announced enhancements to Oracle’s Primavera Unifier that enable users to perform earned value management (EVM) to better analyze the progress and performance of projects.

Earned value, a critical dimension of the execution of large and complex projects, provides an integrated view of progress that encompasses cost, scope, and schedule, enabling deeper project analysis and more intelligent decision-making. The EVM methodology entails comparing the amount and cost of what was planned to be completed against what work has actually been completed, and how much that work has cost. Such a comparison enables greater precision in forecasting the final cost of the project and whether it will be completed on, behind, or ahead of schedule.
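
For readers unfamiliar with the arithmetic, the standard earned value indicators (stated here in their generic textbook form, independent of any particular Primavera implementation) are:

Schedule variance: SV = EV - PV
Cost variance: CV = EV - AC
Schedule performance index: SPI = EV / PV
Cost performance index: CPI = EV / AC

where PV (planned value) is the budgeted cost of the work scheduled to date, EV (earned value) is the budgeted cost of the work actually completed, and AC (actual cost) is what that work has actually cost. SPI or CPI values below 1 indicate a project running behind schedule or over budget, and CPI also feeds estimate-at-completion forecasts such as EAC = BAC / CPI, where BAC is the budget at completion.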

With evolving government standards and securities laws increasing pressure to adopt stringent cost and earned-value standards, many organizations today recognize the need to incorporate comprehensive cost management and earned-value analysis capabilities into their project portfolio management systems.

The new Primavera Unifier EVM capability allows users to leverage data from Primavera P6 Enterprise Project Portfolio Management to:

  • Import multiple projects from Primavera P6 EPPM into a single Primavera Unifier project activity sheet, creating a consolidated view of the costs and earned value. The new EVM capability in Primavera Unifier incorporates resource spreads and progress information from the Primavera P6 EPPM schedule data.
  • Create rate sheets by resource and role with escalating rates. Rate sheets can also be created at a company or project level and be assigned to a mirror of the Primavera P6 EPPM projects within Primavera Unifier through the activity sheets. This allows different rates to be assigned to each P6 project and even to the P6 project baselines.
  • Pull data from the activity sheet into the EVM module, which will display industry standard graphics in addition to various critical project metrics, including historical trending.
 

“Earned value management is an increasingly important project delivery process that enables organizations to understand key dimensions of project progress and performance. The data that the new EVM capability in Oracle’s Primavera Unifier yields will enable project delivery professionals to improve outcomes through better visibility and smarter decision making,” said Andy Verone, Vice President of Strategy for Oracle Construction and Engineering.

For more information about these new enhancements to Oracle’s Primavera Unifier, register to attend a webinar on EVM and Oracle.

Contact Info
Judi Palmer
Oracle
+1 650 506 0266
judi.palmer@oracle.com
Kristin Reeves
Blanc and Otus
+1 925 787 6744
Kristin.reeves@blancandotus.com
About Oracle Construction and Engineering

Oracle Construction and Engineering helps companies reimagine their businesses. With best-in-class project management solutions, organizations can proactively manage projects, gain complete visibility, improve collaboration, and manage change. Our cloud-based solutions for global project planning and execution can help improve strategy execution, operations, and financial performance. For more information, please visit www.oracle.com/construction-and-engineering.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1 650 506 0266

Kristin Reeves

  • +1 925 787 6744

New OA Framework 12.2.6 Update 12 Now Available

Steven Chan - Thu, 2018-05-10 06:00

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure.

We periodically release updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.6 is now available:

Oracle Application Framework (FWK) Release 12.2.6 Bundle 12 (Patch 27675364:R12.FWK.C)

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.6 users should apply this patch. Future OAF patches for EBS Release 12.2.6 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.6 bundle patches.

In addition, this latest bundle patch includes fixes for the following issues:

  • The Publish column is displayed on the inline attachment popup window even if the Document Catalog option is disabled.
  • Breadcrumbs are not wrapped, which results in the appearance of a horizontal scroll bar on the page.
  • Users are unable to add inline attachments when creating invoices.


Categories: APPS Blogs

Build Oracle Cloud Infrastructure custom Images with Packer on Oracle Developer Cloud

OTN TechBlog - Wed, 2018-05-09 15:55

In the April release of Oracle Developer Cloud Service we started supporting Docker and Terraform builds as part of the CI & CD pipeline. Terraform helps you provision Oracle Cloud Infrastructure instances as part of the build pipeline. But what if you want to provision the instance using a custom image instead of the base image? You need a tool like Packer to script your way into building images. With Docker build support, we can now build Packer-based images as part of the build pipeline in Oracle Developer Cloud. This blog will help you understand how you can use Docker and Packer together on Developer Cloud to create custom images on Oracle Cloud Infrastructure.

About Packer

HashiCorp Packer automates the creation of any type of machine image. It embraces modern configuration management by encouraging you to use automated scripts to install and configure the software within your Packer-made images. Packer brings machine images into the modern age, unlocking untapped potential and opening new opportunities.

You can read more about Packer on https://www.packer.io/

You can find the details of Packer support for Oracle Cloud Infrastructure here.

Tools and Platforms Used

Below are the tools and cloud platforms I use for this blog:

Oracle Developer Cloud Service: The DevOps platform to build your CI & CD pipeline.

Oracle Cloud Infrastructure: IaaS platform where we would build the image which can be used for provisioning.

Packer: Tool for creating custom machine images in the cloud. We will be building images for Oracle Cloud Infrastructure, or OCI as it is popularly known. For the rest of this blog I will mostly refer to it as OCI.

Packer Scripts

To execute the Packer scripts on the Oracle Developer Cloud as part of the build pipeline, you need to upload 3 files to the Git repository. To upload the scripts to the Git repository, you will need to first install the Git cli on your machine and then use the below commands to upload the code:

I was using a Windows machine for the script development, so below is what you need to do on the command line:

Pushing Scripts to Git Repository on Oracle Developer Cloud

Command_prompt:> cd <path to the Terraform script folder>

Command_prompt:>git init

Command_prompt:>git add --all

Command_prompt:>git commit -m "<some commit message>"

Command_prompt:>git remote add origin <Developer cloud Git repository HTTPS URL>

Command_prompt:>git push origin master

Note: Ensure that the Git repository is created and you have the HTTPS URL for it.

Below is the folder structure description for the scripts that I have in the Git Repository on Oracle Developer Cloud Service.

Description of the files:

oci_api_key.pem – This is the file required for OCI access. It contains the private key used to sign OCI API requests.

Note: Please refer to the links below for details on creating the OCI API signing key. You will also need the corresponding public key, which must be uploaded to your OCI user.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3

 

build.json: This is the only configuration file that you need for Packer. This JSON file contains all the definitions needed for Packer to create an image on Oracle Cloud Infrastructure. I have truncated the OCIDs and fingerprint for security reasons.

 

{ "builders": [ { "user_ocid":"ocid1.user.oc1..aaaaaaaa", "tenancy_ocid": "ocid1.tenancy.oc1..aaaaaaaay", "fingerprint":"29:b1:8b:e4:7a:92:ae", "key_file":"oci_api_key.pem", "availability_domain": "PILZ:PHX-AD-1", "region": "us-phoenix-1", "base_image_ocid": "ocid1.image.oc1.phx.aaaaaaaal", "compartment_ocid": "ocid1.compartment.oc1..aaaaaaaahd", "image_name": "RedisOCI", "shape": "VM.Standard1.1", "ssh_username": "ubuntu", "ssh_password": "welcome1", "subnet_ocid": "ocid1.subnet.oc1.phx.aaaaaaaa", "type": "oracle-oci" } ], "provisioners": [ { "type": "shell", "inline": [ "sleep 30", "sudo apt-get update", "sudo apt-get install -y redis-server" ] } ] }

You can give a value of your choice for image_name, and providing ssh_password is optional but recommended. I have kept ssh_username as “ubuntu” because my base image OS is Ubuntu. Leave the type and shape as they are. The base_image_ocid depends on the region; different regions have different OCIDs for the base images. Please refer to the link below to find the image OCID for your region.

https://docs.us-phoenix-1.oraclecloud.com/images/

Now login into your OCI console to retrieve some of the details needed for the build.json definitions.

Below screenshot shows where you can retrieve your tenancy_ocid from.

Below screenshot of OCI console shows where you will find the compartment_ocid.

Below screenshot of OCI console shows where you will find the user_ocid.

You can retrieve the region and availability_domain as shown below.

Now select the compartment, which is “packerTest” for this blog, then click on the Networking tab and then the VCN you have created. Here you will see one subnet for each availability domain. Copy the OCID of the subnet for the availability_domain you have chosen.

Dockerfile: This will install Packer in Docker and run the Packer command to create a custom image on OCI. It pulls the packer:full image, adds the build.json and oci_api_key.pem files to the Docker image, and then executes the packer build command.

 

FROM hashicorp/packer:full
ADD build.json ./
ADD oci_api_key.pem ./
RUN packer build build.json

 

Configuring the Build VM

With our latest release, you will have to create a build VM with the Docker software bundle, to be able to execute the build for Packer, as we are using Docker to install and run Packer.

Click on the user drop down on the right hand top of the page. Select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”. And then click the Create button.

On creation of the template click on “Configure Software” button.

Select Docker from the list of software bundles available for configuration and click on the + sign to add it to the template. Then click on “Done” to complete the Software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VMs you want to create, and select the VM Template you just created, which would be “DockerTemplate” for our blog.

 

Build Job Configuration

Click on the “+ New Job” button and, in the dialog which pops up, give the build job a name of your choice and then select the build VM template (DockerTemplate) that we created earlier in the blog from the dropdown.

As part of the build configuration, add Git from the “Add Source Control” dropdown, then select the repository and the branch you want to build from. You may select the checkbox to configure an automatic build trigger on SCM commits.

In the Builders tab, select Docker Builder -> Docker Build from the Add Builder dropdown. You just need to give the image name in the form that gets added, and you are all done with the build job configuration. Now click on Save to save the build job configuration.

On execution of the build job, the image gets created in the OCI instance in the defined compartment as shown in the below screenshot.

So now you can easily automate custom image creation on Oracle Cloud Infrastructure using Packer as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud.

Happy Packing!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Infrastructure as Code using Terraform on Oracle Developer Cloud

OTN TechBlog - Wed, 2018-05-09 14:04

With our April release, we have started supporting Terraform builds in Oracle Developer Cloud. This blog will help you understand how you can use Terraform in the build pipeline to provision Oracle Cloud Infrastructure as part of the build pipeline automation.

Tools and Platforms Used

Below are the tools and cloud platforms I use for this blog:

Oracle Developer Cloud Service: The DevOps platform to build your CI & CD pipeline.

Oracle Cloud Infrastructure: IaaS platform where we would provision the infrastructure for our usage.

Terraform: Tool for provisioning infrastructure in the cloud. We will be provisioning Oracle Cloud Infrastructure, or OCI as it is popularly known. For the rest of this blog I will refer to it as OCI.

 

About Terraform

Terraform is a tool which helps you to write, plan and create your infrastructure safely and efficiently. Terraform can manage existing and popular service providers like Oracle, as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. It helps you to build, manage and version your code. To know more about Terraform go to: https://www.terraform.io/

 

Terraform Scripts

To execute the Terraform scripts on the Oracle Developer Cloud as part of the build pipeline, you need to upload all the scripts to the Git repository. To upload the scripts to the Git repository, you will need to first install the Git cli on your machine and then use the below commands to upload the code:

I was using a Windows machine for the script development, so below is what you need to do on the command line:

Pushing Scripts to Git Repository on Oracle Developer Cloud

Command_prompt:> cd <path to the Terraform script folder>

Command_prompt:>git init

Command_prompt:>git add --all

Command_prompt:>git commit -m "<some commit message>"

Command_prompt:>git remote add origin <Developer cloud Git repository HTTPS URL>

Command_prompt:>git push origin master

Below is the folder structure description for the terraform scripts that I have in the Git Repository on Oracle Developer Cloud Service.

The terraform scripts are inside the exampleTerraform folder and the oci_api_key_public.pem and oci_api_key.pem are the OCI keys.

In the exampleTerraform folder we have all the “tf” extension files along with the env-vars file. You will be able to see the definition of the files later in the blog.

In the “userdata” folder you will have the bootstrap shell script which will be executed when the VM first boots up on OCI.

Below is the description of each file in the folder and the snippet:

env-vars: It is the most important file where we set all the environment variables which will be used by the Terraform scripts for accessing and provisioning the OCI instance.

### Authentication details
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaa"
export TF_VAR_user_ocid="ocid1.user.oc1..aaaaaaa"
export TF_VAR_fingerprint="29:b1:8b:e4:7a:92:ae:d5"
export TF_VAR_private_key_path="/home/builder/.terraform.d/oci_api_key.pem"

### Region
export TF_VAR_region="us-phoenix-1"

### Compartment ocid
export TF_VAR_compartment_ocid="ocid1.tenancy.oc1..aaaa"

### Public/private keys used on the instance
export TF_VAR_ssh_public_key=$(cat exampleTerraform/id_rsa.pub)
export TF_VAR_ssh_private_key=$(cat exampleTerraform/id_rsa)

Note: all the ocids above are truncated for security and brevity.

Below screenshot(s) of the OCI console shows where to locate these OCIDS:

tenancy_ocid and region

compartment_ocid:

user_ocid:

Point to the paths of the RSA key files used for the SSH connection and of the OCI API key private PEM file, all of which are in the Git repository.

variables.tf: In this file we initialize the Terraform variables and configure the instance image OCID. This is the OCID of a base image available out of the box on OCI; it varies based on the region where your OCI instance is provisioned. Use this link to learn more about the OCI base images. Here we also configure the path of the bootstrap file, which resides in the userdata folder and will be executed when the OCI machine boots.

variable "tenancy_ocid" {} variable "user_ocid" {} variable "fingerprint" {} variable "private_key_path" {} variable "region" {} variable "compartment_ocid" {} variable "ssh_public_key" {} variable "ssh_private_key" {} # Choose an Availability Domain variable "AD" { default = "1" } variable "InstanceShape" { default = "VM.Standard1.2" } variable "InstanceImageOCID" { type = "map" default = { // Oracle-provided image "Oracle-Linux-7.4-2017.12.18-0" // See https://docs.us-phoenix-1.oraclecloud.com/Content/Resources/Assets/OracleProvidedImageOCIDs.pdf us-phoenix-1 = "ocid1.image.oc1.phx.aaaaaaaa3av7orpsxid6zdpdbreagknmalnt4jge4ixi25cwxx324v6bxt5q" //us-ashburn-1 = "ocid1.image.oc1.iad.aaaaaaaaxrqeombwty6jyqgk3fraczdd63bv66xgfsqka4ktr7c57awr3p5a" //eu-frankfurt-1 = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaayxmzu6n5hsntq4wlffpb4h6qh6z3uskpbm5v3v4egqlqvwicfbyq" } } variable "DBSize" { default = "50" // size in GBs } variable "BootStrapFile" { default = "./userdata/bootstrap" }

compute.tf: The display name, compartment OCID, image to be used, shape, and network parameters are configured here, as shown in the code snippet below.

 

resource "oci_core_instance" "TFInstance" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" compartment_id = "${var.compartment_ocid}" display_name = "TFInstance" image = "${var.InstanceImageOCID[var.region]}" shape = "${var.InstanceShape}" create_vnic_details { subnet_id = "${oci_core_subnet.ExampleSubnet.id}" display_name = "primaryvnic" assign_public_ip = true hostname_label = "tfexampleinstance" }, metadata { ssh_authorized_keys = "${var.ssh_public_key}" } timeouts { create = "60m" } }

network.tf: Here we have the Terraform script for creating VCN, Subnet, Internet Gateway and Route table. These are vital for the creation and access of the compute instance that we provision.

resource "oci_core_virtual_network" "ExampleVCN" { cidr_block = "10.1.0.0/16" compartment_id = "${var.compartment_ocid}" display_name = "TFExampleVCN" dns_label = "tfexamplevcn" } resource "oci_core_subnet" "ExampleSubnet" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" cidr_block = "10.1.20.0/24" display_name = "TFExampleSubnet" dns_label = "tfexamplesubnet" security_list_ids = ["${oci_core_virtual_network.ExampleVCN.default_security_list_id}"] compartment_id = "${var.compartment_ocid}" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" route_table_id = "${oci_core_route_table.ExampleRT.id}" dhcp_options_id = "${oci_core_virtual_network.ExampleVCN.default_dhcp_options_id}" } resource "oci_core_internet_gateway" "ExampleIG" { compartment_id = "${var.compartment_ocid}" display_name = "TFExampleIG" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" } resource "oci_core_route_table" "ExampleRT" { compartment_id = "${var.compartment_ocid}" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" display_name = "TFExampleRouteTable" route_rules { cidr_block = "0.0.0.0/0" network_entity_id = "${oci_core_internet_gateway.ExampleIG.id}" } }

block.tf: The below script defines the boot volumes for the compute instance getting provisioned.

resource "oci_core_volume" "TFBlock0" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" compartment_id = "${var.compartment_ocid}" display_name = "TFBlock0" size_in_gbs = "${var.DBSize}" } resource "oci_core_volume_attachment" "TFBlock0Attach" { attachment_type = "iscsi" compartment_id = "${var.compartment_ocid}" instance_id = "${oci_core_instance.TFInstance.id}" volume_id = "${oci_core_volume.TFBlock0.id}" }

provider.tf: In the provider script the OCI details are set.

 

provider "oci" { tenancy_ocid = "${var.tenancy_ocid}" user_ocid = "${var.user_ocid}" fingerprint = "${var.fingerprint}" private_key_path = "${var.private_key_path}" region = "${var.region}" disable_auto_retries = "true" }

datasources.tf: Defines the data sources used in the configuration.

# Gets a list of Availability Domains
data "oci_identity_availability_domains" "ADs" {
  compartment_id = "${var.tenancy_ocid}"
}

# Gets a list of vNIC attachments on the instance
data "oci_core_vnic_attachments" "InstanceVnics" {
  compartment_id      = "${var.compartment_ocid}"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  instance_id         = "${oci_core_instance.TFInstance.id}"
}

# Gets the OCID of the first (default) vNIC
data "oci_core_vnic" "InstanceVnic" {
  vnic_id = "${lookup(data.oci_core_vnic_attachments.InstanceVnics.vnic_attachments[0],"vnic_id")}"
}

outputs.tf: Defines the outputs of the configuration, which are the public and private IPs of the provisioned instance.

# Output the private and public IPs of the instance
output "InstancePrivateIP" {
  value = ["${data.oci_core_vnic.InstanceVnic.private_ip_address}"]
}

output "InstancePublicIP" {
  value = ["${data.oci_core_vnic.InstanceVnic.public_ip_address}"]
}

remote-exec.tf: Uses a null_resource with a remote-exec provisioner and depends_on to execute commands on the instance once it has been provisioned.

resource "null_resource" "remote-exec" { depends_on = ["oci_core_instance.TFInstance","oci_core_volume_attachment.TFBlock0Attach"] provisioner "remote-exec" { connection { agent = false timeout = "30m" host = "${data.oci_core_vnic.InstanceVnic.public_ip_address}" user = "ubuntu" private_key = "${var.ssh_private_key}" } inline = [ "touch ~/IMadeAFile.Right.Here", "sudo iscsiadm -m node -o new -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port}", "sudo iscsiadm -m node -o update -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -n node.startup -v automatic", "echo sudo iscsiadm -m node -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port} -l >> ~/.bashrc" ] } }

Oracle Infrastructure Cloud - Configuration

The major configuration that needs to be done on OCI is the security setup that allows Terraform to work and provision an instance.

Click the username at the top of the Oracle Cloud Infrastructure console; in the dropdown that appears, select User Settings.

Now click on the “Add Public Key” button to get the dialog where you can copy and paste the contents of the public key (oci_api_key_public.pem) and click on the Add button.

Note: Please refer to the links below for details on OCI key.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3

 

Configuring the Build VM

Click on the user drop down on the right hand top of the page. Select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”.

On creation of the template click on “Configure Software” button.

Select Terraform from the list of software bundles available for configuration and click on the + sign to add it to the template.

Then click on “Done” to complete the Software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VMs you want to create, and select the VM Template you just created, which would be “terraformTemplate” for our blog.

Build Job Configuration

As part of the build configuration, add Git from the “Add Source Control” dropdown, then select the repository and the branch you want to build from. You may select the checkbox to configure an automatic build trigger on SCM commits.

Select the Unix Shell Builder from the Add Builder dropdown. Then add a shell script that first configures the environment variables by sourcing env-vars, copies oci_api_key.pem and oci_api_key_public.pem to the specified directory, and then executes the Terraform commands to provision the OCI instance. The important commands are terraform init, terraform plan and terraform apply.

terraform init – The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.

terraform plan – The terraform plan command is used to create an execution plan. 

terraform apply – The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution.

After execution, the script prints the IP addresses of the provisioned instance as output and then tries to make an SSH connection to the machine using the RSA keys supplied in the exampleTerraform folder.

Configure the Artifact Archiver to archive the terraform.tfstate file, which gets generated as part of the build execution. You may set the compression to GZIP or NONE.

Post Build Job Execution

In the build log you will be able to see the private and public IP addresses of the instance provisioned by the Terraform scripts, followed by the attempt to make an SSH connection to it. If everything goes fine, the build job should complete successfully.

Now you can go to the Oracle Cloud Infrastructure console to see that the instance has already been created for you, along with the network and boot volumes defined in the Terraform scripts.

So now you can easily automate provisioning of Oracle Cloud Infrastructure using Terraform as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Why Are You So Quiet?

Shay Shmeltzer - Wed, 2018-05-09 12:19

You might have noticed that this blog didn't post new entries in the past couple of months, and you might have wondered why.

Well, the answer is that I've been publishing content on some other related blogs around the Oracle blogosphere.

If you want to read those have a look at my author page here:

https://blogs.oracle.com/author/shay-shmeltzer

As you'll see we have new versions of both Visual Builder Cloud Service and Developer Cloud Service - both with extensive updates to functionality.

Working with and learning those new versions and producing some demos is another reason I wasn't that active here lately.

That being said, now that both are out there - you are going to see more blogs coming from me.

But as mentioned at the top - these might be published in other blogs too.

So to keep up to date you might want to subscribe to this feed:

https://blogs.oracle.com/author/shay-shmeltzer/rss

See you around,

Shay

Categories: Development

Developing for the Oracle Database

Dominic Brooks - Wed, 2018-05-09 09:49

1 Coding Standards
What this document does not claim to offer is a prescriptive guide on the minutiae of coding standards.

Coding standards and naming conventions, particularly in SQL and PL/SQL, are an emotive and divisive subject.

This is largely the fault of a historical absence of mature IDEs and a lack of any rigorous, generally accepted standards as seen with other languages.

Many developers still hand-craft scripts and routines and the various tools available often have differences, subtle or otherwise, in what built-in formatting they can offer.

Good developers can adapt, good standards can adapt.

The most important objectives for coding standards are to:
• Make development faster and debugging easier
• Make other people’s code easier to understand
• Limit bugs

The following expectations support these objectives:
• Where possible use SQL before using PLSQL
• Code will be reasonably formatted and legible, preferably with a consistent style within the module at least.
• It is preferred but not mandated that Oracle keywords be capitalized – e.g. CREATE OR REPLACE PACKAGE, SELECT, UPDATE, DELETE, FROM – and lowercase used for objects and columns, etc
• In SELECT statements, tables should be aliased – this provides a very small benefit to the optimizer when parsing but also prevents bugs particularly in subqueries.
• Procedures, cursors and variables, etc should be properly scoped – e.g. public vs private, global vs local, parameter scope, etc
• Meaningful names will be given for code items
• Reasonable naming might include some sort of prefixed or suffixed indicator of purpose, e.g. k_ for constants, l_ or v_ for local variables, g_ for global variables, p_ for procedure, f_ for function, _pkg for package, i_ for in parameters, o_ for out parameters, io_ for in out parameters (see the sketch after this list).
• Package and procedure level comments should indicate why/when a particular program was changed, but SVN, or other code repositories, are the appropriate mechanism for code control.
• Code comments should be used when they add value.
• Excessive commenting and stating the obvious should be avoided – these are often more effective when refactoring the code concerned into a private routine (procedure/function) which is named appropriately (e.g. function f_format_swift_string).
• CamelCase is not considered appropriate for the database as all code is stored in the db as uppercase.
• Package headers and bodies should be checked into separate files for clarity and to prevent unnecessary recompilation of unchanged code and dependencies (version dependent)
• Performance should be built in and evidence of such documented.
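
As a purely illustrative sketch of several of these conventions (the package, table and column names are invented for the example):

CREATE OR REPLACE PACKAGE payment_pkg AS
  -- constant: k_ prefix
  k_default_currency CONSTANT VARCHAR2(3) := 'GBP';

  -- parameters prefixed by direction: i_ for in, o_ for out
  PROCEDURE p_get_payment_status
  ( i_payment_id IN  NUMBER
  , o_status     OUT VARCHAR2
  );
END payment_pkg;
/

CREATE OR REPLACE PACKAGE BODY payment_pkg AS
  PROCEDURE p_get_payment_status
  ( i_payment_id IN  NUMBER
  , o_status     OUT VARCHAR2
  )
  IS
    l_settled_date payments.settled_date%TYPE;  -- local variable: l_ prefix
  BEGIN
    -- table aliased, keywords capitalized, objects in lowercase
    SELECT p.settled_date
      INTO l_settled_date
      FROM payments p
     WHERE p.payment_id = i_payment_id;

    o_status := CASE WHEN l_settled_date IS NULL THEN 'PENDING' ELSE 'SETTLED' END;
  END p_get_payment_status;
END payment_pkg;
/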

2 Writing Optimal SQL

2.1 Key points
Writing Optimal SQL should be relatively simple but many people struggle particularly when making the transition from an object/attribute language to a set-based language like SQL.

The key tenets of performant database code in Oracle are:
• Think in sets.
• Think about how the database might be able to process and apply logic to a set of data with great efficiency.
• Ask the right question in the best way.
• Know your data.

In support, when thinking about database code and SQL operations:

• If the query needs to be long/big, make it long/big.
• Bulk operations are critical, row-by-row operations are a cardinal performance sin.
• Eliminate data at the earliest opportunity.
• Sort on the smallest possible set – if possible avoid aggregations, sorting and distinct operations on the largest sets of data.
• Use bind variables when you require shareable SQL and when bind variables make sense
• Use literals when literals make sense.
• Use a mix of binds and literals if appropriate.
• Avoid PL/SQL in SQL.
• Be careful of applying functions (TRUNC, etc) to columns in the WHERE clause.
• User-defined functions which are called from SQL and which themselves contain SQL are, almost without exception, unacceptable.
• Never rely on implicit datatype conversion. Use the correct datatypes for parameters and where possible convert parameters NOT columns.

2.2 Thinking in Sets

Crucial.

For further reading on thinking in sets, see:
http://explainextended.com/2009/07/12/double-thinking-in-sql/
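
As a small, hypothetical illustration of the difference (the orders table and its columns are invented for the example), compare a row-by-row loop with the equivalent single set-based statement:

-- Row-by-row: a cursor loop issuing one UPDATE per row (a cardinal performance sin)
BEGIN
  FOR r IN (SELECT o.order_id FROM orders o WHERE o.status = 'NEW')
  LOOP
    UPDATE orders o
       SET o.status = 'PROCESSED'
     WHERE o.order_id = r.order_id;
  END LOOP;
END;
/

-- Set-based: one statement, one pass over the data
UPDATE orders o
   SET o.status = 'PROCESSED'
 WHERE o.status = 'NEW';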

2.3 What’s the question?

When writing SQL, focus on the question being asked by the SQL statement.

If you put the question into words as a comment before a complex SQL statement, then this can often add value to the next developer.

Often the most performant version of a SQL statement is the one which asks the question at hand in the most natural way.

To this end, proper consideration needs to be given to:
• Subqueries – EXISTS / IN / NOT EXISTS / NOT IN
• Set-operators – MINUS, UNION, UNION ALL, INTERSECT
• Use of DISTINCT is often an indication of a wrong SQL statement or poor design
• Common Table Expressions:

Often it can help to use Common Table Expressions (CTEs), aka the WITH clause, for separating the main logic of the query from the subsequent fetching of additional data/attributes, e.g.

WITH main_logic AS
(SELECT …
FROM …
WHERE …)
SELECT ml.*, x.this, y.that, z.something_else
FROM main_logic ml
, …
WHERE …;

2.4 To ANSI or Not To ANSI

Another divisive subject is ANSI SQL vs Oracle Join syntax.

Again, this guide should not seek to be prescriptive on the preference of one over the other.

The bottom line should be that if a developer finds it easier to write a correct and optimal SQL statement using one rather than the other, then that is most important.

There are some SQL statement constructs which are more conveniently written in ANSI – the FULL OUTER JOIN for example.

It is also true that the optimizer always transforms ANSI SQL to the equivalent Oracle syntax and there are some limitations to the optimizer’s other complex query transformations when using ANSI SQL.

And unfortunately there are bugs in both.

2.5 Eliminate Early

Where there are predicates (WHERE clauses) which can significantly reduce the dataset early, check that they are being applied early enough in the execution plan (more information to follow), and consider whether the SQL statement might be rephrased or reconstructed (CTE/WITH) to make sure they are applied at an appropriate stage.

2.6 Sort / Aggregate on the smallest possible dataset

Similar to eliminate early. Sorting and aggregating requires memory and under certain conditions can spill to expensive (unscalable) disk operations.
Wherever possible, do the sort or aggregation on the smallest set of rows (not necessarily applicable to the order by clause of a query).
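
As a hypothetical sketch (tables and columns invented for the example), both statements below return order totals per customer, but the second aggregates the large orders table on its own narrow key first, so the join only has to handle one row per customer rather than every order row:

-- Join first, then aggregate: the GROUP BY carries every joined order row
SELECT c.customer_id, c.customer_name, SUM(o.order_value) AS total_value
FROM   customers c
JOIN   orders    o ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.customer_name;

-- Aggregate the big table first on the smallest possible set, then join
WITH order_totals AS
  (SELECT o.customer_id, SUM(o.order_value) AS total_value
   FROM   orders o
   GROUP BY o.customer_id)
SELECT c.customer_id, c.customer_name, ot.total_value
FROM   customers c
JOIN   order_totals ot ON ot.customer_id = c.customer_id;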

2.7 What’s the big deal with PL/SQL functions called from SQL?

The bottom line is that it’s about performance.

We could get in a whole argument about reusability vs performance but performance eventually wins in the end.

Often the correct mechanism for reusability in the Oracle database is not a function but a view joined to appropriately in the main SQL.

Functions cause a relatively expensive context switch between SQL and PLSQL engines.

In the name of reusability, functions encourage row-by-row operations and discourage thinking in sets.

If the function itself contains SQL, then this SQL will not be part of the read consistency mechanism of the calling statements which can be potentially problematic.

If you absolutely have to, have to, have to use functions in SQL, then think again.

Then if you really, really do, then please look at deterministic functions and consider wrapping the function call in a scalar subquery (select f(…) from dual) to expose the potential benefits of subquery caching for functions called with repeated parameters.
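
For example, a minimal sketch reusing the hypothetical f_format_swift_string function mentioned earlier (the trades table is likewise invented):

-- Direct call: the function may be executed once per row
SELECT t.trade_id, f_format_swift_string(t.counterparty_code) AS swift_string
FROM   trades t;

-- Wrapped in a scalar subquery: repeated counterparty_code values can be
-- served from the scalar subquery cache, reducing the number of calls
SELECT t.trade_id,
       (SELECT f_format_swift_string(t.counterparty_code) FROM dual) AS swift_string
FROM   trades t;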

2.8 What about simple functions like TRUNC in the WHERE clause?

Using functions on columns in the WHERE clause can prevent the optimizer from using an index or from pruning a partition unless a function-based index is in place on the column.

For this reason, it is often best to avoid this sort of construct:
WHERE TRUNC(some_date_column) = TO_DATE('01-NOV-2013','DD-MON-YYYY')

In favour of this:
WHERE some_date_column >= TO_DATE('01-NOV-2013','DD-MON-YYYY')
AND some_date_column < TO_DATE('02-NOV-2013','DD-MON-YYYY')

2.9 Using the correct datatype, be explicit

Performance problems related to using the incorrect datatypes are common.

The optimizer will implicitly add functions to make sure the datatypes on both sides of the predicate match.

Always convert date-like parameters to DATEs where the column datatype is also DATE.

Never rely on implicit datatype conversion.
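
A short, hypothetical illustration (account_ref is a VARCHAR2 column, booking_date a DATE column; the tables are invented):

-- Implicit conversion: comparing a VARCHAR2 column to a NUMBER forces
-- TO_NUMBER(account_ref), which prevents use of an index on account_ref
SELECT * FROM accounts WHERE account_ref = 12345;

-- Explicit and correct: convert the parameter, not the column
SELECT * FROM accounts WHERE account_ref = TO_CHAR(12345);

-- Likewise, convert date-like parameters to DATEs up front
SELECT * FROM bookings
WHERE  booking_date >= TO_DATE('01-NOV-2013', 'DD-MON-YYYY');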

3 Execution Plans & Metrics – What we want, why we want it and how to get it
We have flown through some aspects of how to have a better chance of writing an optimal SQL statement.

3.1 How can we tell if it’s optimal?

Run it.

Run it twice to rule out the effect of uncontrollable factors like OS caching, SAN caching, etc.

Run it on representative data.

Run it on current volumes.

Run it on expected future volumes.

Then what?

In order to validate that our SQL statement is likely to have effective performance, what we want is the actual execution plan used and preferably the actual rowsource metrics.

3.2 How?

3.2.1 Serial (i.e. non-parallel) Execution Plans

In general, the following is usually a good approach across a variety of tools – SQL Developer, Toad and SQL*Plus for example:
Alter session set statistics_level = all;
--bind setup
Var bind1 number
Exec :bind1 := …;

--run target sql statement
select ….
-- fetch execution plan and metrics
select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

Then run as a script.

Firstly, for getting actual execution metrics we can do one of two things prior to running the SQL statement concerned:
1. Add the /*+ gather_plan_statistics */ hint to the SQL or
2. In the same session, run alter session set statistics_level = all;

Then run the target SQL and immediately afterwards run this select:

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

This is a convenient wrapper to get the execution plan and metrics from V$SQL_PLAN.

The first parameter is SQL_ID and by passing in NULL, we default to the previous SQL_ID run in this session.

The second parameter is CHILD_CURSOR_NO and this should be the previous child_id for the previous sql_id.
The third parameter is the FORMAT, and the 'ALLSTATS LAST' format says to get all statistics for the last execution.

If this works, it should produce output which is examined in more detail in section 4 and Appendix A.

3.2.2 What if this doesn’t work?

If you find you don’t have privilege to run these commands – you need access to V$SESSION for example to use DBMS_XPLAN.DISPLAY_CURSOR – then you need privilege. There is no reason for privilege not to be given.

Otherwise the approach above is effective 90% of the time.

For parallel execution plans, see section 3.2.3 below.

However, in addition and specifically to SQL Developer, the tool runs some recursive internal commands which mean that, by the time our DBMS_XPLAN statement runs, the previous SQL ID is no longer our target SQL statement.

There is one such example in SQL Developer 3 related to timestamp columns which affects the test script when running everything as a script (F5). In this case, there are two alternatives. Firstly, run the individual commands in SQL Developer as Run Statement (F9 / Ctrl + Enter). Alternatively, just comment out the timestamp columns in the SELECT part of the statement, for the purposes of this exercise.

Furthermore, in SQL Developer 4 there are further changes to recursive operations which seem to affect some statements.

In all such cases, if the output of the DBMS_XPLAN.DISPLAY_CURSOR is not the execution plan of the statement being profiled then the approach should be to identify the SQL statement in the shared pool (Look for matching SQL_TEXT in V$SQL) and plug the specific SQL_ID into the first argument of the DBMS_XPLAN call (no need to rerun the target SQL statement).
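
A sketch of that approach (the comment text used to find the statement and the SQL_ID value are placeholders):

-- Find the target statement in the shared pool
SELECT s.sql_id, s.child_number, s.sql_text
FROM   v$sql s
WHERE  s.sql_text LIKE '%my target statement text%';

-- Plug the SQL_ID and child number into DBMS_XPLAN directly
SELECT *
FROM   table(dbms_xplan.display_cursor('<sql_id>', 0, 'ALLSTATS LAST'));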

3.2.3 Parallel Execution Plans

For parallel execution plans, the approach of using DBMS_XPLAN.DISPLAY_CURSOR with the format of ‘ALLSTATS LAST’ is not appropriate because it fetches the execution metrics from the last execution of the statement – which is the Query Coordinator (QC) and does not include the metrics of the parallel slaves.

A better approach for parallel execution plans is to use real time sql monitoring and the easiest way to do this is to run the following and capture the output report:
select dbms_sqltune.report_sql_monitor('<sql_id>') from dual;

This requires you to identify the SQL_ID of the target SQL statement from V$SQL (seek matching text in the SQL_TEXT/SQL_FULLTEXT columns).

It may also require you to add the /*+ monitor */ hint to your SQL statement, as by default monitoring only kicks in for executions which last longer than a default number of seconds (5) and for statements which are under a certain length.

3.2.4 When all else fails

Fall back on SQL Trace.
Alter session set sql_trace = true;
SELECT….
Alter session set sql_trace = false;

This produces a trace file on the database server and the trace file name can be identified by:
select * from v$diag_info where name = 'Default Trace File';

This can be run through TKPROF to get the execution metrics but TKPROF can also lie about the execution plan so this should be double checked in V$SQL_PLAN or by using DBMS_XPLAN.

In rare circumstances and if all the above alternatives are unavailable or impractical for some reason, only then might EXPLAIN PLAN or AUTOTRACE be acceptable.

For example, in any modern version of Oracle, you can do the following
explain plan for select…;
select * from table(dbms_xplan.display);

Now this is not useless but, for numerous reasons, EXPLAIN PLAN cannot be trusted and is not sufficient for our purposes.

AUTOTRACE also does not tell the truth (because it itself relies on EXPLAIN PLAN).
EXPLAIN PLAN is an estimate of what the execution plan for a SQL statement will be.

It doesn’t peek at binds.

It assumes all binds are VARCHAR2.

3.3 Why are we doing this?

We want this information documented as part of the change, attached to the Jira ticket or whatever tool is used for change management, and included in any code review.

The most effective mechanism for tuning SQL is “Tuning by Cardinality Feedback”: http://www.centrexcc.com/Tuning%20by%20Cardinality%20Feedback.pdf

This follows the principle that:
“if an access plan is not optimal it is because the cardinality estimate for one or more of the row sources is grossly incorrect”
and
“the cbo (cost-based optimizer) does an excellent job of finding the best access plan for a given sql provided it is able to accurately estimate the cardinalities of the row sources in the plan”

By gathering the actual execution plan and the actual execution metrics, we can show whether the optimizer was accurate in its estimations and if it was accurate, then, from a developer perspective and for the purposes of most code reviews, there is a good likelihood that the SQL is good enough for the optimizer to do a good job with.

4 Interpretation of Execution Plans and Execution Metrics

If we’ve been lucky we should have the actual execution plan and the executions metrics.

4.1 What are we looking for? How do we interpret it?

Providing a thorough guide on how to interpret most variations of execution plans is beyond the scope of this guide, although we will provide a basic guide in Appendix A.

Essentially, what we want to see in the execution metrics is that the optimizer’s estimates are broadly accurate.

How accurate?

In general, we shouldn’t necessarily be overly concerned until we get to a factor of 10x or even more.

Estimated 100K rows, Actual 1M rows – probably not too bothered.

Estimate of 1 row, Actual 10000 rows – likely to be significant inefficiencies in either join order, join mechanism and/or access path.

And when we are looking at estimates vs actual, we need to consider the “Starts” so what we are looking for is that “Starts * E-rows” is in the right ballpark compared to “A-rows”. For more information, please see Appendix A.

Here are a couple of examples:
SQL_ID fst03j2p1czpb, child number 0
————————————-
select * from t1 , t2 where t1.col1 = t2.col1
Plan hash value: 1838229974

————————————————————————————-
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | | 10000 |00:00:00.03 | 1172 |
|* 1 | HASH JOIN | | 1 | 10000 | 10000 |00:00:00.03 | 1172 |
| 2 | TABLE ACCESS FULL| T1 | 1 | 10000 | 10000 |00:00:00.01 | 576 |
| 3 | TABLE ACCESS FULL| T2 | 1 | 10000 | 10000 |00:00:00.01 | 596 |
————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

1 - access("T1"."COL1"="T2"."COL1")

In the above, the estimates are accurate so there is a very good chance that this is a good plan.

Here’s another, this time not so good because the estimate of rows in T1 was 1 whereas the actual was 10000.

This led the optimizer to choose an index access path over a full table scan and a NESTED LOOP rather than a HASH JOIN.
SQL_ID 9quvuvkf8tzwj, child number 0
————————————-
select /*+ cardinality(t1 1) */ * from t1 , t2 where t1.col1 =
t2.col1

Plan hash value: 931243032

——————————————————————————————————–
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
——————————————————————————————————–
| 0 | SELECT STATEMENT | | 1 | | 10000 |00:00:00.04 | 12640 |
| 1 | NESTED LOOPS | | 1 | | 10000 |00:00:00.04 | 12640 |
| 2 | NESTED LOOPS | | 1 | 1 | 10000 |00:00:00.02 | 2640 |
| 3 | TABLE ACCESS FULL | T1 | 1 | 1 | 10000 |00:00:00.01 | 596 |
|* 4 | INDEX UNIQUE SCAN | SYS_C00446778 | 10000 | 1 | 10000 |00:00:00.01 | 2044 |
| 5 | TABLE ACCESS BY INDEX ROWID| T2 | 10000 | 1 | 10000 |00:00:00.02 | 10000 |
——————————————————————————————————–

Predicate Information (identified by operation id):
—————————————————

4 - access("T1"."COL1"="T2"."COL1")

4.2 Do I need to worry about things like NESTED LOOPS vs HASH JOINS?

For the purposes of this exercise, no, but the more knowledge the better.

Accuracy of estimates should be sufficient.

The remainder of the information should be attached to the change tool for review.

5 Appendix A: Basic guide to reading an execution plan

Using the following execution plan from a two table join:
SQL_ID 9quvuvkf8tzwj, child number 0
————————————-
select /*+ cardinality(t1 1) */ * from t1 , t2 where t1.col1 =
t2.col1

Plan hash value: 931243032

—————————————————————————————-
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time |
—————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | | 10000 |00:00:00.04 |
| 1 | NESTED LOOPS | | 1 | | 10000 |00:00:00.04 |
| 2 | NESTED LOOPS | | 1 | 1 | 10000 |00:00:00.02 |
| 3 | TABLE ACCESS FULL | T1 | 1 | 1 | 10000 |00:00:00.01 |
|* 4 | INDEX UNIQUE SCAN | SYS_C.. | 10000 | 1 | 10000 |00:00:00.01 |
| 5 | TABLE ACCESS BY INDEX ROWID| T2 | 10000 | 1 | 10000 |00:00:00.02 |
—————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

4 - access("T1"."COL1"="T2"."COL1")

There are four key elements:
• The SQL statement
• The SQL ID – a hash value of the sql statement, usually consistent between databases and even across versions
• The execution plan
• The predicate section – not to be overlooked. Can highlight issues with implicit functions and datatype conversions amongst other things

For the execution plan itself there are a number of elements to be concerned with:
• Optimizer – all modern versions of Oracle use the Cost-based Optimizer (CBO). This uses statistics and cost calculations to choose a best-cost plan for execution.

• Cost – Cost is an estimated indicator of time which the optimizer uses to compare execution plan possibilities, usually choosing the lowest cost plan. However, to all intents and purposes, developers should ignore it.

• Cardinality – An estimate of the number of rows for a particular rowsource, for a particular join, etc. Exposed in the execution plan as E-Rows for estimates and A-Rows for actuals. When comparing E-Rows to A-Rows it is important to take Starts into account, i.e. to compare “Starts * E-Rows” to A-Rows. The Nested Loop operations for example will have multiple starts for the inner/probed rowsource.

• Parent:child operations – An execution plan is generally a succession of parent:child operations – follow and match the indentation. A join mechanism should have two children.

• Join mechanism – A join mechanism joins two rowsources. There are a variety of mechanisms but in general there are two main methods depending on the cardinalities:

o NESTED LOOP – Essentially a FOR LOOP – For each row in the outer/driving rowsource, probe the inner/probed rowsource. Generally used for low cardinality rowsources.

o HASH JOIN – Hash all the outer/driving rowsource based on the join key(s) then hash all the inner rowsource. Generally used for high cardinality rowsources. If the cardinality estimate is too low, work area sizes used for hashing may be too small and spill to temp space on disk – slow/unscalable

• Join order – Depending on the cardinalities, the optimizer can choose to join T1 to T2 or T2 to T1. The number of permutations for join order is N! where N is the number of tables being joined. The optimizer will limit itself to a maximum number of permutations to evaluate.

• Access path – how the data is fetched from the table, i.e. by index via various different index access mechanisms or by tablescan, etc.

• Buffers – A measure of logical IO. See below.

5.1 What happens first in the execution plan?
—————————————————————————————-
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time |
—————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | | 10000 |00:00:00.04 |
| 1 | NESTED LOOPS | | 1 | | 10000 |00:00:00.04 |
| 2 | NESTED LOOPS | | 1 | 1 | 10000 |00:00:00.02 |
| 3 | TABLE ACCESS FULL | T1 | 1 | 1 | 10000 |00:00:00.01 |
|* 4 | INDEX UNIQUE SCAN | SYS_C.. | 10000 | 1 | 10000 |00:00:00.01 |
| 5 | TABLE ACCESS BY INDEX ROWID| T2 | 10000 | 1 | 10000 |00:00:00.02 |
—————————————————————————————-
Predicate Information (identified by operation id):
—————————————————
4 - access("T1"."COL1"="T2"."COL1")

There are a couple of exceptions but in general the execution plan starts at the first operation without a child.

So, following the indentation, the first operation without a child is:
| 3 | TABLE ACCESS FULL | T1 | 1 | 1 | 10000 |00:00:00.01 |

This is considered to be the outer/driving rowsource of the parent operation at:
| 2 | NESTED LOOPS | | 1 | 1 | 10000 |00:00:00.02 |

For each row in this rowsource, we probe the inner rowsource:
|* 4 | INDEX UNIQUE SCAN | SYS_C.. | 10000 | 1 | 10000 |00:00:00.01 |

Which is actually an index lookup on the primary key using the predicate:
4 - access("T1"."COL1"="T2"."COL1")

The data produced by this join is then used in the parent operation:
| 1 | NESTED LOOPS | | 1 | | 10000 |00:00:00.04 |

Which uses the rowids from the unique index/primary key for table T2 to get the actual table data from T2:
| 5 | TABLE ACCESS BY INDEX ROWID| T2 | 10000 | 1 | 10000 |00:00:00.02 |

5.2 Red flags?

• Row estimates of 1:
o The minimum row estimate is 1 and in some cases this actually means 0.
o If this is not a primary key access and there really aren’t just 0 or 1 rows, then are there any statistics for this object?
o Row estimates of 0/1 where the actual number of rows is significantly more than 1 can cause significant performance problems

• MERGE JOIN CARTESIAN + BUFFER SORT – particularly where the estimate is 1. Can be particularly detrimental if the actual rows are greater than 1. Rarely a good operation but can be symptomatic of a missing join.

• Implicit datatype conversions

• Nested loop operations where the inner/probed table/rowsource is a FULL segment scan.

• VIEW operations – symptomatic of a non-mergeable view which may or may not be a problem

• FILTER operations where the row-by-row operation is significant

5.3 Is there anything else to look out for?

Yes, that buffers column is a measure of logical IO.

When comparing different ways of doing things, when tuning SQL, one of the key measures that should be targeted is a reduction in logical IO.

If one approach uses significantly less logical IO compared to another approach then that is significant. The statement with the lower IO is likely to be better, is more likely to benefit from having more of the data it’s interested in cached and is less likely to impact other queries and the caching of other data.

There should probably be a rule of thumb about the ratio of logical IO to rows fetched. The difficulty is picking the right indicators.

If a query selects 100 rows from 100 million buffer gets and those all-important estimates are reasonably accurate, this should be a strong signal that perhaps the indexes are not optimal for that particular query.

As a rule of thumb, a ratio of a couple of consistent gets or less per row is damn good. 100,000s or millions may well be an indicator of significant inefficiencies.

But, as always, it depends.

It also significantly depends on whether the query itself is fast enough for the business requirement and whether it has the potential to impact other users of the database.

Furthermore, one lone query is unlikely to justify a new index but that is beyond the scope of this guide.

5.4 Further Reading

A 10053 Trace provides a very detailed walkthrough of the optimizer’s process of coming up with the execution plan. Not for the faint-hearted.
http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-explain-the-explain-plan-052011-393674.pdf
Jonathan Lewis: Cost-based Oracle Fundamentals: http://www.apress.com/9781590596364

6 Appendix B Interacting with the database
The Oracle database is a very powerful piece of software.

It’s also likely to be one of the most expensive pieces of software in an application’s tech stack.

The keys to a performant Oracle database also differ significantly from other vendors.

How you best approach something in Sybase or SQL Server is not necessarily how you should do something in Oracle.

One classic example is the use of temporary tables.

Developers should know how to get the best out of a particular database.

To treat it like a bit bucket or a “slow, dumb backup datastore” is to waste money and resources.

6.1 Vendor-specific Database features

Application developers should not be overly wary of using a feature particular to the Oracle database. Some tools can make it difficult to use vendor-specific features or optimizations but an investment in time and effort to do so can reap significant performance benefits.

Whilst this attitude might be relevant for third-party product developers who have to write software that can be installed on different database vendors, this is largely not true of enterprises writing internal software systems.

It is unlikely that the Oracle database on a particular system will be replaced by another vendor database.

It is far more likely that a Java component interacting with the database will eventually be replaced by a C# component or that the usage of the Oracle database will be deprecated in favour of caching and NOSQL technologies, so if you’re going to use SQL, use Oracle-specific features where they offer benefit.

6.2 Round-tripping

The default fetch size for JDBC is 10 rows, and the SQL*Plus default arraysize is similarly small. These defaults are almost never appropriate for general usage as many SQL statements can be expected to fetch significantly more than 10 rows, and therefore significant gains can be made by increasing this setting beyond the default.

The issue is not only about round trips across the network; it is also related to the logical IO that a query needs to do. If you ask for just 10 rows, the database will do all the IO it needs to fetch the first 10 rows. When you ask for the next 10 rows, the server process on the database may well have to do a logical read of some of the same blocks as the previous fetch, which can lead to significant inefficiencies compared to a larger fetch size.
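
A simple way to see the effect is from SQL*Plus, comparing the consistent gets reported for the same statement at different array sizes. This is only a sketch; big_table is a made-up name and the figures will vary from system to system:

set autotrace traceonly statistics

set arraysize 15
select * from big_table;        -- note the "consistent gets" figure

set arraysize 500
select * from big_table;        -- typically noticeably fewer consistent gets

set autotrace off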

6.3 Abstraction & Interfacing with the database

Abstraction is a principle that is put on a pedestal in the middle tier and yet often abandoned when interacting with the database.

Put simply, embedding SQL in Java code introduces unnecessary dependencies on the database model and limits the ability to make subtle changes to the SQL or to the model without changing the application server code and doing an app server release.

Views, procedures and packages can all provide an interface to the database and the data model.
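
A minimal sketch of the idea (all of the object and column names here are invented for illustration): the application sees a view and a package rather than the underlying tables, so the tables can be reorganised or the SQL tuned without an application release.

create or replace view customer_summary_v as
select  c.customer_id, c.name, count(o.order_id) open_orders
from    customers c
left join orders o
        on  o.customer_id = c.customer_id
        and o.status = 'OPEN'
group by c.customer_id, c.name;

create or replace package customer_api as
        procedure close_order(p_order_id in number);
end customer_api;
/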

6.4 It’s all about the data

Interacting with data appropriately, regardless of database vendor, is crucial.

Think in Sets.

Also consider the success of Engineered Systems like Oracle Exadata.

One of the things that Exadata focuses on, for example, is being able to eliminate redundant data as early as possible.

This means that the logic in the storage cells can eliminate the data before it even gets to the traditional database memory, before it goes anywhere near the network, long before it goes up to the application.

And it can do this with significant degrees of parallelism, usually with far more efficiency than similar processing in application threads.

Why is this relevant?

Eliminate early.

Let the database do the work it was designed to do.

Applications should let the database give them the smallest set of data that they need and should not bring excessive amounts of data into the middle tier for elimination and aggregation there.
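
As a small illustration of the difference (table and column names invented): the set-based statement below returns one row per region, instead of the application pulling back every order row and totalling them in the middle tier.

select  region, sum(order_total) region_total
from    orders
where   order_date >= date '2018-01-01'
group by region;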

Volumes of data are exploding. The best chances of scaling efficiently to deal with these volumes of data are to interact with the data appropriately.

Helzberg Diamonds Empowers Associates to Create Meaningful Customer Experiences with Oracle Retail

Oracle Press Releases - Wed, 2018-05-09 07:00
Press Release
Helzberg Diamonds Empowers Associates to Create Meaningful Customer Experiences with Oracle Retail Holistic Infrastructure Upgrade Provides Store Associates with New Tools and Insights

Redwood Shores, Calif.—May 9, 2018

Today Oracle announced that Helzberg Diamonds, a member of the Berkshire Hathaway group of companies, has completed a significant upgrade to its Oracle Retail Xstore Point-of-Service solution while implementing Oracle Retail Order Broker and Oracle Retail Customer Engagement with the support of BTM Global. The upgrade has modernized Helzberg’s infrastructure and allows them to shift their focus to strategies that better support store operations and deeper customer interactions. The latest version of Oracle Retail Xstore Point-of-Service boasts an improved integration to Oracle Retail Order Broker and Oracle Retail Customer Engagement which together support revenue growth across more than 200 Helzberg stores throughout the United States.

“The Oracle solutions enable our store associates to consult with shoppers throughout the store. The vision is to make it easier to complete transactions and allow customers to interact with us on their terms,” said Jeff Rohr, Chief Information Officer, Helzberg Diamonds. “The Oracle Retail Xstore Point-of-Service interface is intuitive for store associates helping to provide a seamless sales experience for customers, which is a priority for us.”

Helzberg chose BTM Global, a Gold-level member of the Oracle Partner Network and premier provider of retail system integration and development services. BTM Global provides transition, life-cycle support, and payment integration and upgrade services. Additionally, Helzberg continues to integrate Oracle Retail technology with third party solutions to enhance a unified shopping experience across their brand.

“Oracle Retail Xstore serves as an Omnichannel platform that empowers retailers, like Helzberg, to grow with a better understanding of the customers they serve with a key eye toward inventory visibility,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail. “Oracle values the long-standing partnership between Helzberg and MICROS. We look forward to building upon that relationship to continue to deliver innovation.”

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Helzberg Diamonds

Helzberg Diamonds®, a retail and online jewelry store focused on customer service, was founded in 1915 and has more than 200 stores nationwide, featuring a wide selection of fine jewelry, including diamond engagement rings and wedding rings, precious gems and watches. Helzberg Diamonds takes pride in its history of offering exceptional value, exclusive designs and timeless jewelry. Helzberg Diamonds is based in North Kansas City, Mo., and is a subsidiary of Berkshire Hathaway Inc. (NYSE symbols BRK/A and BRK/B). For the locations nearest you, call 1-800-HELZBERG (800-435-9237) or visit helzberg.com.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.


Oracle’s Moat Selected by LinkedIn as Video Viewability Partner

Oracle Press Releases - Wed, 2018-05-09 06:00
Press Release
Oracle’s Moat Selected by LinkedIn as Video Viewability Partner Moat to Offer Video Viewability and Attention Measurement Across the Major US Digital Platforms

Redwood Shores, Calif.—May 9, 2018

Oracle’s Moat, a SaaS analytics measurement provider for marketers and publishers that is part of the Oracle Data Cloud, today announced it has been selected to measure viewability and attention signals on the LinkedIn platform. The collaboration will focus on LinkedIn’s video for Sponsored Content. With the integration, Moat will be the first provider to offer video viewability and attention measurement across the major US digital platforms.

“Moat measurement on LinkedIn marks a significant moment for marketers, who will have independent viewability and attention measurement in one place, on Moat, across all major US digital platforms,” said Jonah Goodhart, SVP of Oracle Data Cloud and Co-Founder of Moat. “We are proud to collaborate with LinkedIn as we share a positive vision of the future of digital marketing.”

“In today’s video advertising landscape, marketers require independent viewability insights to truly understand how their campaigns perform,” said Tomer Cohen, VP of Product, LinkedIn Marketing Solutions. “Our work with Moat will enable marketers on LinkedIn to leverage Moat’s industry-leading digital measurement platform, so they can make more informed advertising decisions.”

The collaboration will equip marketers with a deeper understanding of their campaign performance and results garnered from LinkedIn video inventory—and intends to make these services available to customers later this year. This reinforces Moat’s vision of providing widespread trusted and independent measurement in order to empower brands to make smarter media and creative decisions.

Contact Info
Simon Jones
Oracle
650.506.0325
s.jones@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Oracle Ksplice and Oracle Linux reminder

Wim Coekaerts - Tue, 2018-05-08 22:37

For those of you that keep up with my blog and twitter musings... you know how much I love Ksplice. This morning I was connecting to one of my cloud VMs and did an uptrack-upgrade as it had been a while and I hadn't turned on automatic ksplice updates on this node. I was pleasantly reminded of the awesomeness that is Ksplice. 

Here’s the output, a kernel from 2-MAR-2018, no reboot, just a quick # uptrack-upgrade and look at all the stuff that I am now protected against. A few seconds, no impact on apps, done. Now I know that there are some other projects out there that talk about being able to patch something here or there. But nothing comes even close to this. Not in terms of service, not in terms of patch complexity, not in terms of ease of use, etc.

Remember, everyone using Oracle Linux in Oracle Cloud has full use of Ksplice included at no extra cost and with no extra configuration; every Oracle Linux instance is configured out of the box to use this. 

No other cloud provider has this service for their OSs. No other OS vendor provides this as a service for their own product at this level of sophistication, and certainly not in any cloud environment. Best place to run Linux, best place to run Oracle Linux, all integrated and inclusive... in Oracle Cloud Infrastructure. Yes, this is/sounds like marketing, but fact is, it works and it's there.

[root@vm1-phx opc]# uname -a
Linux vm1-phx 4.1.12-112.16.4.el7uek.x86_64 #2 SMP Mon Mar 12 23:57:12 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@vm1-phx opc]# uptrack-upgrade
The following steps will be taken:
Install [q0j0yb6c] KAISER/KPTI enablement for Ksplice.
Install [afoeymft] Improve the interface to freeze tasks.
Install [bohqh05m] CVE-2017-17052: Denial-of-service due to incorrect reference counting in fork.
Install [eo2kqthd] Weakness when checking the keys in the XTS crypto algorithm.
Install [nq1xhhj5] CVE-2018-7492: Denial-of-service when setting options for RDS over Infiniband socket.
Install [b1gg8wsq] CVE-2017-7518: Privilege escalation in KVM emulation subsystem.
Install [lzckru19] Information leak when setting crypto key using RNG algorithm.
Install [npbx6wcr] Deadlock while queuing messages before remote node is up using RDS protocol.
Install [4fmvm11y] NULL pointer dereference when using bind system call on RDS over Infiniband socket.
Install [3eilpxc9] CVE-2017-14051: Denial-of-service in qla2xxx sysfs handler.
Install [385b9ve0] Denial-of-service in SCSI Lower Level Drivers (LLD) infrastructure.
Install [aaaqchtz] Denial-of-service when creating session in QLogic HBA Driver.
Install [d0apeo6x] CVE-2017-16646: Denial-of-service when using DiBcom DiB0700 USB DVB devices.
Install [5vzbq8ct] CVE-2017-15537: Information disclosure in FPU restoration after signal.
Install [6qv3bfyi] Kernel panic in HyperV guest-to-host transport.
Install [35rms9ga] Memory leak when closing VMware VMXNET3 ethernet device.
Install [5gdk22so] Memory corruption in IP packet redirection.
Install [6m4jnrwq] NULL pointer dereference in Hyper-V transport driver on allocation failure.
Install [owihyva9] CVE-2018-1068: Privilege escalation in bridging interface.
Install [buc7tc4q] Data-loss when writing to XFS filesystem.
Install [kef372kx] Denial-of-service when following symlink in ext4 filesystem.
Install [hb1vibbw] Denial-of-service during NFS server migration.
Install [4cqic4y6] Denial-of-service during RDS socket operation.
Install [4av6l7rd] Denial-of-service when querying ethernet statistics.
Install [8irqvffd] Denial-of-service in Hyper-V utilities driver.
Install [5ey3jcat] Denial-of-service in Broadcom NetXtreme-C/E network adapter.
Install [npapntll] Denial-of-service when configuring SR-IOV virtual function.
Install [s9mkcqwb] NULL pointer dereference during hardware reconfiguration in Cisco VIC Ethernet NIC driver.
Install [470l2f6x] Kernel panic during asynchronous event registration in LSI Logic MegaRAID SAS driver.
Install [cb7q8ihy] Kernel crash during PCI hotplug of Emulex LightPulse FibreChannel driver.
Install [tztxs6wf] Kernel crash during Emulex LightPulse FibreChannel I/O.
Install [o7drldhw] NULL pointer dereference during Emulex LightPulse FibreChannel removal.
Install [t8a1epky] Hard lockup in Emulex LightPulse FibreChannel driver.
Install [8du7f5q4] Deadlock during abort command in QLogic QLA2XXX driver.
Install [rghn5nkz] Kernel crash when creating RDS-over-IPv6 sockets.
Install [taix4vnz] CVE-2017-12146: Privilege escalation using a sysfs entry from platform driver.
Install [60u6sewd] CVE-2017-17558: Buffer overrun in USB core via integer overflow.
Install [2a1t0wfk] CVE-2017-16643: Out-of-bounds access in GTCO CalComp/InterWrite USB tablet HID parsing.
Install [tcxwzxmf] CVE-2018-1093: Denial-of-service in ext4 bitmap block validity check.
Install [3qhfzsex] CVE-2018-1000199: Denial-of-service in hardware breakpoints.
Go ahead [y/N]? y
Installing [q0j0yb6c] KAISER/KPTI enablement for Ksplice.
Installing [afoeymft] Improve the interface to freeze tasks.
Installing [bohqh05m] CVE-2017-17052: Denial-of-service due to incorrect reference counting in fork.
Installing [eo2kqthd] Weakness when checking the keys in the XTS crypto algorithm.
Installing [nq1xhhj5] CVE-2018-7492: Denial-of-service when setting options for RDS over Infiniband socket.
Installing [b1gg8wsq] CVE-2017-7518: Privilege escalation in KVM emulation subsystem.
Installing [lzckru19] Information leak when setting crypto key using RNG algorithm.
Installing [npbx6wcr] Deadlock while queuing messages before remote node is up using RDS protocol.
Installing [4fmvm11y] NULL pointer dereference when using bind system call on RDS over Infiniband socket.
Installing [3eilpxc9] CVE-2017-14051: Denial-of-service in qla2xxx sysfs handler.
Installing [385b9ve0] Denial-of-service in SCSI Lower Level Drivers (LLD) infrastructure.
Installing [aaaqchtz] Denial-of-service when creating session in QLogic HBA Driver.
Installing [d0apeo6x] CVE-2017-16646: Denial-of-service when using DiBcom DiB0700 USB DVB devices.
Installing [5vzbq8ct] CVE-2017-15537: Information disclosure in FPU restoration after signal.
Installing [6qv3bfyi] Kernel panic in HyperV guest-to-host transport.
Installing [35rms9ga] Memory leak when closing VMware VMXNET3 ethernet device.
Installing [5gdk22so] Memory corruption in IP packet redirection.
Installing [6m4jnrwq] NULL pointer dereference in Hyper-V transport driver on allocation failure.
Installing [owihyva9] CVE-2018-1068: Privilege escalation in bridging interface.
Installing [buc7tc4q] Data-loss when writing to XFS filesystem.
Installing [kef372kx] Denial-of-service when following symlink in ext4 filesystem.
Installing [hb1vibbw] Denial-of-service during NFS server migration.
Installing [4cqic4y6] Denial-of-service during RDS socket operation.
Installing [4av6l7rd] Denial-of-service when querying ethernet statistics.
Installing [8irqvffd] Denial-of-service in Hyper-V utilities driver.
Installing [5ey3jcat] Denial-of-service in Broadcom NetXtreme-C/E network adapter.
Installing [npapntll] Denial-of-service when configuring SR-IOV virtual function.
Installing [s9mkcqwb] NULL pointer dereference during hardware reconfiguration in Cisco VIC Ethernet NIC driver.
Installing [470l2f6x] Kernel panic during asynchronous event registration in LSI Logic MegaRAID SAS driver.
Installing [cb7q8ihy] Kernel crash during PCI hotplug of Emulex LightPulse FibreChannel driver.
Installing [tztxs6wf] Kernel crash during Emulex LightPulse FibreChannel I/O.
Installing [o7drldhw] NULL pointer dereference during Emulex LightPulse FibreChannel removal.
Installing [t8a1epky] Hard lockup in Emulex LightPulse FibreChannel driver.
Installing [8du7f5q4] Deadlock during abort command in QLogic QLA2XXX driver.
Installing [rghn5nkz] Kernel crash when creating RDS-over-IPv6 sockets.
Installing [taix4vnz] CVE-2017-12146: Privilege escalation using a sysfs entry from platform driver.
Installing [60u6sewd] CVE-2017-17558: Buffer overrun in USB core via integer overflow.
Installing [2a1t0wfk] CVE-2017-16643: Out-of-bounds access in GTCO CalComp/InterWrite USB tablet HID parsing.
Installing [tcxwzxmf] CVE-2018-1093: Denial-of-service in ext4 bitmap block validity check.
Installing [3qhfzsex] CVE-2018-1000199: Denial-of-service in hardware breakpoints.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-124.14.3.el7uek

Is APEX Suitable for an Enterprise Setting?

Joel Kallman - Tue, 2018-05-08 21:57
The APEX 18.1 release has significant new capabilities to consume a variety of remote data sources, from ordinary REST data feeds to ORDS-based Remote SQL.  Up until APEX 18.1, database links were the predominant way to access remote data sources, and of course, database links don't exist in the cloud.  Improvement in this area has been a core focus of ours for APEX 18.1.

A long-time Oracle tools analyst and consultant recently published a backhanded compliment to APEX.  In a blog post, he said:
"Among Oracle tools, APEX has been the old-school, monolithic holdout, together with Oracle Forms. Much modern application architecture is based on REST web services, and other Oracle tools like JET, VBCS and ADF have long had the ability to consume and/or produce REST web services."Before I go on, let's correct a few points.  Firstly, APEX has long had the ability to produce REST and consume both REST and SOAP Web Services for years.  I know, because I authored the first support for SOAP Web Services for APEX in 2002.  Also, you can't produce REST with JET.  It's a toolkit.  There is no back-end data store, no ability to "host" a REST Service.  The JET product managers themselves use RESTful Services from apex.oracle.com when doing their demonstrations of JET!  Lastly, Oracle JET was released in October 2015 and ABCS (now VBCS) was first announced in June 2015.  If that constitutes "long had the ability", then so be it.

So back to the statements - old-school, monolithic holdout.  Not modern.  In response to Morten Braten (a luminary in the APEX community), this consultant replied that "monoliths are rarely a good choice in an enterprise setting."  In response to my request for a definition of "enterprise setting", the consultant kindly authored a blog post stating why monolithic tools are bad for the enterprise.

One of his arguments against an APEX architecture is that "data must be committed to the database before it can be seen by anybody else", which I think is an odd conclusion to reach.  Last time I checked, most business applications deal with data.  And 30 years from now, the interfaces and access methods to your data will change 10 times, but you will still have...your data.  As long-time APEX expert Billy Verreynne ranted in 2005, "What does any business application deal with? DATA! That is the core. That is what drives the business. Applications come and go. Data is forever. Where does the data live? In the database. The database is the core. The database has been that since the 80's. Is still that. Focus on the core. Design for the core. Leverage the core."

I often tell people that the intersection point with APEX and many other technologies is the Oracle database - it's a wonderfully rich, very capable database and application development environment.  It's an engine with interfaces, just like the many boxes this consultant showed in his enterprise architecture diagrams.  Concurrency, transactional integrity, durability - these problems were solved in the Oracle database many years ago.  And as a bonus, you get zero latency data access for free!  Committing data to the database before it can be seen by anybody else should be considered a feature and not a deficiency.

Back to the term "enterprise setting", every enterprise, large and small, has a variety of application needs, from tactical to enterprise.  You could consider it on a scale like this:


At the bottom of the scale are completely simplistic, tactical applications.  These would be very easy to build, low in complexity, developed by one or two people, and often with a finite lifespan.  These are often opportunistic applications.  At the opposite end are enterprise applications.  These have large teams (10, 20 or more developers), a project manager, a dedicated budget, are high in complexity (and cost), and are truly mission-critical to the enterprise.

On this scale, where would APEX be an appropriate fit for a certain class of applications?  This is where I believe this consultant and I differ.  I believe APEX is ideal for the bottom 90% of this scale.  Sure, APEX can be and is used by customers for large ERP, HR and CRM systems serving thousands of end users, but the sweet spot for APEX is in the bottom 90% of this application scale.


Every enterprise has "gaps" in their corporate systems.  Oracle, "the information management company ©" has gaps.  I see it every day.  No corporate system or enterprise system can solve all problems for all business needs.  And the question is, how will you solve those problems, or will they remain unsolved?  Corporate architects prefer to have a blessed, supportable technology stack, but that stack is often times unapproachable to most developers.  Why do you think Excel has proliferated in the enterprise and continues to do so today?

The enterprise architecture that this consultant espouses is most likely perfect for legitimate enterprise applications.  But at what point on the scale is that architecture and associated technology stack unnecessarily complex or too costly for more tactical applications?  How many truly enterprise applications are there in an enterprise versus non-enterprise?  10 or 20 or 30 enterprise applications, versus hundreds if not thousands of non-enterprise applications?  I'll gladly pitch the benefits of APEX to solve the bottom 90% of this scale and the thousands of application needs which every large enterprise has.

At Oracle, I see this bottom 90% being solved with APEX every day, from applications which track hardware allocation & utilization to applications designed to manage the collateral associated with blockchain use cases to applications for submitting questions to the payroll team - the "bottom" 90% is very large, and the question is, how will you solve them?  With paper?  With a spreadsheet?  Or with a proven, scalable, low-code framework on the Oracle database that takes care of all of the important aspects of Web app development and lets you focus on the business problem to be solved?  That, my friends, is APEX.

Some ideas about Oracle Database on Docker

Yann Neuhaus - Tue, 2018-05-08 15:55

This is not a ‘best practice’ but just some ideas about building Docker images to provide an Oracle Database. I started with the images provided by Oracle: https://github.com/oracle/docker-images/tree/master/OracleDatabase/SingleInstance and this is great to validate the docker environment. Then I customized for my needs and here are the different points about this customization.

Do not send a huge context at each build attempt

I work by iteration. Sending a 3GB context each time I try a build is a no-go for me, so I quickly stopped putting the Oracle installation .zip in the context of my build. I already blogged about this.

There are several ways to avoid sending a big context, such as putting the .zip on an NFS or HTTP server and using ADD or RUN wget to get it from there. I prefer to build one small image containing this .zip that I’ll use later.

In my current directory I have the linuxx64_12201_database.zip which I explicitly send to the context with this .dockerignore:

*
!linuxx64_12201_database.zip

And I build a franck/oracle122/zip image with it:

FROM oraclelinux:7-slim
ADD linuxx64_12201_database.zip /var/tmp

When done, I’ll not have to send the context again and I will build my container from this one with another Dockerfile:

FROM franck/oracle122/zip
RUN yum -y install oracle-database-server-12cR2-preinstall unzip tar wget openssl vi && rm -rf /var/cache/yum

Do not yum install at each build attempt

Along the same lines, I build another intermediate image with the yum install above. The reason is that once I have it, I don’t need internet access anymore. I did that before boarding an 8-hour flight: I built the above Dockerfile as franck/oracle122/prereq while on airport wifi and will use it later as the base for the final Dockerfile:

.dockerignore:

*

Dockerfile:

FROM franck/oracle122/prereq
# then follow all the work which do not need large context or internet connection
...

Even if you are not on a plane, it is always good to avoid internet access. You probably had to get some doors opened in the firewall in order to pull the base image. Now that you have it, you should keep it. Or one day, the security team will close the door again and you will waste a few hours. That also means that you do not start with a :latest image but with a specific version.

Do the long steps first

The Dockerfile provided by Oracle starts with all ENV and a COPY to add all scripts into the container. The problem is that each time you want to change a script, the build has to start from this step. And then the long operations have to be done again: unzip, install,…

I have a small context here (only the scripts and configuration files) but I ADD or COPY them only when needed. For example, here, a modification in install.rsp will re-do the runInstaller step, but the unzip one will not have to be done again because the cache is re-used:

WORKDIR /var/tmp
RUN unzip linuxx64_12201_database.zip
COPY install.rsp /var/tmp
RUN ./database/runInstaller -silent -force -waitforcompletion -responsefile /var/tmp/install.rsp -ignoresysprereqs -ignoreprereq ; true

The script that will run the container is added only at the end so that I can modify and re-build quickly without re-doing the previous steps.

VOLUME ["/opt/oracle/pdbs"]
EXPOSE 1521 5500
COPY docker_cmd.sh /opt/oracle
CMD exec /opt/oracle/docker_cmd.sh ${CDB} ${PDB}

Another step that I do at the end is removing the files I do not need in the container, because that’s a guess-and-try approach and I want to build quickly. Of course, this may not be optimal for the size of all those layers, but I can reduce the final image later. The main feature of Docker build is the layers, and I use them to develop the Dockerfile without wasting my time. For the wasted storage, I use ZFS with block-level CoW, dedup and compression. For the final image, I’ll --squash it.

Remove all unnecessary files

The detail will probably go into a future blog post. But, as soon as runInstaller is done, and latest bundle patch applied, you can remove a lot of directories that I do not need anymore:

rm -rf $ORACLE_HOME/inventory $ORACLE_HOME/.patch_storage

As soon as the database has been created with DBCA, I do not need the DBCA templates anymore:

rm -rf $ORACLE_HOME/assistants

As this container will run only the instance, I can remove:

rm -rf $ORACLE_HOME/sqldeveloper $ORACLE_HOME/suptools $ORACLE_HOME/jdk

And depending on the options I will provide in the database, I remove the big ones:

rm -rf $ORACLE_HOME/apex $ORACLE_HOME/javavm $ORACLE_HOME/md

There is also a lot to remove from $ORACLE_HOME/lib (I need only a few *.so* that I can determine with strace, perf, lsof, ldd) and from $ORACLE_HOME/bin (basically, I need oracle, tnslsnr, lsnrctl, and sqlplus). Those are executables and you can strip them to reduce the size further. Definitely remove the last relink ones renamed as oracleO, …

Those are just examples; your list will depend on your version and usage, but this may reduce the image to 1GB or less. Of course, this is not supported. But the goal is to provide a small development database, not a reliable and efficient one for production.

Use ZFS for the storage driver

An Oracle Database is full of large files that are updated sparsely. Just forget about OVERLAY and OVERLAY2, which copy the whole file to the new layer when you update a single byte of a file. I do not consider BTRFS seriously. In my opinion, ZFS is the only filesystem to consider for storing Docker images with large files. Enforce deduplication and compression to overcome the inflation of layering and the ignorance of sparse files. I think that recordsize=32k is a good idea from what I’ve seen about how docker applies writes to layers. More detail in a future blog post.

Note that layering issues are not only for build efficiency but also for container run. You will see that I put some datafiles in the image. Then, at database open, some blocks are changed (at least the headers) and I do not want a full file copy to the runnable layer. Block level CoW is required for that.

Create the CDB in the container

The container is the place to store all the software, and most of CDB$ROOT and PDB$SEED is part of the software distribution. This is what takes time when creating a database (catalog, catproc,…) and I definitely refuse to give a container to a developer where he will have to wait 10 minutes at run because the database has to be created on the external volume. A ‘docker run’ must be fast. And the external volume must contain only the data that has to be persisted, not 500MB of dbms_% package code, which will be all the same for all containers from the same image.

This means that I create the CDB during the build:

RUN /opt/oracle/product/12.2.0.1/dbhome_1/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName ${CDB} -sid ${CDB} -initParams db_unique_name=${CDB},service_names=${CDB},shared_pool_size=600M,local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))' -createAsContainerDatabase true -numberOfPdbs 0 -sysPassword oracle -systemPassword oracle -emConfiguration NONE -datafileDestination /opt/oracle/oradata -recoveryAreaDestination /opt/oracle/fast_recovery_area -recoveryAreaSize 1024 -storageType FS -sampleSchema false -automaticMemoryManagement false -totalMemory 1024 -databaseType OLTP -enableArchive false -redoLogFileSize 10 -useLocalUndoForPDBs true -createListener LISTENER:1521 -useOMF true -dbOptions OMS:false,JSERVER:false,SPATIAL:false,IMEDIA:false,ORACLE_TEXT:false,SAMPLE_SCHEMA:false,CWMLITE:false,DV:false,APEX:false
RUN rm -rf /opt/oracle/product/12.2.0.1/dbhome_1/assistants/dbca/*

No PDB here, as the PDB will be created at ‘docker run’ time into the external volume. I use a template with datafiles here, but I may prefer to run the whole creation to control exactly what gets created. I may even hack some .bsq and .sql files in RDBMS admin to reduce the size. I’m in archivelog mode here because I want to be able to flash back the PDB. The container is ephemeral. If it becomes too large (archive logs, audit, …) just remove it and start another one. Or add a script to remove the old ones (those not required by guaranteed restore points).
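
For example, a guaranteed restore point on the PDB makes a test iteration easy to rewind. This is just a sketch, assuming 12.2 with local undo and a PDB called PDB1 (adjust the names to your own setup):

-- from CDB$ROOT, before the test run
create restore point before_test for pluggable database pdb1
        guarantee flashback database;

-- ... run the tests ...

-- rewind the PDB only
alter pluggable database pdb1 close immediate;
flashback pluggable database pdb1 to restore point before_test;
alter pluggable database pdb1 open resetlogs;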

Create the PDB in the external volume

The PDB is the only thing that must be persistent, and controlled by the developer. I create it with the following in my docker_cmd.sh (which is called from the Dockerfile CMD line providing CDB name and PDB name as arguments) script:

create pluggable database $2 admin user admin identified by oracle create_file_dest='/opt/oracle/pdbs';
alter pluggable database $2 open;
show pdbs

The PDB is bound to the external volume ( VOLUME [“/opt/oracle/pdbs”] ) thanks to the 12.2 CREATE_FILE_DEST clause, so that the developer can create datafiles only there. Then the ‘docker run’ is fast, as it is just a clone of PDB$SEED.

The developer will connect only to the PDB. He has nothing to do in CDB$ROOT. If there is a need to change something in CDB$ROOT, I’ll provide a new image. I may even define lockdown profiles rules to limit the PDB and define a listener where only the PDB registers.
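
A minimal sketch of such a lockdown profile (the profile name and the rules are illustrative only, not a recommendation):

-- from CDB$ROOT
create lockdown profile dev_pdb_profile;
alter lockdown profile dev_pdb_profile disable statement = ('ALTER SYSTEM');
alter lockdown profile dev_pdb_profile disable feature   = ('NETWORK_ACCESS');

-- activate it for the PDB (run inside the PDB, or per-PDB from the root)
alter system set pdb_lockdown = 'DEV_PDB_PROFILE';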

Unplug the PDB at container stop

When the developer stops the container, I want to leave something consistent in the external volume. The way to do that quickly is a PDB unplug. An unplug to a PDB archive (a .pdb zip with all datafiles) would be nicer, but that takes too much time to create. I unplug to a .xml file. This is what I do on stop (SIGTERM and SIGSTOP):

alter pluggable database all close;
column pdb_name format a30
select pdb_id,pdb_name,status from dba_pdbs;
begin
for c in (select pdb_name from dba_pdbs where pdb_id>2) loop
dbms_output.put_line('-- Unpluging '||c.pdb_name);
execute immediate 'alter pluggable database "'||c.pdb_name||'" unplug into ''/opt/oracle/pdbs/'||c.pdb_name||'.xml''';
end loop;
for c in (select pdb_name from dba_pdbs where pdb_id>2 and status='UNPLUGGED') loop
dbms_output.put_line('-- Dropping '||c.pdb_name);
execute immediate 'drop pluggable database "'||c.pdb_name||'" keep datafiles';
end loop;
end;
/
-- All pluggable databases have been unplugged:
host ls /opt/oracle/pdbs/*.xml
-- Shutdown the CDB:
shutdown immediate;
-- You can plug those PDBs by passing the volume to a new container

The script iterates on all PDBs but I see no reason to create more than one. I unplug the PDB and drop it, and then shutdown the instance. We need the unplug to be completed before the stop timeout. The container may be killed before the drop or shutdown, but as long as we have the .xml we can plug the PDB into a new container.

Plug the PDB at container re-start

I mentioned earlier that at start I create the pluggable database named by ${PDB}. But this is only when there is no /opt/oracle/pdbs/${PDB}.xml file.
If this file is found, this means that we provide a PDB that was unplugged by a previous container stop.
Actually, when the start detects this file, the following will be run:

whenever sqlerror exit failure;
create pluggable database "${PDB}" using '/opt/oracle/pdbs/${PDB}.xml';
host rm /opt/oracle/pdbs/${PDB}.xml
alter pluggable database "${PDB}" open;
select message from pdb_plug_in_violations;

Finally, because I may start a container which has a newer Release Update than the one which unplugged my PDB, I run:

$ORACLE_HOME/OPatch/datapatch

One PDB per container

My scripts process all PDBs but I think that in most cases we need to have a one-to-one relationship between the container and the PDB. The idea is to provide a database that is ready to use and where no administration/troubleshooting is required. The key here is to keep it simple. If you need to provide a large CDB with several PDBs, then Docker is not the solution to your problem. A virtual machine is a better answer for that.

SPfile? Password file?

The image build provided by Oracle stores the persistent configuration files with the database, in the external volume, through symbolic links from ?/dbs. But with my design, I don’t need to. The configuration of the instance, running in the container, is within the container. The password for SYS is in the container. Then the SPfile and password file stay in the container. The runnable image is not read-only. It is writeable. We can write here as long as the changes do not have to persist beyond the container’s end of life.

The ‘scope=spfile’ parameters that can be modified by the developer will be PDB parameters. They are persisted because they go to the .xml file at unplug. Only in case of a crash, without a clean unplug, might those parameters be stored only in the container. That’s a special case. A crashed container is not dead and just waits to be re-started.

Crash recovery needs the same container

There’s one big flaw with my CDB-in-container/PDB-in-volume design. All of the database’s datafiles must be consistent: they are checkpointed together and are all protected by the same redo stream, which is located in the container. But what’s the real problem with that? If the container is cleanly stopped, the PDB is unplugged and there is a clear separation between my external volume and the container. And both are consistent.

However, if the container crashes, the datafiles in my external volume are fuzzy and need recovery. This cannot be done without the CDB files which are on the container. This has only one consequence: the user must know that if the container was not cleanly stopped, she will need to start the PDB with the same container. I don’t think this is a real problem. I just ensure that the user gets the warning (a big README file in the external volume for example, created at start and removed at clean stop) and that the container will always be able to recover (no 100% full filesystem at start – anyway I have some log cleanups at start).

Handle all errors and signals

My startup script handles 3 situations.
The first one is the first start after creation of the container. This creates the pluggable database.
The second one is the re-start after a clean stop. This plugs the existing pluggable database.
The third one is crash-recovery after a kill. This just runs the automatic instance recovery.

Then the startup script will run in a loop, either tailing the alert.log or displaying some status info every minute.

But before all of that, the startup script must handle the termination signals.

The clean stop is handled by the following signals:

trap 'shutdown SIGINT' SIGINT
trap 'shutdown SIGTERM' SIGTERM
trap 'shutdown EXIT' 0

SIGINT is for ^C when running the container, SIGTERM is sent by ‘docker stop’, and signal 0 is when the container exits by itself. This can happen when my ‘tail -f’ on the alert log is killed, for example. All of them call my shutdown() procedure, which attempts a clean stop (unplugging the PDBs).

When the stop timeout has expired, or when we do a ‘docker kill’, there’s no time for that. The only thing I do here before a shutdown abort is an ‘alter system checkpoint’ to try to reduce the recovery needed, and display a WARNING message saying that the container that was killed must not be removed but must be re-started asap to recover the PDB in the external volume. Maybe explicitly name the container and the command to re-start.

I do that with an abort() function called by the following:

trap 'abort SIGKILL' SIGKILL

A kill -9 of the instance, or a container crash, cannot be handled; recovery is then needed, just as for the SIGKILL case. This is the reason for keeping a permanent README file near the PDB explaining that the container which crashed should be restarted as soon as possible to recover it.

Conclusion

This is not a recipe of how to build an Oracle Database Docker image, but just some ideas. The most important is to know the requirement. If you provide Oracle on Docker just because the developers want that, the solution will probably be wrong: too large, too long, inefficient, and too complex,… They will not use it and they will tell everybody that Oracle is not cool because it cannot be dockerized.
With my PDB-in-volume / CDB-in-container design, I have :

  • Docker Images with the ephemeral software, one per version (down to patches), and with different set of component installed
  • External volume (personal directory in a NFS share, or a local one) with the persistent data and settings
  • Containers running the software on the data, linking them together for the whole lifecycle of the container

Think of them as 2 USB sticks, one with the software (binaries and packages), and one with the data (user metadata and data). When plugged together on the same container, it runs one version of software with one state of data. If the container crashes, you just run it again without unplugging any of the sticks. When you are done with your test or development iteration, you stop the container and remove it. Then you have unplugged the sticks to run another combination of data and software.

 

The article Some ideas about Oracle Database on Docker appeared first on Blog dbi services.

Reminder: Upgrade Oracle Enterprise Manager 11 to EM 13c

Steven Chan - Tue, 2018-05-08 13:00

You can use the Application Management Pack plug-in to manage your E-Business Suite instances using Oracle Enterprise Manager.

Extended Support for Enterprise Manager Grid Control 11.1 ended on April 30, 2018. EM 11 is now in Sustaining Support. You will still have access to existing resources, but no new updates, fixes, or certifications will be produced for EM 11.

All Enterprise Manager 11 customers should upgrade to the latest EM release certified with EBS. As of today, the latest EM release certified with EBS is Enterprise Manager Cloud Control 13c.


Categories: APPS Blogs

Developer Cloud Service May Release Adds K8N, OCI, Code Editing and More

OTN TechBlog - Tue, 2018-05-08 11:00

Just a month after the recent release of Oracle Developer Cloud Service - which added support for pipelines, Docker, and Terraform - we are happy to announce another update to the service that adds even more options to help you extend your DevOps and CI/CD processes to support additional use cases.

Here are some highlights of the new version:

Extended build server software

You can now create build jobs and pipelines that leverage:

  • Kubernetes - use the kubectl command line to manage your Docker containers
  • OCI Command line - to automate provisioning and configuration of Oracle Compute 
  • Java 9 - for your latest Java project deployments
  • Oracle Development Tools - Oracle Forms and Oracle JDeveloper 12.2.3 are now available to automate deployment of Forms and ADF apps

 

Build Server Software Options

SSH Connection in Build

You can now define an SSH connection as part of your build configuration, allowing you to securely connect to Oracle Cloud Services and execute shell scripts on them.

In-Browser Code Editing and Versioning

A new "pencil" icon let's you edit code in your private git repositories hosted in Developer Cloud Service directly in your browser. Once you edited the code you can commit the changes to your branch directly providing commit messages.

Code editing in the browser

PagerDuty Webhook

Continuing our principle of keeping the environment open, we added new webhook support that allows you to send events to the popular PagerDuty solution.

Increased Reusability

We are making it easier to replicate things that already work for your team. For example, you can now create a new project based on an existing project you exported. You can copy an agile board over to a new one. And if you have created a useful issue search, you can share it with others on your team.

There are many other features that will improve your daily work. Have a look at the What's New in DevCS document for more information.

Happy development!

Oracle Marketing Cloud Transforms Sales Process with Powerful New Data Integrations

Oracle Press Releases - Tue, 2018-05-08 07:00
Press Release
Oracle Marketing Cloud Transforms Sales Process with Powerful New Data Integrations New Oracle Eloqua integrations with 6sense, Demandbase, LookBookHQ and Mintigo streamline and optimize sales process

REDWOOD SHORES, Calif.—May 8, 2018

Helping sales teams engage customers and accelerate and close more deals, Oracle today announced new product integrations with the Oracle Marketing Cloud. The new integrations between Oracle Eloqua and 6sense, Demandbase, LookBookHQ and Mintigo deliver powerful data-driven insights that empower sales teams to identify high-potential buyers, drive engagement and close deals faster.

A new integration between Oracle Eloqua and 6sense helps sales teams increase pipeline conversion. The product integration enables Oracle Eloqua customers to leverage AI-powered in-market predictions from the 6sense Demand Orchestration Platform and achieve a unified view of account activity that includes the prospect’s anonymous website intent, competitor research and other relevant need-based intent.

“Understanding as much as possible of an account’s behavior provides a competitive edge to sales organizations to convert pipeline to revenue,” said Amar Doshi, VP of Product, 6sense. “Many of our customers that use Oracle’s Eloqua Profiler and rely on 6sense AI have asked for an integrated experience to support their account-based marketing and sales efforts. We’re excited to be a part of this customer-driven initiative for B2B companies.”

A new integration between Oracle Eloqua and Demandbase helps sales teams quickly and easily access the insights needed to accelerate and close deals. The new product integration improves the sales process by delivering AI-enabled intent data and insights from Demandbase’s Conversion Solution within Oracle Eloqua so that sales teams can contact the right accounts with the right messages at the right time to move deals forward.  

“Marketers need to work hand-in-hand with sales teams across the entire buying cycle,” said Dom Lindars, VP of product, Demandbase. “We’re excited to be part of the launch for Oracle’s Sales Tools Extensions with our Conversion Solution, which will allow sales teams to truly understand the behavior and needs of their target accounts and increase their close rates.”

A new integration between Oracle Eloqua and LookBookHQ gives sales teams rich, actionable insight into buyer education, based on how leads and accounts are engaging with content. The new product integration provides a full summary of content engagement "after the click" (assets viewed in session, time spent, account rollup) within a contact’s profile in Oracle Eloqua and allows for the exploration of other contacts at the same account.

“LookBookHQ's ability to show the depth and quality of leads' engagement, coupled with the ability to see total engagement at account level over time, equips sales with a powerful extension to Oracle Eloqua Profiler,” says Stephen Streich, VP of product and engineering, LookBookHQ. “And this in turn provides better alignment between sales and marketing teams on buyer readiness and content performance.”

A new integration between Oracle Eloqua Profiler and Mintigo enables sales teams to learn, discover and engage their prospective buyers in personalized ways at scale. The new product seamlessly integrates Mintigo’s AI powered prospecting application, Sales Coach 360, with Oracle Eloqua Sales Tools.

“Prospecting is hard and the current CRM solutions do very little to enable sellers to engage their prospective buyers intelligently,” said Atul Kumar, chief product officer, Mintigo. “Understanding who is in-market to buy, why, when and how you should engage them is critical in achieving positive outcomes. This is what Oracle Eloqua Profiler and Mintigo’s Sales Coach 360 delivers to sales.”

Oracle Marketing Cloud is part of Oracle Customer Experience (CX) Cloud Suite, which empowers organizations to take a smarter approach to customer experience management and business transformation initiatives. By providing a trusted business platform that connects data, experiences and outcomes, Oracle CX Cloud Suite helps customers reduce IT complexity, deliver innovative customer experiences and achieve predictable and tangible business results.

For additional information about Oracle CX, follow @OracleCX on Twitter, LinkedIn and Facebook or visit SmarterCX.com.

Contact Info
Kimberly Guillon
Oracle
209.601.9152
kim.guillon@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


20 Indexes

Jonathan Lewis - Tue, 2018-05-08 06:53

If your system had to do a lot of distributed queries there’s a limit on indexes that might affect performance: when deriving an execution plan for a distributed query the optimizer will consider a maximum of twenty indexes on each remote table. If you have any tables with a ridiculous number of indexes (various 3rd party accounting and CRM systems spring to mind) and if you drop and recreate indexes on those tables in the wrong order then execution plans may change for the simple reason that the optimizer is considering a different subset of the available indexes.

Although the limit is stated in the manuals (a few lines into a section on managing statement transparency) there is no indication about which 20 indexes the optimizer is likely to choose – a couple of experiments, with tracing enabled and shared pool flushes, gives a fairly strong indication that it’s the last 20 indexes created (or, to be more explicit, the ones with the 20 highest object_id values).
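
If you want to check which indexes would fall inside that window for one of your own tables, a query along these lines (just a sketch, run as the owner of the table, here using the t2 table from the example below) lists the 20 indexes with the highest object_id values:

select  index_name, object_id, created
from    (
        select  i.index_name, o.object_id, o.created
        from    user_indexes i
        join    user_objects o
                on  o.object_name = i.index_name
                and o.object_type = 'INDEX'
        where   i.table_name = 'T2'
        order by o.object_id desc
        )
where   rownum <= 20
order by object_id;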

Here’s a little code to help demonstrate the point – first just the table and index creation


rem
rem	Script:		indexes_20.sql
rem	Author:		Jonathan Lewis
rem	Dated:		Apr 2008
rem
rem	Last tested 
rem		12.2.0.1
rem

create table t1
as
with generator as (
	select	--+ materialize
		rownum 	id
	from	all_objects 
	where	rownum <= 3000 -- > comment to avoid WordPress format issue
)
select
	mod(rownum,trunc(5000/1))	n01,
	mod(rownum,trunc(5000/2))	n02,
	mod(rownum,trunc(5000/3))	n03,
	mod(rownum,trunc(5000/4))	n04,
	mod(rownum,trunc(5000/5))	n05,
	mod(rownum,trunc(5000/6))	n06,
	mod(rownum,trunc(5000/7))	n07,
	mod(rownum,trunc(5000/8))	n08,
	mod(rownum,trunc(5000/9))	n09,
	mod(rownum,trunc(5000/10))	n10,
	mod(rownum,trunc(5000/11))	n11,
	mod(rownum,trunc(5000/12))	n12,
	mod(rownum,trunc(5000/13))	n13,
	mod(rownum,trunc(5000/14))	n14,
	mod(rownum,trunc(5000/15))	n15,
	mod(rownum,trunc(5000/16))	n16,
	mod(rownum,trunc(5000/17))	n17,
	mod(rownum,trunc(5000/18))	n18,
	mod(rownum,trunc(5000/19))	n19,
	mod(rownum,trunc(5000/20))	n20,
	mod(rownum,trunc(5000/21))	n21,
	mod(rownum,trunc(5000/22))	n22,
	mod(rownum,trunc(5000/23))	n23,
	mod(rownum,trunc(5000/24))	n24,
	rownum				id,
	rpad('x',40)			padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e5 -- > comment to avoid WordPress format issue
;


alter table t1 add constraint t1_pk primary key(id);

create table t2
as
with generator as (
	select	--+ materialize
		rownum 	id
	from	all_objects 
	where	rownum <= 3000 -- > comment to avoid WordPress format issue
)
select
	mod(rownum,trunc(5000/1))	n01,
	mod(rownum,trunc(5000/2))	n02,
	mod(rownum,trunc(5000/3))	n03,
	mod(rownum,trunc(5000/4))	n04,
	mod(rownum,trunc(5000/5))	n05,
	mod(rownum,trunc(5000/6))	n06,
	mod(rownum,trunc(5000/7))	n07,
	mod(rownum,trunc(5000/8))	n08,
	mod(rownum,trunc(5000/9))	n09,
	mod(rownum,trunc(5000/10))	n10,
	mod(rownum,trunc(5000/11))	n11,
	mod(rownum,trunc(5000/12))	n12,
	mod(rownum,trunc(5000/13))	n13,
	mod(rownum,trunc(5000/14))	n14,
	mod(rownum,trunc(5000/15))	n15,
	mod(rownum,trunc(5000/16))	n16,
	mod(rownum,trunc(5000/17))	n17,
	mod(rownum,trunc(5000/18))	n18,
	mod(rownum,trunc(5000/19))	n19,
	mod(rownum,trunc(5000/20))	n20,
	mod(rownum,trunc(5000/21))	n21,
	mod(rownum,trunc(5000/22))	n22,
	mod(rownum,trunc(5000/23))	n23,
	mod(rownum,trunc(5000/24))	n24,
	rownum				id,
	rpad('x',40)			padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e5 -- > comment to avoid WordPress format issue
;

create index t2_a21 on t2(n21);
create index t2_a22 on t2(n22);
create index t2_a23 on t2(n23);
create index t2_a24 on t2(n24);

create index t2_z01 on t2(n01);
create index t2_z02 on t2(n02);
create index t2_z03 on t2(n03);
create index t2_z04 on t2(n04);
create index t2_z05 on t2(n05);
create index t2_z06 on t2(n06);
create index t2_z07 on t2(n07);
create index t2_z08 on t2(n08);
create index t2_z09 on t2(n09);
create index t2_z10 on t2(n10);

create index t2_i11 on t2(n11);
create index t2_i12 on t2(n12);
create index t2_i13 on t2(n13);
create index t2_i14 on t2(n14);
create index t2_i15 on t2(n15);
create index t2_i16 on t2(n16);
create index t2_i17 on t2(n17);
create index t2_i18 on t2(n18);
create index t2_i19 on t2(n19);
create index t2_i20 on t2(n20);

alter index t2_a21 rebuild;
alter index t2_a22 rebuild;
alter index t2_a23 rebuild;
alter index t2_a24 rebuild;
 

begin
        dbms_stats.gather_table_stats(
                ownname 	 => user,
		tabname		 =>'t1',
		method_opt 	 => 'for all columns size 1',
		cascade		 => true
	);

	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'t2',
		method_opt 	 => 'for all columns size 1',
		cascade		 => true
	);

end;
/

I’m going to use a loopback database link to join “local” table t1 to “remote” table t2 on all 24 of the nXX columns. I’ve created indexes on all the columns, messing around with index names, order of creation, and rebuilding, to cover possible selection criteria such as alphabetical order, ordering by data_object_id (rather than object_id), even ordering by name of indexed columns(!).

Now the code to run a test:


define m_target=orcl@loopback

alter session set events '10053 trace name context forever';
set serveroutput off

select
	t1.id,
	t2.id,
	t2.padding
from
	t1			t1,
	t2@&m_target		t2
where
	t1.id = 99
and	t2.n01 = t1.n01
and	t2.n02 = t1.n02
and	t2.n03 = t1.n03
and	t2.n04 = t1.n04
and	t2.n05 = t1.n05
and	t2.n06 = t1.n06
and	t2.n07 = t1.n07
and	t2.n08 = t1.n08
and	t2.n09 = t1.n09
and	t2.n10 = t1.n10
/*			*/
and	t2.n11 = t1.n11
and	t2.n12 = t1.n12
and	t2.n13 = t1.n13
and	t2.n14 = t1.n14
and	t2.n15 = t1.n15
and	t2.n16 = t1.n16
and	t2.n17 = t1.n17
and	t2.n18 = t1.n18
and	t2.n19 = t1.n19
and	t2.n20 = t1.n20
/*			*/
and	t2.n21 = t1.n21
and	t2.n22 = t1.n22
and	t2.n23 = t1.n23
and	t2.n24 = t1.n24
;

select * from table(dbms_xplan.display_cursor(null,null,'outline'));

alter session set events '10053 trace name context off';

I’ve used a substitution variable for the name of the database link – it’s a convenience I have with all my distributed tests, a list of possible defines at the top of the script depending on which database I happen to be using at the time – then enabled the optimizer (10053) trace, set serveroutput off so that I can pull the execution plan from memory most easily, then executed the query.

Here’s the execution plan – including the Remote section and Outline.


-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   270 (100)|          |        |      |
|   1 |  NESTED LOOPS      |      |     1 |   243 |   270   (6)| 00:00:01 |        |      |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |   101 |   268   (6)| 00:00:01 |        |      |
|   3 |   REMOTE           | T2   |     1 |   142 |     2   (0)| 00:00:01 | ORCL@~ | R->S |
-------------------------------------------------------------------------------------------


Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      FULL(@"SEL$1" "T2"@"SEL$1")
      LEADING(@"SEL$1" "T1"@"SEL$1" "T2"@"SEL$1")
      USE_NL(@"SEL$1" "T2"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."ID"=99)

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "N01","N02","N03","N04","N05","N06","N07","N08","N09","N10","N11","N1
       2","N13","N14","N15","N16","N17","N18","N19","N20","N21","N22","N23","N24","ID","PA
       DDING" FROM "T2" "T2" WHERE "N01"=:1 AND "N02"=:2 AND "N03"=:3 AND "N04"=:4 AND
       "N05"=:5 AND "N06"=:6 AND "N07"=:7 AND "N08"=:8 AND "N09"=:9 AND "N10"=:10 AND
       "N11"=:11 AND "N12"=:12 AND "N13"=:13 AND "N14"=:14 AND "N15"=:15 AND "N16"=:16
       AND "N17"=:17 AND "N18"=:18 AND "N19"=:19 AND "N20"=:20 AND "N21"=:21 AND
       "N22"=:22 AND "N23"=:23 AND "N24"=:24 (accessing 'ORCL@LOOPBACK' )

There’s a little oddity with the plan – specifically in the Outline: there’s a “full(t2)” hint which is clearly inappropriate and isn’t consistent with the cost of 2 for the REMOTE operation reported in the body of the plan. Fortunately the SQL forwarded to the “remote” database doesn’t include this hint and (you’ll have to take my word for it) used an indexed access path into the table.

Where, though, is the indication that Oracle considered only 20 indexes? It’s in the 10053 trace file under the “Base Statistical Information” section in the subsection headed “Index Stats”:


Index Stats::
  Index: 0  Col#: 20    (NOT ANALYZED)
  LVLS: 1  #LB: 204  #DK: 250  LB/K: 1.00  DB/K: 400.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 19    (NOT ANALYZED)
  LVLS: 1  #LB: 204  #DK: 263  LB/K: 1.00  DB/K: 380.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 18    (NOT ANALYZED)
  LVLS: 1  #LB: 205  #DK: 277  LB/K: 1.00  DB/K: 361.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 17    (NOT ANALYZED)
  LVLS: 1  #LB: 205  #DK: 294  LB/K: 1.00  DB/K: 340.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 16    (NOT ANALYZED)
  LVLS: 1  #LB: 205  #DK: 312  LB/K: 1.00  DB/K: 320.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 15    (NOT ANALYZED)
  LVLS: 1  #LB: 205  #DK: 333  LB/K: 1.00  DB/K: 300.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 14    (NOT ANALYZED)
  LVLS: 1  #LB: 206  #DK: 357  LB/K: 1.00  DB/K: 280.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 13    (NOT ANALYZED)
  LVLS: 1  #LB: 206  #DK: 384  LB/K: 1.00  DB/K: 260.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 12    (NOT ANALYZED)
  LVLS: 1  #LB: 206  #DK: 416  LB/K: 1.00  DB/K: 240.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 11    (NOT ANALYZED)
  LVLS: 1  #LB: 206  #DK: 454  LB/K: 1.00  DB/K: 220.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 10    (NOT ANALYZED)
  LVLS: 1  #LB: 207  #DK: 500  LB/K: 1.00  DB/K: 200.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 9    (NOT ANALYZED)
  LVLS: 1  #LB: 207  #DK: 555  LB/K: 1.00  DB/K: 180.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 8    (NOT ANALYZED)
  LVLS: 1  #LB: 207  #DK: 625  LB/K: 1.00  DB/K: 160.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 7    (NOT ANALYZED)
  LVLS: 1  #LB: 208  #DK: 714  LB/K: 1.00  DB/K: 140.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 6    (NOT ANALYZED)
  LVLS: 1  #LB: 208  #DK: 833  LB/K: 1.00  DB/K: 120.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 5    (NOT ANALYZED)
  LVLS: 1  #LB: 208  #DK: 1000  LB/K: 1.00  DB/K: 100.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 4    (NOT ANALYZED)
  LVLS: 1  #LB: 208  #DK: 1250  LB/K: 1.00  DB/K: 80.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 3    (NOT ANALYZED)
  LVLS: 1  #LB: 209  #DK: 1666  LB/K: 1.00  DB/K: 60.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 2    (NOT ANALYZED)
  LVLS: 1  #LB: 209  #DK: 2500  LB/K: 1.00  DB/K: 40.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 1    (NOT ANALYZED)
  LVLS: 1  #LB: 209  #DK: 5000  LB/K: 1.00  DB/K: 20.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 

We have 20 indexes listed, and while they’re all called “Index 0” (and reported as “Not Analyzed”) we can see from their column definitions that they are (in reverse order) the indexes on columns n01 through to n20 – i.e. the last 20 indexes created. The optimizer has created its plan based only on its knowledge of these indexes.
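If you want to cross-check which 20 indexes those are, a query like the following, run on the remote database, lists the 20 indexes on t2 with the highest object_ids – which is exactly the set the local optimizer gets to hear about. This is a sketch: the “fetch first” clause needs 12c or later, and the index names reported will be whatever you used when you created them.

select  ind.index_name, obj.object_id
from    user_indexes  ind,
        user_objects  obj
where   ind.table_name  = 'T2'
and     obj.object_name = ind.index_name
and     obj.object_type = 'INDEX'
order by
        obj.object_id desc
fetch first 20 rows only
;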

We might ask whether this matters or not – after all when the remote SQL gets to the remote database the remote optimizer is going to (re-)optimize it anyway and do the best it can with it, so at run-time Oracle could still end up using remote indexes that the local optimizer didn’t know about. So let’s get nasty and give the local optimizer a problem:


create index t2_id on t2(id);

select
        t1.id,
        t2.id,
        t2.padding
from
        t1                      t1,
        t2@&m_target            t2
where
        t1.id = 99
and     t2.n01 = t1.n01
;

I’ve created one more index on t2, which means the local optimizer is going to “forget” about the index that was previously 20th on the “most recently created” list for t2. That’s the index on (n01), which would have been a very good index for this query. If this query were run locally the optimizer would do a nested loop from t1 to t2 using the index on (n01) – but the local optimizer no longer knows about that index, so we get the following plan:


-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   538 (100)|          |        |      |
|*  1 |  HASH JOIN         |      |    20 |  1140 |   538   (7)| 00:00:01 |        |      |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |     9 |   268   (6)| 00:00:01 |        |      |
|   3 |   REMOTE           | T2   |   100K|  4687K|   268   (6)| 00:00:01 | ORCL@~ | R->S |
-------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      FULL(@"SEL$1" "T2"@"SEL$1")
      LEADING(@"SEL$1" "T1"@"SEL$1" "T2"@"SEL$1")
      USE_HASH(@"SEL$1" "T2"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N01"="T1"."N01")
   2 - filter("T1"."ID"=99)

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "N01","ID","PADDING" FROM "T2" "T2" (accessing 'ORCL@LOOPBACK' )

Oracle is going to do a hash join and apply the join predicate late. In this case the text forwarded to the remote database carries no filter predicate at all, so the remote site has no choice but to scan the whole of t2 and ship every row back for the local hash join. Although the remote optimizer can sometimes rescue us from a mistake made by the local optimizer and use indexes that the local optimizer doesn’t know about, there are times when the remote SQL generated by the local optimizer is so rigidly associated with the expected plan that there’s no way the remote optimizer can work around the assumptions made by the local optimizer.

So when you create (or drop and recreate) an index, it’s just possible that a distributed plan will have to change because the local optimizer is no longer aware of an index that exists at the remote site.

tl;dr

Be very cautious about dropping and recreating indexes if the table in question

  1. has more than 20 indexes
  2. and is used at the remote end of a distributed execution plan

The optimizer will consider only 20 of the indexes on the table, choosing the ones with the highest object_ids. If you drop and recreate an index then it gets a new (highest) object_id and a plan may change because the index that Oracle was previously using is no longer in the top 20.
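As a minimal sketch of the mechanism (the index name t2_n01 is an assumption – use whatever name the index on (n01) actually has): a drop and recreate gives the index a brand new, highest, object_id, so it jumps back to the top of the list and pushes whichever index previously held the 20th slot out of the local optimizer’s view, and the next hard parse of a distributed statement referencing t2 may therefore produce a different plan.

drop index t2_n01;                      -- assumed name for the index on (n01)
create index t2_n01 on t2(n01);         -- same definition, brand new (highest) object_id

rem     re-run the object_id query shown after the Index Stats section above:
rem     the recreated index now heads the list and a different index has
rem     dropped out of the top 20, so dependent distributed plans may change.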

Oracle 18c Security utl_file_dir and schema no authentication

Pete Finnigan - Tue, 2018-05-08 06:26
I have managed to build an 18c database this weekend to test and learn on. I have not had a massive amount of time to look into 18c yet, but I will do so over the coming days and weeks. The new features....[Read More]

Posted by Pete On 07/05/18 At 09:10 PM

Categories: Security Blogs

18c Scalable Sequences Part III (Too Much Rope)

Richard Foote - Tue, 2018-05-08 02:06
I previously looked in Part I and Part II at how Scalable Sequences, officially released in 18c, can reduce index contention issues by automatically assigning a 6-digit prefix to the sequence value based on the instance ID and session ID of the session. We need to be careful and consider this 6-digit prefix if […]
Categories: DBA Blogs

All About Oracle Autonomous Data Warehouse Cloud

Gerger Consulting - Tue, 2018-05-08 00:06
Oracle has just released their Autonomous Data Warehouse Cloud Service (ADW). There is no one better to tell you all about it than Senior Principal Product Manager Yasin Baskan, who has been leading the project.


Attend the free webinar by ADW Senior Principal Product Manager Yasin Baskan on May 22nd and learn all about the Autonomous DWH Cloud from the people who created it.


About the Webinar

Oracle’s new cloud service, Autonomous Data Warehouse Cloud, is now generally available. This is a fully-managed service that automates many day-to-day operations for DBAs and DWH developers.

We will talk about the functionality Autonomous Data Warehouse provides and go into details about the technical aspects of the service.

You will be able to learn how you can provision new databases, load data, and run queries. We will walk through the automated functionality like database configuration, security, backup and recovery, and performance. This session will provide you the information you need to start trying out the service.

Sign up for the webinar.

About the Presenter

Yasin has been with Oracle for nine years in different roles such as pre-sales and product management. Before working for Oracle, he had been an Oracle DBA in the financial industry starting with Oracle Database version 7.3. He has 20 years of IT and Oracle Database experience. Since 2014 he has been part of the data warehousing product management team at Oracle HQ.

In his current role, he is leading the Autonomous Data Warehouse Cloud as the Senior Principal Product Manager.

Sign up for the webinar.
Categories: Development

EBS 12 Certified with Safari 11 and macOS High Sierra 10.13

Steven Chan - Mon, 2018-05-07 17:03

Oracle E-Business Suite Release 12 (12.1.3, 12.2.4 or higher) is now certified with Safari 11 on macOS High Sierra 10.13 with the following desktop configuration:

  • macOS High Sierra version 10.13.3 or higher
  • Safari version 11 (11.0.3 or higher)
  • Oracle JRE 8 plugin (1.8.0_171 or higher)

Users should review all relevant information along with other specific patching requirements and known limitations posted here:

Related Articles

Categories: APPS Blogs

A New Oracle Autonomous Visual Builder Cloud Service - Visual and Coding Combined

OTN TechBlog - Mon, 2018-05-07 14:39

We are happy to announce the availability of Oracle Autonomous Visual Builder Cloud Service (VBCS) - Oracle's visual low-code development platform for JavaScript based applications with built-in autonomous capabilities.

Over the past couple of years, the visual development approach of VBCS has made it a very attractive solution for citizen developers, who leveraged the no-code nature of the platform to build their custom applications.

Many professional developers also expressed interest in the visual development experience they saw, but they were looking for additional capabilities.

Specifically, developers wanted direct access to the code that the visual tools created, so they could change it and enhance it with their own custom code to achieve richer behaviors.

With the new VBCS version we are addressing these demands by adding direct access to manipulate the code, while keeping the low-code characteristics of VBCS.

Visual and Code Based Development Combined

Just like in previous versions, constructing the UI is done through a visual WYSIWYG layout editor. Existing VBCS users will notice that they now have access to a much richer set of UI components in the component palette. In fact, they now have access to all of the components offered by Oracle JET (Oracle's open-source JavaScript Extension Toolkit). In addition, you can add more components to the palette using the Web Components standard-based Oracle JET composite components architecture (CCA).

The thing to note about the visual editor is the new "Code" button at the top right: clicking it gives professional developers direct access to the HTML code that makes up the page layout. They'll be happy to discover that the code is pure HTML/JavaScript/CSS, which lets them leverage their existing expertise to further enhance and customize it. Developers can manipulate that code directly through the smart code editor, leveraging features such as code insight, syntax highlighting, doc access, and reformatting, all in their browser.

The visual development approach is not limited to page layouts. We extend it also to the way you can define business logic. Defining the flow of your logic is done through our new action flow editor, with a collection of operations that you can define in a declarative way and the ability to invoke your own JavaScript code for unique functionality.

Now that developers have direct access to the code, we also added integration with Git, leveraging the private Git repositories provided through Oracle Developer Cloud Service (DevCS). Teams can now leverage the full set of Agile methodology capabilities of DevCS when working on VBCS applications, including issue tracking, version management, agile planning and code review processes.

Mobile and Web Development Unified

With the new version of VBCS we further integrated the development experience across both web browser-based and on-device mobile applications. 

In the same project you can create both types of applications, leveraging the same development approach, application architecture, UI components, and access to custom business objects and external REST services.

Once you are done developing your mobile application, we'll package it for you as an on-device mobile app that you install, test, and run on your devices - leveraging the native look and feel provided by Oracle JET for the various mobile platforms.

Standard-Based Data Openness

With the new version you can now hook up VBCS to any REST data source with a few button clicks, leveraging a declarative approach to consuming external REST sources in your application. VBCS is able to parse standard Swagger-based service descriptors for easy consumption. Even if you don't have a detailed structure description for a service, the declarative dialog in VBCS makes it easy to define access to any service, including security settings, header and URL parameters, and more. VBCS is smart enough to parse the structure returned from the service and create variables that allow you to access the data in your UI with ease.

Let's not forget that VBCS also lets you define your own custom reusable business services. VBCS will create the database objects to store the information in these objects, and will provide you with a powerful secure set of REST services to allow you to access these objects from both your VBCS and external applications.

Visual Builder Cloud Service Goes Autonomous

Today’s Visual Builder Cloud Service release also has built-in autonomous capabilities to automate and eliminate repetitive tasks so you can instead focus on app design and development.

Configuring and provisioning your service is as easy as a single button click. All you need to do is tell us the name you want for your server, and with a click of a button everything is configured for you. You don't need to install and configure your underlying platform - the service automatically provisions a database, an app hosting server, and your full development platform for you.

One click install

The new autonomous VBCS eliminates any manual tasks for the maintenance of your development and deployment platforms. Once your service is provisioned we'll take care of things like patching, updates, and backups for you.

Furthermore, autonomous VBCS automatically maintains your mobile app publishing infrastructure. You just need to click a button and we'll publish your mobile app as iOS or Android packages, and host your web app on the same scalable backend services that host your data and your applications.

But Wait, There's More

There are many other new features you'll find in the new version of Oracle Visual Builder Cloud Service. Whether you are a seasoned JavaScript expert looking to accelerate your delivery, a developer taking your first steps in the wild world of JavaScript development, or a citizen developer looking to build your business application - Visual Builder has something for you.

So take it for a spin - we are sure you are going to enjoy the experience.

For more information and to get your free trial visit us at http://cloud.oracle.com/visual-builder

